` | receiver balance before | receiver balance after |
+
+Numeric values are normalized into integer witness units. A value is
+rejected for FastPQ batching if it cannot be represented as a non-negative
+`u64` at the selected decimal scale.
+
+## Public Inputs
+
+Every FastPQ transition batch carries public inputs that bind the proof to
+the block and execution context:
+
+| Input | Meaning |
+| ------------- | --------------------------------------------------------------- |
+| `dsid` | Dataspace identifier encoded as little-endian bytes |
+| `slot` | Block creation time converted to nanoseconds |
+| `old_root` | Parent state root derived from the execution witness |
+| `new_root` | Post-state root derived from the execution witness |
+| `perm_root` | Poseidon commitment over active role permissions |
+| `tx_set_hash` | Hash over sorted transaction and time-trigger entrypoint hashes |
+
+The host uses `fastpq-lane-balanced` as the canonical parameter set for
+these batches.
+
+## Mathematical Model
+
+This section describes the arithmetic implemented by the current Rust
+prover and verifier. All field operations below are over the Goldilocks
+prime field:
+
+$$
+F = \mathbb{F}_p,\qquad p = 2^{64} - 2^{32} + 1
+$$
+
+FastPQ uses Poseidon2 over `F` for field commitments. The sponge has width
+`t = 3`, rate `r = 2`, and capacity `1`. The hash absorbs field elements in
+rate-2 blocks and appends a single field element `1` before the final
+permutation:
+
+$$
+H_F(x_0,\ldots,x_{m-1}) =
+\operatorname{Poseidon2}_F(x_0,\ldots,x_{m-1},1)
+$$
+
+Byte strings are packed into 7-byte little-endian limbs so every limb is
+strictly below `p`:
+
+$$
+\operatorname{pack}(b)_j =
+\sum_{i=0}^{6} b_{7j+i}2^{8i},\qquad 0 \leq \operatorname{pack}(b)_j < p
+$$
+
+Domain-separated field hashes are represented as:
+
+$$
+H_D(m) =
+H_F(
+|\operatorname{pack}(D)|,\operatorname{pack}(D),
+|\operatorname{pack}(m)|,\operatorname{pack}(m)
+)
+$$
+
+For hashes that start from byte-domain digests, FastPQ maps the first eight
+little-endian bytes into the field:
+
+$$
+\operatorname{seed}(D)=
+\operatorname{le64}(\operatorname{Hash}(D)[0..8])\bmod p
+$$
+
+Here `Hash` means Iroha's `iroha_crypto::Hash::new`, a 32-byte Blake2bVar
+digest, unless a formula explicitly names Poseidon2 or SHA-256.
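+
+As a rough illustration, here is a minimal Rust sketch of the 7-byte
+packing and the `seed` mapping; the constant and helper names are ours,
+not the crate's API:
+
+```rust
+// Goldilocks prime p = 2^64 - 2^32 + 1.
+const P: u64 = 0xFFFF_FFFF_0000_0001;
+
+/// Pack a byte string into 7-byte little-endian limbs; 7 bytes < 2^56 < p.
+fn pack(bytes: &[u8]) -> Vec<u64> {
+    bytes
+        .chunks(7)
+        .map(|chunk| {
+            chunk
+                .iter()
+                .enumerate()
+                .fold(0u64, |acc, (i, b)| acc | (*b as u64) << (8 * i))
+        })
+        .collect()
+}
+
+/// Map the first eight little-endian digest bytes into the field.
+fn seed(digest: &[u8; 32]) -> u64 {
+    u64::from_le_bytes(digest[0..8].try_into().unwrap()) % P
+}
+
+fn main() {
+    let limbs = pack(b"fastpq:v1:ordering");
+    assert!(limbs.iter().all(|&l| l < P));
+    println!("{limbs:?} {}", seed(&[0xAB; 32]));
+}
+```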
+
+### Field Arithmetic
+
+The Rust code represents field elements as canonical `u64` values in
+`[0,p)`. Addition and subtraction are:
+
+$$
+a +_F b = (a+b)\bmod p
+$$
+
+$$
+a -_F b = (a-b)\bmod p
+$$
+
+Multiplication first computes the 128-bit product:
+
+$$
+a\cdot b = \operatorname{lo} + 2^{64}\operatorname{hi}
+$$
+
+Goldilocks reduction then uses the identity:
+
+$$
+2^{64}\equiv2^{32}-1\pmod p
+$$
+
+If:
+
+$$
+\operatorname{hi}=\operatorname{hi}_{lo}+2^{32}\operatorname{hi}_{hi}
+$$
+
+then the reducer computes:
+
+$$
+\operatorname{lo}
++2^{32}\operatorname{hi}_{lo}
+-\operatorname{hi}_{lo}
+-\operatorname{hi}_{hi}
+\pmod p
+$$
+
+The implementation conditionally adds or subtracts `p` until the result is
+canonical. Signed integers, such as balance deltas, are embedded by:
+
+$$
+\operatorname{field}(x)=x\bmod p,\qquad 0\leq\operatorname{field}(x)<p
+$$
+
+A decimal value with mantissa `m` and scale `q`, normalized to the batch
+scale `s`, requires `m >= 0` and `q <= s`. Its FastPQ witness value is:
+
+$$
+\operatorname{norm}_s(m,q)=m\cdot10^{s-q}
+$$
+
+The normalized result must fit in `u64`.
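+
+A minimal Rust sketch of the multiplication reducer, signed embedding, and
+decimal normalization described above; helper names are illustrative, not
+the crate's API:
+
+```rust
+const P: u64 = 0xFFFF_FFFF_0000_0001;
+const P_U128: u128 = P as u128;
+
+/// Multiply via 2^64 ≡ 2^32 - 1 (mod p), splitting hi into 32-bit halves.
+fn mul(a: u64, b: u64) -> u64 {
+    let wide = (a as u128) * (b as u128);
+    let lo = wide as u64 as u128;
+    let hi = (wide >> 64) as u64;
+    let (hi_lo, hi_hi) = ((hi & 0xFFFF_FFFF) as u128, (hi >> 32) as u128);
+    // lo + 2^32*hi_lo - hi_lo - hi_hi, kept non-negative with a 2p offset.
+    ((lo + (hi_lo << 32) + 2 * P_U128 - hi_lo - hi_hi) % P_U128) as u64
+}
+
+/// Embed a signed delta: field(x) = x mod p.
+fn field(x: i64) -> u64 {
+    if x >= 0 { (x as u64) % P } else { P - (x.unsigned_abs() % P) }
+}
+
+/// norm_s(m, q) = m * 10^(s - q); requires q <= s and a u64-sized result.
+fn norm(mantissa: u64, q: u32, s: u32) -> Option<u64> {
+    mantissa.checked_mul(10u64.checked_pow(s.checked_sub(q)?)?)
+}
+
+fn main() {
+    assert_eq!(mul(1u64 << 63, 2), u32::MAX as u64); // 2^64 mod p = 2^32 - 1
+    assert_eq!(field(-1), P - 1);
+    assert_eq!(norm(1234, 2, 4), Some(123_400));
+}
+```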
+
+### Canonical Ordering
+
+Before trace construction, the batch is sorted by transition key, operation
+rank, and original insertion index:
+
+$$
+r(\operatorname{Transfer})=0,\quad
+r(\operatorname{Mint})=1,\quad
+r(\operatorname{Burn})=2,\quad
+r(\operatorname{RoleGrant})=3,\quad
+r(\operatorname{RoleRevoke})=4,\quad
+r(\operatorname{MetaSet})=5
+$$
+
+The ordering commitment is a Poseidon2 field hash over the domain
+`fastpq:v1:ordering` and the Norito encoding of the sorted transitions:
+
+$$
+\operatorname{ordering\_hash} =
+H_F(
+|P(D_o)|,P(D_o),|P(E(T^\star))|,P(E(T^\star))
+)
+$$
+
+where `P` is 7-byte packing, `E` is Norito encoding, `D_o` is
+`fastpq:v1:ordering`, and `T*` is the sorted transition list.
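+
+A sketch of the canonical ordering, assuming a simplified transition type
+whose operation rank follows the declaration order above:
+
+```rust
+#[derive(Clone, Copy, PartialEq, Eq, Debug)]
+enum Op { Transfer, Mint, Burn, RoleGrant, RoleRevoke, MetaSet }
+
+#[derive(Debug)]
+struct Transition { key: Vec<u8>, op: Op, index: usize }
+
+/// Sort by transition key, operation rank r(op), then insertion index.
+fn canonical_order(mut batch: Vec<Transition>) -> Vec<Transition> {
+    batch.sort_by(|a, b| {
+        (a.key.as_slice(), a.op as u8, a.index)
+            .cmp(&(b.key.as_slice(), b.op as u8, b.index))
+    });
+    batch
+}
+
+fn main() {
+    let sorted = canonical_order(vec![
+        Transition { key: b"a".to_vec(), op: Op::Mint, index: 1 },
+        Transition { key: b"a".to_vec(), op: Op::Transfer, index: 0 },
+    ]);
+    assert_eq!(sorted[0].op, Op::Transfer); // rank 0 sorts first
+}
+```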
+
+### Transfer Equations
+
+For a transfer amount `a`, sender balance `f`, and receiver balance `t`,
+FastPQ validates the normalized witness values before building the trace:
+
+$$
+f_0 \geq a
+$$
+
+$$
+f_1 = f_0 - a
+$$
+
+$$
+t_1 = t_0 + a
+$$
+
+The transition rows then encode:
+
+$$
+\Delta_{\text{sender}} = f_1 - f_0 = -a
+$$
+
+$$
+\Delta_{\text{receiver}} = t_1 - t_0 = a
+$$
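+
+The balance constraints above map naturally onto checked integer
+arithmetic. A sketch, assuming `u64` witness balances and illustrative
+names:
+
+```rust
+struct TransferWitness { sender_before: u64, receiver_before: u64, amount: u64 }
+
+/// Validate f0 >= a, f1 = f0 - a, t1 = t0 + a, rejecting under/overflow.
+fn validate(w: &TransferWitness) -> Option<(u64, u64)> {
+    let sender_after = w.sender_before.checked_sub(w.amount)?; // f1
+    let receiver_after = w.receiver_before.checked_add(w.amount)?; // t1
+    Some((sender_after, receiver_after))
+}
+
+fn main() {
+    let w = TransferWitness { sender_before: 100, receiver_before: 5, amount: 30 };
+    assert_eq!(validate(&w), Some((70, 35)));
+}
+```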
+
+Inside the trace, signed deltas are reduced into `F`:
+
+$$
+\delta_i = (\operatorname{post}_i - \operatorname{pre}_i)\bmod p
+$$
+
+The optional single-delta transfer digest commits the encoded transfer
+preimage:
+
+$$
+d_{\text{transfer}} =
+\operatorname{PoseidonHashBytes}(
+E(\text{from})\|E(\text{to})\|E(\text{asset})\|E(a)\|\text{batch\_hash}
+)
+$$
+
+For multi-delta transfer transcripts, the current code requires this
+top-level digest to be absent until per-delta digest plumbing is available.
+
+The host authority digest for transfer transcripts is:
+
+$$
+d_{\text{authority}} =
+\operatorname{Hash}(\texttt{iroha:fastpq:v1:authority|}\|E(\text{authority\_account}))
+$$
+
+### Trace Rows
+
+Let the sorted transition list contain `n` real rows. The trace length is
+the next power of two:
+
+$$
+N = 2^{\lceil\log_2(\max(1,n))\rceil}
+$$
+
+Rows `0..n-1` are active; rows `n..N-1` are padding rows. Each real row has
+one operation selector set:
+
+$$
+s_{\text{active}} =
+s_{\text{transfer}}+
+s_{\text{mint}}+
+s_{\text{burn}}+
+s_{\text{role\_grant}}+
+s_{\text{role\_revoke}}+
+s_{\text{meta\_set}}
+$$
+
+All selector columns are Boolean:
+
+$$
+s(s-1)=0
+$$
+
+Permission lookup rows are exactly role grant and role revoke rows:
+
+$$
+s_{\text{perm}} =
+s_{\text{role\_grant}} + s_{\text{role\_revoke}}
+$$
+
+For numeric operation rows:
+
+$$
+\delta_i = \operatorname{value\_new}_{i,0} - \operatorname{value\_old}_{i,0}
+$$
+
+The builder also tracks running per-asset deltas:
+
+$$
+R_i(a)=R_{i-1}(a)+\delta_i
+\quad\text{for transfer, mint, and burn rows of asset }a
+$$
+
+Only mint and burn rows update the supply counter:
+
+$$
+S_i(a)=S_{i-1}(a)+
+\begin{cases}
+\delta_i,& \text{if row }i\text{ is mint or burn}\\
+0,& \text{otherwise}
+\end{cases}
+$$
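+
+A sketch of the two accumulators, assuming signed deltas and string asset
+keys; the trace itself stores these values reduced into `F`:
+
+```rust
+use std::collections::HashMap;
+
+#[derive(Clone, Copy)]
+enum Row { Transfer, Mint, Burn }
+
+type Totals = HashMap<&'static str, i128>;
+
+fn accumulate(rows: &[(Row, &'static str, i128)]) -> (Totals, Totals) {
+    let (mut running, mut supply) = (Totals::new(), Totals::new());
+    for &(row, asset, delta) in rows {
+        // R_i(a): updated on transfer, mint, and burn rows alike.
+        *running.entry(asset).or_insert(0) += delta;
+        // S_i(a): updated only on mint and burn rows.
+        if matches!(row, Row::Mint | Row::Burn) {
+            *supply.entry(asset).or_insert(0) += delta;
+        }
+    }
+    (running, supply)
+}
+
+fn main() {
+    let (r, s) = accumulate(&[(Row::Mint, "xor", 100), (Row::Transfer, "xor", -30)]);
+    assert_eq!((r["xor"], s["xor"]), (70, 100));
+}
+```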
+
+Metadata and dataspace trace columns are field hashes derived before row
+materialization:
+
+$$
+\operatorname{metadata\_hash} =
+\begin{cases}
+0,& \text{if metadata is empty}\\
+H_D(E(\text{metadata})),& \text{otherwise}
+\end{cases}
+$$
+
+$$
+\operatorname{dsid\_trace}=H_D(\operatorname{public\_input\_dsid})
+$$
+
+The metadata hash, dataspace hash, and slot are stable across adjacent
+trace rows:
+
+$$
+\operatorname{metadata\_hash}_i=\operatorname{metadata\_hash}_{i+1}
+$$
+
+$$
+\operatorname{dsid}_i=\operatorname{dsid}_{i+1}
+$$
+
+$$
+\operatorname{slot}_i=\operatorname{slot}_{i+1}
+$$
+
+### Transfer Merkle Columns
+
+Transfer rows carry a 32-level sparse Merkle path. If a host proof is
+missing, the prover synthesizes a deterministic path from the row key,
+pre-balance, and whether the row is the sender or receiver side.
+
+For synthetic paths, the flavor salt is `fastpq:smt:from` for sender rows
+and `fastpq:smt:to` for receiver rows:
+
+$$
+K =
+\operatorname{Hash}(\texttt{fastpq:smt:key|}\|\operatorname{salt}\|\operatorname{key})
+$$
+
+$$
+V =
+\operatorname{Hash}(\texttt{fastpq:smt:value|}\|\operatorname{salt}\|\operatorname{le64}(\operatorname{balance}))
+$$
+
+$$
+b_\ell = \operatorname{bit}_\ell(K)
+$$
+
+$$
+s_\ell =
+\operatorname{Hash}(
+\texttt{fastpq:smt:sibling|}\|
+\operatorname{le64}(\ell)\|K\|\operatorname{le64}(\operatorname{balance})\|\operatorname{salt}
+)
+$$
+
+The synthetic leaf and internal nodes are:
+
+$$
+L = \operatorname{Hash}(
+\texttt{fastpq:smt:leaf|}\|
+K\|V
+)
+$$
+
+$$
+N_{\ell+1} =
+\operatorname{Hash}(
+\texttt{fastpq:smt:node|}\|
+\operatorname{left}_\ell\|
+\operatorname{right}_\ell
+)
+$$
+
+The trace records the bit `b_l`, sibling `s_l`, input node `x_l`, and
+output node `x_{l+1}` at every level. With the code's branch convention:
+
+$$
+(\operatorname{left}_\ell,\operatorname{right}_\ell)=
+\begin{cases}
+(s_\ell,x_\ell),& b_\ell=0\\
+(x_\ell,s_\ell),& b_\ell=1
+\end{cases}
+$$
+
+### Permission Hashes
+
+Role grant and revoke rows hash the permission witness:
+
+$$
+h_{\text{perm}} =
+H_F(P(\operatorname{role\_id}\|\operatorname{permission\_id}\|\operatorname{epoch}_{le}))
+$$
+
+The host permission table root sorts entries by role bytes, permission
+bytes, and epoch bytes, then builds a Poseidon2 Merkle tree:
+
+$$
+M_0[j]=h_{\text{perm},j}
+$$
+
+$$
+M_{k+1}[j] =
+H_F(\operatorname{seed}(\texttt{fastpq:v1:poseidon\_node}),M_k[2j],M_k[2j+1])
+$$
+
+Odd-width levels duplicate the final element.
+
+### Trace Commitment
+
+For each trace column `c`, FastPQ first interpolates the column values over
+the trace domain and hashes the coefficient vector:
+
+$$
+C_c =
+H_F(
+\operatorname{seed}(\texttt{fastpq:v1:trace:column:}c),
+\operatorname{coeffs}(c)
+)
+$$
+
+The trace root is a Poseidon2 Merkle root over column commitments:
+
+$$
+R_{\text{trace}} = \operatorname{MerkleRoot}(C_0,\ldots,C_{m-1})
+$$
+
+The final trace commitment is a byte hash over the domain, parameter set,
+trace shape, column digests, and trace root:
+
+$$
+\operatorname{commitment} =
+\operatorname{Hash}(
+\operatorname{len}(D_c)\|D_c\|
+\operatorname{len}(\text{parameter})\|\text{parameter}\|
+n\|N\|m\|C_0\|\cdots\|C_{m-1}\|R_{\text{trace}}
+)
+$$
+
+where `D_c` is `fastpq:v1:trace_commitment`.
+
+### AIR Composition
+
+The V1 AIR composition value is a linear combination of row-local residues.
+The transcript samples two challenges:
+
+$$
+\alpha_0,\alpha_1 \in F
+$$
+
+For each adjacent row pair `(i,i+1)`, the prover computes:
+
+$$
+A_i=\sum_j \alpha_{j\bmod2}\rho_{i,j}
+$$
+
+The residues `rho` are, in code order:
+
+$$
+\rho=s(s-1)
+\quad\text{for each selector column}
+$$
+
+$$
+\rho =
+s_{\text{active}} -
+(s_{\text{transfer}}+s_{\text{mint}}+s_{\text{burn}}+
+s_{\text{role\_grant}}+s_{\text{role\_revoke}}+s_{\text{meta\_set}})
+$$
+
+$$
+\rho =
+s_{\text{perm}}-(s_{\text{role\_grant}}+s_{\text{role\_revoke}})
+$$
+
+$$
+\rho =
+s_{\text{active},i+1}(1-s_{\text{active},i})
+$$
+
+For rows with numeric columns:
+
+$$
+\rho =
+(s_{\text{transfer}}+s_{\text{mint}}+s_{\text{burn}})
+\cdot
+((\operatorname{value\_new}_{0}-\operatorname{value\_old}_{0})-\delta)
+$$
+
+And for stable batch context columns:
+
+$$
+\rho =
+\operatorname{metadata\_hash}_i-\operatorname{metadata\_hash}_{i+1}
+$$
+
+$$
+\rho =
+\operatorname{dsid}_i-\operatorname{dsid}_{i+1}
+$$
+
+$$
+\rho =
+\operatorname{slot}_i-\operatorname{slot}_{i+1}
+$$
+
+The verifier recomputes `A_i` for sampled row openings and checks it
+against the composition value committed under the AIR composition Merkle
+root.
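+
+A sketch of the combination step, assuming the residues are already
+evaluated as canonical field elements and using plain `u128` mod-`p`
+arithmetic in place of the field type:
+
+```rust
+const P: u128 = 0xFFFF_FFFF_0000_0001;
+
+/// A_i = sum_j alpha_{j mod 2} * rho_{i,j}.
+fn compose(residues: &[u64], alpha: [u64; 2]) -> u64 {
+    residues.iter().enumerate().fold(0u128, |acc, (j, &rho)| {
+        (acc + (alpha[j % 2] as u128) * (rho as u128)) % P
+    }) as u64
+}
+
+fn main() {
+    // A satisfied Boolean residue s(s-1) contributes nothing.
+    assert_eq!(compose(&[0, 7], [3, 5]), 35);
+}
+```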
+
+### Lookup Product
+
+The permission lookup accumulator uses the Fiat-Shamir challenge `gamma`.
+Over the low-degree extension evaluations of `s_perm` and `perm_hash`, the
+running product is:
+
+$$
+z_0=1
+$$
+
+$$
+z_{i+1}=
+\begin{cases}
+z_i\cdot(w_i+\gamma),& s_{\text{perm},i}\ne0\\
+z_i,& s_{\text{perm},i}=0
+\end{cases}
+$$
+
+The proof records:
+
+$$
+\operatorname{lookup\_grand\_product}=H_F(z_0,z_1,\ldots)
+$$
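+
+A sketch of the running product under the same simplified field
+arithmetic:
+
+```rust
+const P: u128 = 0xFFFF_FFFF_0000_0001;
+
+/// z_{i+1} = z_i * (w_i + gamma) on active permission rows, else z_i.
+fn lookup_products(w: &[u64], s_perm: &[bool], gamma: u64) -> Vec<u64> {
+    let mut z = vec![1u64];
+    for (i, &wi) in w.iter().enumerate() {
+        let prev = *z.last().unwrap() as u128;
+        let next = if s_perm[i] {
+            prev * ((wi as u128 + gamma as u128) % P) % P
+        } else {
+            prev
+        };
+        z.push(next as u64);
+    }
+    z
+}
+
+fn main() {
+    assert_eq!(lookup_products(&[2, 9], &[true, false], 3), vec![1, 5, 5]);
+}
+```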
+
+### Low-Degree Extension
+
+Let `omega_T` be the trace-domain generator, `omega_E` the
+evaluation-domain generator, and `g` the configured coset offset. For a
+trace column with values `v_i`, interpolation produces coefficients `a_j`
+such that:
+
+$$
+f(\omega_T^i)=v_i
+$$
+
+The low-degree extension evaluates the same polynomial on the coset:
+
+$$
+\operatorname{LDE}_f(i)=f(g\cdot\omega_E^i)
+$$
+
+The implementation computes this by multiplying coefficients by powers of
+the coset offset before FFT:
+
+$$
+a'_j = a_j g^j
+$$
+
+and then evaluating `a'` on the evaluation domain.
+
+The CPU FFT is an iterative radix-2 Cooley-Tukey transform over
+bit-reversed inputs. At stage length `L`, half length `H=L/2`, and stage
+root:
+
+$$
+\omega_L=\omega^{N/L}
+$$
+
+each butterfly computes:
+
+$$
+u=x_j
+$$
+
+$$
+v=x_{j+H}\cdot\omega_L^j
+$$
+
+$$
+x_j'=u+v,\qquad x_{j+H}'=u-v
+$$
+
+The inverse FFT runs the same transform with `omega^{-1}` and scales by the
+inverse domain size:
+
+$$
+\operatorname{IFFT}(x)=N^{-1}\cdot\operatorname{FFT}_{\omega^{-1}}(x)
+$$
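+
+A toy sketch of the butterfly structure over `F_17` with `omega = 2` (an
+8th root of unity), chosen so the numbers stay hand-checkable; the real
+transform runs over Goldilocks with catalogue roots, and the inverse pass
+substitutes `omega^{-1}` and applies the `N^{-1}` scaling shown above:
+
+```rust
+const P: u64 = 17;
+
+fn pow_mod(mut b: u64, mut e: u64) -> u64 {
+    let mut acc = 1;
+    while e > 0 {
+        if e & 1 == 1 { acc = acc * b % P; }
+        b = b * b % P;
+        e >>= 1;
+    }
+    acc
+}
+
+fn bit_reverse(x: &mut [u64]) {
+    let n = x.len();
+    if n <= 1 { return; }
+    let bits = n.trailing_zeros();
+    for i in 0..n {
+        let j = ((i as u32).reverse_bits() >> (32 - bits)) as usize;
+        if j > i { x.swap(i, j); }
+    }
+}
+
+fn fft(x: &mut [u64], omega: u64) {
+    let n = x.len();
+    bit_reverse(x);
+    let mut len = 2;
+    while len <= n {
+        let half = len / 2;
+        let w_len = pow_mod(omega, (n / len) as u64); // omega_L = omega^(N/L)
+        for start in (0..n).step_by(len) {
+            let mut w = 1;
+            for j in 0..half {
+                let u = x[start + j];
+                let v = x[start + j + half] * w % P;
+                x[start + j] = (u + v) % P; // u + v
+                x[start + j + half] = (u + P - v) % P; // u - v
+                w = w * w_len % P;
+            }
+        }
+        len *= 2;
+    }
+}
+
+fn main() {
+    let mut vals = vec![1, 2, 3, 4, 0, 0, 0, 0]; // coefficients of f
+    fft(&mut vals, 2);
+    assert_eq!(vals[0], 10); // f(omega^0) = f(1) = 1 + 2 + 3 + 4
+}
+```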
+
+Catalogue roots are validated before use:
+
+$$
+\omega^{2^k}=1
+$$
+
+$$
+\omega^{2^{k-1}}\ne1\qquad(k>0)
+$$
+
+For smaller domains derived from the catalogue root, the generator is:
+
+$$
+\omega_{\ell}=\omega_{\max}^{2^{k_{\max}-\ell}}
+$$
+
+### Row and Leaf Hashes
+
+After LDE, FastPQ hashes each row across all LDE columns. For `m` columns:
+
+$$
+r_i =
+H_F(i,m,x_{i,0},x_{i,1},\ldots,x_{i,m-1})
+$$
+
+If row hashes are still on the trace domain rather than the evaluation
+domain, the prover interpolates and extends that single row-hash column
+with the same coset LDE process.
+
+### Merkle Openings
+
+LDE values are grouped into chunks of:
+
+$$
+B_{\text{lde}}=8\cdot\operatorname{fri\_arity}
+$$
+
+Each chunk leaf is:
+
+$$
+L_j=H_D(j\|v_{jB}\|\cdots\|v_{jB+B-1})
+$$
+
+Merkle parents are:
+
+$$
+P_j =
+H_F(\operatorname{seed}(\texttt{fastpq:v1:trace:node}),L_{2j},L_{2j+1})
+$$
+
+Odd levels duplicate the last node. Query paths verify by hashing left or
+right according to the query leaf index parity at each level.
+
+For a leaf at index `i`, a path `(s_0,\ldots,s_{d-1})` verifies against
+root `R` by the recurrence:
+
+$$
+y_0=L_i
+$$
+
+$$
+y_{k+1}=
+\begin{cases}
+H_F(\operatorname{seed}(\texttt{fastpq:v1:trace:node}),y_k,s_k),
+& \lfloor i/2^k\rfloor \equiv 0 \pmod 2\\
+H_F(\operatorname{seed}(\texttt{fastpq:v1:trace:node}),s_k,y_k),
+& \lfloor i/2^k\rfloor \equiv 1 \pmod 2
+\end{cases}
+$$
+
+The check passes only when:
+
+$$
+y_d=R
+$$
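+
+A sketch of the path recurrence with the two-to-one hash abstracted away;
+the real code seeds Poseidon2 with `fastpq:v1:trace:node`:
+
+```rust
+/// y_{k+1} = H(y_k, s_k) when floor(i / 2^k) is even, else H(s_k, y_k).
+fn merkle_verify(
+    root: u64,
+    leaf: u64,
+    index: usize,
+    path: &[u64],
+    hash2: impl Fn(u64, u64) -> u64,
+) -> bool {
+    let mut y = leaf;
+    for (k, &sibling) in path.iter().enumerate() {
+        y = if (index >> k) & 1 == 0 { hash2(y, sibling) } else { hash2(sibling, y) };
+    }
+    y == root
+}
+
+fn main() {
+    let h = |a: u64, b: u64| a.wrapping_mul(31).wrapping_add(b); // toy hash
+    let root = h(10, 20);
+    assert!(merkle_verify(root, 20, 1, &[10], h)); // odd index: H(s, y)
+}
+```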
+
+AIR trace row leaves are:
+
+$$
+L^{\text{air}}_i =
+H_D(i\|m\|x_{i,0}\|\cdots\|x_{i,m-1})
+$$
+
+AIR composition leaves are:
+
+$$
+L^{\text{comp}}_i = H_D(i\|A_i)
+$$
+
+The LDE query opening also checks that the value opened at evaluation index
+`i` is present in its authenticated chunk:
+
+$$
+\operatorname{chunk\_index}=\left\lfloor\frac{i}{B_{\text{lde}}}\right\rfloor
+$$
+
+$$
+\operatorname{chunk\_offset}=i\bmod B_{\text{lde}}
+$$
+
+$$
+\operatorname{chunk}[\operatorname{chunk\_offset}]=v_i
+$$
+
+### FRI Folding
+
+FRI commits to AIR composition evaluations. For each round `l`, the
+transcript samples a challenge `beta_l`. The layer is padded to a multiple
+of the arity by repeating the last value. Each arity-sized group folds to:
+
+$$
+y_{l+1,j} =
+\sum_{k=0}^{a-1} y_{l,ja+k}\beta_l^k
+$$
+
+where `a` is the FRI arity. The verifier checks, for every sampled query
+chain, that:
+
+$$
+y_{l+1,\lfloor i/a\rfloor}
+=
+\sum_{k=0}^{a-1} y_{l,\lfloor i/a\rfloor a+k}\beta_l^k
+$$
+
+and authenticates each opened FRI group against the corresponding FRI layer
+root.
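+
+A sketch of one fold, padding the layer by repeating its last value:
+
+```rust
+const P: u128 = 0xFFFF_FFFF_0000_0001;
+
+/// y_{l+1,j} = sum_k y_{l,ja+k} * beta^k for each arity-sized group.
+fn fold(layer: &[u64], arity: usize, beta: u64) -> Vec<u64> {
+    let mut padded = layer.to_vec();
+    while !padded.is_empty() && padded.len() % arity != 0 {
+        padded.push(*padded.last().unwrap());
+    }
+    padded
+        .chunks(arity)
+        .map(|group| {
+            let (mut acc, mut pow) = (0u128, 1u128);
+            for &y in group {
+                acc = (acc + (y as u128) * pow) % P;
+                pow = pow * (beta as u128) % P;
+            }
+            acc as u64
+        })
+        .collect()
+}
+
+fn main() {
+    // [1, 2, 3] pads to [1, 2, 3, 3]; groups fold to 1 + 2*10 and 3 + 3*10.
+    assert_eq!(fold(&[1, 2, 3], 2, 10), vec![21, 33]);
+}
+```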
+
+### Fiat-Shamir Transcript
+
+The canonical parameter catalogue labels the transcript hash as SHA3-256.
+The current prover and verifier implementation derives challenge bytes with
+`iroha_crypto::Hash::new`, which is a 32-byte Blake2bVar digest, then
+reduces the first eight little-endian bytes into `F`:
+
+$$
+\chi(\text{tag}) =
+\operatorname{le64}(\operatorname{Hash}(\text{state}\|\operatorname{len}(\text{tag})\|\text{tag})[0..8])
+\bmod p
+$$
+
+Challenge calls append the full digest to the transcript state. The replay
+order is:
+
+1. public IO, protocol version, parameter version, and parameter name
+2. LDE root and trace root
+3. `gamma`
+4. AIR composition challenges `alpha_0`, `alpha_1`
+5. AIR trace root and AIR composition root
+6. lookup grand product
+7. FRI layer roots and `beta_l` challenges
+8. sampled query indices
+
+Query sampling keeps drawing 32-byte challenge digests and reading them as
+little-endian `u64` chunks until it has the requested number of unique
+indices:
+
+$$
+q = \operatorname{le64}(\text{digest chunk})\bmod N_{\text{eval}}
+$$
+
+The sampled set is returned in sorted order.
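+
+A sketch of the sampling loop, assuming a caller-supplied stream of
+32-byte digests in place of the real transcript state:
+
+```rust
+use std::collections::BTreeSet;
+
+fn sample_queries(
+    mut next_digest: impl FnMut() -> [u8; 32],
+    count: usize,
+    n_eval: u64,
+) -> Vec<u64> {
+    let mut set = BTreeSet::new();
+    while set.len() < count {
+        let digest = next_digest();
+        for chunk in digest.chunks(8) {
+            // q = le64(chunk) mod N_eval
+            set.insert(u64::from_le_bytes(chunk.try_into().unwrap()) % n_eval);
+            if set.len() == count { break; }
+        }
+    }
+    set.into_iter().collect() // BTreeSet iteration yields sorted order
+}
+
+fn main() {
+    let mut ctr = 0u8;
+    let qs = sample_queries(|| { ctr = ctr.wrapping_add(1); [ctr; 32] }, 3, 1 << 10);
+    assert_eq!(qs.len(), 3);
+}
+```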
+
+### Verifier Replay
+
+The verifier first recomputes the batch commitment:
+
+$$
+\operatorname{commitment}_{expected}
+=\operatorname{trace\_commitment}(\operatorname{params},\operatorname{batch})
+$$
+
+and requires:
+
+$$
+\operatorname{commitment}_{expected}
+=\operatorname{proof.trace\_commitment}
+$$
+
+It also rebuilds public IO:
+
+$$
+\operatorname{PublicIO}=
+(\operatorname{dsid},\operatorname{slot},\operatorname{old\_root},
+\operatorname{new\_root},\operatorname{perm\_root},
+\operatorname{tx\_set\_hash},\operatorname{ordering\_hash},
+\operatorname{permission\_hashes})
+$$
+
+Every field must match the proof's public IO byte-for-byte. The verifier
+then reconstructs the same transcript and derives the same:
+
+$$
+\gamma,\quad \alpha_0,\alpha_1,\quad
+\beta_0,\ldots,\beta_{\ell-1},\quad
+q_0,\ldots,q_{t-1}
+$$
+
+For each sampled query `q`, it checks:
+
+$$
+\operatorname{MerkleVerify}(
+R_{\text{lde}},
+L_{\lfloor q/B_{\text{lde}}\rfloor},
+\lfloor q/B_{\text{lde}}\rfloor,
+\pi_{\text{lde}}
+)
+$$
+
+$$
+\operatorname{MerkleVerify}(
+R_{\text{air}},
+L^{\text{air}}_q,
+q,
+\pi_{\text{air,current}}
+)
+$$
+
+$$
+\operatorname{MerkleVerify}(
+R_{\text{air}},
+L^{\text{air}}_{(q+1)\bmod N_{\text{eval}}},
+(q+1)\bmod N_{\text{eval}},
+\pi_{\text{air,next}}
+)
+$$
+
+and:
+
+$$
+A_q =
+\operatorname{AIRComposition}(
+\operatorname{row}_q,\operatorname{row}_{q+1},\alpha_0,\alpha_1
+)
+$$
+
+The AIR composition opening must authenticate under `R_air_composition`.
+The FRI chain then starts from the same `A_q` and must end in an
+authenticated final FRI leaf under the terminal FRI root.
+
+## What The Prover Checks
+
+Before building the trace, the FastPQ prover canonicalizes the batch order
+by transition key, operation rank, and insertion order. Transfer rows also
+require transcript metadata. A batch with transfer rows but no transfer
+transcripts is invalid.
+
+For transfer transcripts, the prover-side checks include:
+
+- the sender balance must not underflow
+- `sender_after` must equal `sender_before - amount`
+- `receiver_after` must equal `receiver_before + amount`
+- the transcript must cover every transfer row in the batch
+- a single-delta Poseidon digest, when present, must match the transcript
+ preimage
+- provided sparse-Merkle proofs must decode as version 1; missing paths are
+ filled with deterministic synthetic proofs
+
+The trace contains selector columns for transfer, mint, burn, role grant,
+role revoke, metadata set, and permission lookup rows. Numeric operation
+rows also carry signed deltas, running per-asset deltas, and supply
+counters.
+
+## Prover Lane
+
+`irohad` starts the FastPQ prover lane at startup if the prover backend can
+be initialized. The lane is a background task with a bounded queue. After a
+block produces an execution witness, the commit path submits a prover job
+containing the block hash, height, view, and witness.
+
+If the lane is not running or the queue is full, the job is skipped and
+normal block processing continues. This means the background prover lane is
+not a transaction admission or consensus gate. It is a proof-production
+path over state that has already been executed.
+
+The lane constructs a prover with:
+
+```text
+parameter = "fastpq-lane-balanced"
+execution_mode = auto | cpu | gpu
+poseidon_mode = auto | cpu | gpu
+```
+
+`auto` lets the prover choose the available backend. `cpu` pins execution
+to the CPU. `gpu` prefers GPU execution, with CPU fallback where the
+backend cannot use the requested kernels.
+
+## Verification
+
+FastPQ proof verification rebuilds the canonical batch commitment and
+replays the public transcript. The verifier checks the protocol version,
+parameter-set version, replay limits, trace commitment, public inputs,
+sampled Merkle openings, AIR openings, and FRI query chain.
+
+Default replay limits include:
+
+| Limit | Default |
+| ------------------ | ------: |
+| Transition rows | 256 |
+| Batch payload size | 256 KiB |
+| FRI layers | 16 |
+| Query openings | 128 |
+
+## Nexus Verified Relays
+
+Nexus AXT proof envelopes can embed an `AxtFastpqBinding`. When
+`RegisterVerifiedLaneRelay` executes, Iroha:
+
+1. verifies the lane relay envelope and FastPQ proof material
+2. checks the dataspace and manifest root
+3. decodes the AXT proof envelope
+4. requires a `fastpq_binding`
+5. rebuilds the FastPQ batch from that binding
+6. decodes the embedded FastPQ proof
+7. calls the FastPQ verifier on the rebuilt batch and proof
+
+If verification succeeds, Iroha stores a `VerifiedLaneRelayRecord`
+containing the relay reference, original envelope, proof payload hash,
+verification height, manifest root, and FastPQ binding.
+
+Lane relay envelopes also carry compact FastPQ proof material. The material
+is a digest over the lane id, dataspace id, block height, verification
+height, block header hash, settlement hash, and manifest root. A relay is
+merge admissible only when it has both a QC and valid FastPQ proof
+material.
+
+### AXT Binding Math
+
+For Nexus AXT envelopes, `AxtFastpqBinding` is canonicalized before proof
+replay. Empty parameter values default to `fastpq-lane-balanced`; empty
+verifier id and version default to `fastpq` and `v1`; claim type is trimmed
+and lowercased.
+
+The AXT FastPQ public inputs are deterministic byte hashes:
+
+$$
+\operatorname{dsid}=\operatorname{dsid\_bytes}(\operatorname{source\_dsid})
+$$
+
+$$
+\operatorname{slot}=\operatorname{le64}(\operatorname{source\_tx\_commitment}[0..8])
+$$
+
+$$
+\operatorname{old\_root} =
+\operatorname{Hash}(
+\texttt{fastpq-json:old\_root}\|
+\operatorname{source\_tx\_commitment}\|
+\operatorname{policy\_commitment}\|
+\operatorname{effect\_type}
+)
+$$
+
+$$
+\operatorname{new\_root} =
+\operatorname{Hash}(
+\texttt{fastpq-json:new\_root}\|
+\operatorname{source\_tx\_commitment}\|
+\operatorname{claim\_digest}\|
+\operatorname{effect\_type}
+)
+$$
+
+$$
+\operatorname{perm\_root} =
+\operatorname{Hash}(
+\texttt{fastpq-json:perm\_root}\|
+\operatorname{policy\_commitment}\|
+\operatorname{verifier\_id}\|
+\operatorname{verifier\_version}
+)
+$$
+
+$$
+\operatorname{tx\_set\_hash} =
+\operatorname{Hash}(
+\texttt{fastpq-json:tx\_set\_hash}\|
+\operatorname{source\_tx\_commitment}\|
+\operatorname{claim\_digest}\|
+\operatorname{witness\_commitment}
+)
+$$
+
+AXT transition keys are:
+
+$$
+\operatorname{key}(\operatorname{prefix},x,y)=
+\operatorname{prefix}\|\texttt{/}\|x\|\texttt{/}\|y
+$$
+
+The `authorization` claim inserts a role-grant row:
+
+$$
+\operatorname{role\_id}=\operatorname{claim\_digest}
+$$
+
+$$
+\operatorname{permission\_id}=\operatorname{witness\_commitment}
+$$
+
+$$
+\operatorname{epoch}=
+\operatorname{le64}(\operatorname{policy\_commitment}[0..8])
+$$
+
+and a metadata row binding the authorization policy. The `compliance` claim
+inserts two metadata rows: one for policy and one for target dataspaces.
+
+For `tx_predicate` and `value_conservation`, an explicit effect amount is
+used when the binding contains a positive source or destination amount.
+Otherwise the code derives a bounded deterministic amount:
+
+$$
+\operatorname{bounded}(d,\min,\operatorname{span})
+=
+\min + (\operatorname{le64}(d[0..8])\bmod\max(\operatorname{span},1))
+$$
+
+Then the same transfer equations are used:
+
+$$
+\operatorname{sender\_after}=\operatorname{sender\_before}-a
+$$
+
+$$
+\operatorname{receiver\_after}=\operatorname{receiver\_before}+a
+$$
+
+The synthetic sender and receiver account ids are generated from key seeds:
+
+$$
+\operatorname{seed}=
+\operatorname{Hash}(\operatorname{label}\|\operatorname{entropy})[0..32]
+$$
+
+The transfer batch hash is:
+
+$$
+\operatorname{batch\_hash} =
+\operatorname{Hash}(
+\operatorname{label}\|
+\operatorname{corridor}\|
+\operatorname{source\_tx\_commitment}\|
+\operatorname{claim\_digest}
+)
+$$
+
+The AXT batch manifest digest is SHA-256 over the Norito encoding of the
+canonical binding:
+
+$$
+\operatorname{manifest\_digest} =
+\operatorname{SHA256}(E(\operatorname{canonical\_binding}))
+$$
+
+## SCCP Transparent Message Proofs
+
+The SCCP helper crate also uses FastPQ for transparent cross-chain message
+proofs. This path is separate from the `irohad` background prover lane. It
+builds a FastPQ batch directly from an SCCP message proof bundle and
+manifest, then wraps the resulting proof for open verification.
+
+The SCCP batch uses `fastpq-lane-balanced` and three metadata transitions:
+
+| Key | Operation |
+| ------------------------------- | --------- |
+| `sccp:transparent:v1:statement` | `MetaSet` |
+| `sccp:transparent:v1:context` | `MetaSet` |
+| `sccp:transparent:v1:payload` | `MetaSet` |
+
+Its public inputs are derived from the SCCP transparent inner proof:
+
+| FastPQ input | SCCP source |
+| ------------- | ---------------------------------------------------------- |
+| `dsid` | First 16 bytes of a Blake2b digest over the statement hash |
+| `slot` | Finality height |
+| `old_root` | Payload hash |
+| `new_root` | Commitment root |
+| `perm_root` | Finality block hash |
+| `tx_set_hash` | Statement hash |
+
+The SCCP canonical encoders write integers little-endian and encode
+variable-length byte arrays as:
+
+$$
+\operatorname{vec}(x)=\operatorname{le32}(|x|)\|x
+$$
+
+The transparent public input byte string is:
+
+$$
+P =
+\operatorname{version}\|
+\operatorname{message\_id}\|
+\operatorname{payload\_hash}\|
+\operatorname{le32}(\operatorname{target\_domain})\|
+\operatorname{commitment\_root}\|
+\operatorname{le64}(\operatorname{finality\_height})\|
+\operatorname{finality\_block\_hash}
+$$
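+
+A sketch of the canonical encoders; `vec_enc` is our name for the
+operation above, not the crate's API:
+
+```rust
+/// vec(x) = le32(|x|) || x
+fn vec_enc(x: &[u8]) -> Vec<u8> {
+    let mut out = Vec::with_capacity(4 + x.len());
+    out.extend_from_slice(&(x.len() as u32).to_le_bytes()); // le32 length
+    out.extend_from_slice(x);
+    out
+}
+
+fn main() {
+    assert_eq!(vec_enc(b"ab"), vec![2, 0, 0, 0, b'a', b'b']);
+    // Fixed-width fields of P concatenate in order, little-endian:
+    let mut p = Vec::new();
+    p.extend_from_slice(&42u64.to_le_bytes()); // le64(finality_height)
+    p.extend_from_slice(&[0u8; 32]); // finality_block_hash (placeholder)
+    assert_eq!(p.len(), 40);
+}
+```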
+
+The transparent statement bytes are the concatenation of version, chain
+family, local and counterparty domains, security model, anchor governance,
+account codec, finality model, verifier target, verifier backend family,
+length-prefixed chain/backend/manifest fields, destination binding hash,
+account codec key, payload kind, public input bytes, and payload hash. The
+statement hash is:
+
+$$
+\operatorname{statement\_hash} =
+\operatorname{Blake2bVar}_{32}(
+\texttt{sccp:transparent:statement:v1}\|\operatorname{statement}
+)
+$$
+
+The FastPQ dataspace id for this proof path is the first sixteen bytes of
+another prefixed Blake2b digest:
+
+$$
+\operatorname{dsid} =
+\operatorname{Blake2bVar}_{32}(
+\texttt{sccp:transparent:fastpq:dsid:v1}\|\operatorname{statement\_hash}
+)[0..16]
+$$
+
+The SCCP FastPQ batch is exactly:
+
+$$
+(\texttt{sccp:transparent:v1:statement},\varnothing,\operatorname{statement},\operatorname{MetaSet})
+$$
+
+$$
+(\texttt{sccp:transparent:v1:context},\varnothing,E(\operatorname{inner\_proof}),\operatorname{MetaSet})
+$$
+
+$$
+(\texttt{sccp:transparent:v1:payload},\varnothing,\operatorname{canonical\_payload},\operatorname{MetaSet})
+$$
+
+then sorted by the same FastPQ ordering rule.
+
+The OpenVerify verifier commitment is SHA-256 over the SCCP message backend
+name and the canonical FastPQ verifier descriptor:
+
+$$
+\operatorname{vk\_hash} =
+\operatorname{SHA256}(
+\operatorname{message\_backend}\|\operatorname{verifier\_descriptor}
+)
+$$
+
+The raw FastPQ proof is Norito-encoded into a `StarkFriOpenProofV1`, then
+wrapped in an `OpenVerifyEnvelope` with backend `Stark`. SCCP verification
+rebuilds the same FastPQ batch from the bundle and manifest, checks the
+open verification envelope metadata, and calls the FastPQ verifier on the
+rebuilt batch and proof.
+
+## Parameter Sets
+
+The canonical parameter catalogue exposes two parameter sets. The host
+prover lane currently uses `fastpq-lane-balanced`.
+
+| Parameter | Purpose | Field | Hashes | FRI |
+| ---------------------- | -------------------------- | ------------------------------ | ------------------------------------------- | ------------------------------- |
+| `fastpq-lane-balanced` | balanced prover throughput | Goldilocks quadratic extension | Poseidon2 commitments, catalogue SHA3 label | arity 8, blowup 8, 46 queries |
+| `fastpq-lane-latency` | latency-sensitive lanes | Goldilocks quadratic extension | Poseidon2 commitments, catalogue SHA3 label | arity 16, blowup 16, 34 queries |
+
+Both target 128-bit security and use a trace domain size of `2^16`. The
+Rust V1 transcript replay code currently derives Fiat-Shamir challenge
+bytes with `iroha_crypto::Hash::new` rather than directly invoking
+SHA3-256.
+
+The exact catalogue constants used by the Rust prover are:
+
+| Constant | `fastpq-lane-balanced` | `fastpq-lane-latency` |
+| -------------------- | ---------------------: | --------------------: |
+| `target_security` | 128 | 128 |
+| `grinding_bits` | 23 | 21 |
+| `trace_log_size` | 16 | 16 |
+| `trace_root` | `0x002a247f81c6f850` | `0x6a9f4eb38fb9b892` |
+| `lde_log_size` | 19 | 20 |
+| `lde_root` | `0x60263388dbbf9b2a` | `0x9c9c3a571b6f89ac` |
+| `permutation_size` | 65,536 | 65,536 |
+| `lookup_log_size` | 19 | 20 |
+| `omega_coset` | `0x6af325e825ad5c18` | `0x3a5fd4171e3c3a4d` |
+| `fri_arity` | 8 | 16 |
+| `fri_blowup` | 8 | 16 |
+| `fri_max_reductions` | 8 | 6 |
+| `fri_queries` | 46 | 34 |
+
+## Configuration
+
+FastPQ configuration is nested under `zk.fastpq`.
+
+```toml
+[zk.fastpq]
+execution_mode = "auto"
+poseidon_mode = "auto"
+
+# Optional telemetry labels.
+device_class = "apple-m4"
+chip_family = "m4"
+gpu_kind = "integrated"
+
+# Optional Metal backend tuning.
+metal_queue_fanout = 3
+metal_queue_column_threshold = 24
+metal_max_in_flight = 5
+metal_threadgroup_width = 128
+metal_trace = false
+metal_debug_enum = false
+metal_debug_fused = false
+```
+
+The same execution and telemetry labels can be overridden from `irohad`:
+
+```shell
+irohad --fastpq-execution-mode auto
+irohad --fastpq-poseidon-mode cpu
+irohad --fastpq-device-class apple-m4
+irohad --fastpq-chip-family m4
+irohad --fastpq-gpu-kind integrated
+```
+
+Environment variables are also supported for the configuration fields. The
+FastPQ-specific variables include:
+
+- `FASTPQ_EXECUTION_MODE`
+- `FASTPQ_POSEIDON_MODE`
+- `FASTPQ_DEVICE_CLASS`
+- `FASTPQ_CHIP_FAMILY`
+- `FASTPQ_GPU_KIND`
+- `FASTPQ_METAL_QUEUE_FANOUT`
+- `FASTPQ_METAL_COLUMN_THRESHOLD`
+- `FASTPQ_METAL_MAX_IN_FLIGHT`
+- `FASTPQ_METAL_THREADGROUP`
+- `FASTPQ_METAL_TRACE`
+- `FASTPQ_DEBUG_METAL_ENUM`
+- `FASTPQ_DEBUG_FUSED`
+
+## Metrics
+
+When telemetry is enabled, FastPQ exports metrics for backend selection and
+Metal runtime behavior:
+
+| Metric | Meaning |
+| --------------------------------- | --------------------------------------------------------------------------- |
+| `fastpq_execution_mode_total` | Requested and resolved execution mode by backend and device labels |
+| `fastpq_poseidon_pipeline_total` | Requested and resolved Poseidon pipeline path |
+| `fastpq_metal_queue_depth` | Metal queue limit, max in-flight count, dispatch count, and sampling window |
+| `fastpq_metal_queue_ratio` | Metal queue busy and overlap ratios |
+| `fastpq_zero_fill_duration_ms` | Host zero-fill duration for Metal runs |
+| `fastpq_zero_fill_bandwidth_gbps` | Derived zero-fill bandwidth |
+
+For general performance triage, use these with the consensus and queue
+signals listed in [Performance and Metrics](/guide/advanced/metrics.md).
+
+## Related Reference
+
+- [Data Model Schema](/reference/data-model-schema.md) for generated type
+ details
+- `FastpqTransitionBatch`
+- `FastpqPublicInputs`
+- `TransferTranscript`
+- `AxtFastpqBinding`
+- `LaneFastpqProofMaterial`
+- [`irohad` FastPQ options](/reference/irohad-cli.md#arg-fastpq-execution-mode)
diff --git a/src/blockchain/filters.md b/src/blockchain/filters.md
new file mode 100644
index 000000000..bb00e8322
--- /dev/null
+++ b/src/blockchain/filters.md
@@ -0,0 +1,84 @@
+# Filters
+
+Filters narrow event streams and trigger conditions. The current top-level
+event filter is `EventFilterBox`, which can match these event families:
+
+- `Pipeline`
+- `Data`
+- `Time`
+- `ExecuteTrigger`
+- `TriggerCompleted`
+
+Use the narrowest filter that matches the workflow. Broad filters such as
+`DataEventFilter::Any` are useful for diagnostics, but they make every event
+pay the cost of trigger or subscriber matching.
+
+## Data Event Filters
+
+`DataEventFilter` matches ledger data events. Its current variants include:
+
+| Variant | Event family |
+| --- | --- |
+| `Any` | Any data event |
+| `Peer` | Peer lifecycle events |
+| `Domain` | Domain lifecycle and metadata events |
+| `Account` | Account lifecycle, metadata, alias, and identity events |
+| `Asset` | Asset balance and metadata events |
+| `AssetDefinition` | Asset definition lifecycle, policy, and metadata events |
+| `Nft` | NFT lifecycle and metadata events |
+| `Rwa` | Real-world-asset lifecycle events |
+| `Trigger` | Trigger lifecycle and metadata events |
+| `Role` | Role lifecycle events |
+| `Configuration` | On-chain configuration events |
+| `Executor` | Runtime executor events |
+| `Proof` | Proof verification lifecycle events |
+| `Confidential` | Confidential asset events |
+| `VerifyingKey` | Verifying-key registry events |
+| `RuntimeUpgrade` | Runtime upgrade events |
+| `Soradns` | Resolver directory governance events |
+| `Sorafs` | SoraFS gateway compliance events |
+| `SpaceDirectory` | Space Directory manifest lifecycle events |
+| `Escrow` | Native asset escrow lifecycle events |
+| `Offline` | Offline settlement events |
+| `Oracle` | Oracle feed events |
+| `Social` | Viral incentive events |
+| `Bridge` | Bridge events |
+| `Governance` | Governance events when the governance feature is enabled |
+
+Most concrete filters also allow an optional ID matcher and an event-set mask.
+For example, an asset filter can match one asset or one class of asset events,
+while a trigger filter can match a trigger ID and a trigger event set.
+
+## Pipeline Filters
+
+Pipeline filters match processing events such as block, transaction, merge,
+and witness events. Use them for operational subscriptions, block-processing
+dashboards, and triggers that react to pipeline state rather than ledger data
+objects.
+
+## Trigger Filters
+
+Triggers store their condition as an `EventFilterBox`. A trigger action also
+stores:
+
+- an executable
+- a repetition policy
+- an authority account
+- an optional time-trigger retry policy
+- metadata
+
+The trigger authority must have the permissions required by the executable.
+Prefer dedicated technical accounts for long-lived triggers.
+
+## Query Filters
+
+Query filters are separate from event filters. Iterable queries can expose
+predicate and selector support. Use query-specific typed filters from the SDK
+so the filter input matches the query output type.
+
+See also:
+
+- [Events](/blockchain/events.md)
+- [Triggers](/blockchain/triggers.md)
+- [Queries](/blockchain/queries.md)
+- [Query reference](/reference/queries.md)
diff --git a/src/blockchain/instructions.md b/src/blockchain/instructions.md
new file mode 100644
index 000000000..7b56d9446
--- /dev/null
+++ b/src/blockchain/instructions.md
@@ -0,0 +1,495 @@
+# Iroha Special Instructions
+
+When we spoke about [how Iroha operates](/blockchain/iroha-explained), we
+said that Iroha Special Instructions are the only way to modify the world
+state. So, what kind of special instructions do we have? If you've read the
+language-specific guides in this tutorial, you've already seen a couple of
+instructions: `Register` and `Mint`.
+
+Here is the full list of Iroha Special Instructions:
+
+| Instruction | Description |
+| --------------------------------------------------------- | ------------------------------------------------ |
+| [Register/Unregister](#un-register) | Give an ID to a new entity on the blockchain. |
+| [Mint/Burn](#mint-burn) | Mint/burn numeric assets or trigger repetitions. |
+| [SetKeyValue/RemoveKeyValue](#setkeyvalue-removekeyvalue) | Update blockchain object metadata. |
+| [SetParameter](#setparameter) | Set a chain-wide parameter. |
+| [Grant/Revoke](#grant-revoke) | Give or remove permissions and roles. |
+| [Transfer](#transfer) | Transfer ownership or asset value. |
+| [ExecuteTrigger](#executetrigger) | Execute triggers. |
+| [Log/Custom/Upgrade](#other-instructions) | Log, extend, or upgrade runtime behavior. |
+
+Let's start with a summary of Iroha Special Instructions: which objects
+each instruction can be called on, and which instructions are available
+for each object.
+
+## Summary
+
+For each instruction, there is a list of objects on which it can be run.
+For example, transfer variants cover ownable ledger objects
+and numeric assets, while minting covers numeric assets and trigger
+repetitions.
+
+Some instructions require a destination to be specified. For example, if
+you transfer assets, you always need to specify to which account you are
+transferring them. On the other hand, when you are registering something,
+all you need is the object that you want to register.
+
+| Instruction | Objects | Destination |
+| --------------------------------------------------------- | ------------------------------------------------------------------------------------------------------- | -------------------- |
+| [Register/Unregister](#un-register) | domains, accounts, asset definitions, NFTs, roles, triggers, peers | |
+| [Mint/Burn](#mint-burn) | numeric assets, trigger repetitions | accounts or triggers |
+| [SetKeyValue/RemoveKeyValue](#setkeyvalue-removekeyvalue) | objects that have [metadata](./metadata.md): domains, accounts, asset definitions, NFTs, RWAs, triggers | |
+| [SetParameter](#setparameter) | chain parameters | |
+| [Grant/Revoke](#grant-revoke) | [roles, permission tokens](/blockchain/permissions.md) | accounts or roles |
+| [Transfer](#transfer) | domains, asset definitions, numeric assets, NFTs | accounts |
+| [ExecuteTrigger](#executetrigger) | triggers | |
+| [Log/Custom/Upgrade](#other-instructions) | logs, executor-specific payloads, executor upgrades | |
+
+There is also another way of looking at ISI, in terms of the ledger object
+they touch:
+
+| Target | Instructions |
+| ---------------- | ------------------------------------------------------------------------------------------------------------ |
+| Account | register/unregister accounts, receive assets, update account metadata, grant/revoke permissions and roles |
+| Domain | register/unregister domains, transfer domain ownership, update domain metadata |
+| Asset definition | register/unregister definitions, transfer ownership, update metadata |
+| Asset | mint/burn numeric quantity, transfer numeric quantity |
+| NFT | register/unregister NFTs, transfer ownership, update metadata |
+| RWA | register lots, transfer quantity, hold/release, freeze/unfreeze, redeem, merge, update metadata and controls |
+| Trigger | register/unregister, mint/burn trigger repetitions, execute trigger, update trigger metadata |
+| World | register/unregister peers and roles, set parameters, upgrade the executor |
+
+## CLI Examples
+
+The examples in this page assume you are running commands from the upstream
+Iroha workspace against the default local client configuration:
+
+```bash
+cargo run --bin iroha -- --config ./defaults/client.toml
+```
+
+If you installed the `iroha` binary, use
+`iroha --config ./defaults/client.toml` instead. Replace the placeholders
+below with values from your network:
+
+```bash
+export ALICE=""
+export BOB=""
+export ASSET_DEF=""
+export PEER_KEY=""
+export PEER_POP=""
+```
+
+When targeting the public Taira testnet, use a Taira client configuration.
+Before running fee-paying examples, save the faucet helper from
+[Get Testnet XOR on Taira](/get-started/sora-nexus-dataspaces.md#_4-get-testnet-xor-on-taira)
+as `taira_faucet_claim.py`, then claim testnet XOR from the faucet:
+
+```bash
+export TAIRA_ACCOUNT_ID=""
+export TAIRA_FEE_ASSET="6TEAJqbb8oEPmLncoNiMRbLEK6tw"
+
+curl -fsS https://taira.sora.org/v1/accounts/faucet/puzzle | jq .
+python3 taira_faucet_claim.py "$TAIRA_ACCOUNT_ID"
+
+iroha --config ./taira.client.toml ledger asset get \
+ --definition "$TAIRA_FEE_ASSET" \
+ --account "$TAIRA_ACCOUNT_ID"
+```
+
+After the faucet-funded asset is visible, attach the required gas asset
+metadata to write transactions:
+
+```bash
+printf '{"gas_asset_id":"%s"}\n' "$TAIRA_FEE_ASSET" > taira.tx-metadata.json
+
+cargo run --bin iroha -- \
+ --config ./taira.client.toml \
+  --metadata ./taira.tx-metadata.json \
+  <ledger subcommand>
+```
+
+## (Un)Register
+
+Registering and unregistering are the instructions used to give an ID to a
+new entity on the blockchain.
+
+Everything that can be registered is both `Registrable` and `Identifiable`,
+but not everything that's `Identifiable` is `Registrable`. Most things are
+registered directly, but in some cases the representation in the blockchain
+has considerably more data. For security and performance reasons, we use
+builders for such data structures (e.g. `NewAccount`), and peer
+registration has a dedicated proof-of-possession instruction. Most things
+that can be registered can also be unregistered, but there are exceptions.
+
+You can register domains, accounts, asset definitions, NFTs, peers, roles,
+and triggers. Peer registration uses `RegisterPeerWithPop`, which carries a
+proof of possession for the peer key. Check our
+[naming conventions](/reference/naming.md) to learn about the restrictions
+put on entity names.
+
+RWA lots are created through the dedicated `RegisterRwa` instruction. The
+current code does not expose an `UnregisterRwa` instruction; use
+`RedeemRwa` to retire represented quantity.
+
+::: info
+
+Note that depending on how you decide to set up your
+[genesis block](/guide/configure/genesis.md) in `genesis.json`
+(specifically, whether or not you include registration of permission
+tokens), the process for registering an account can be very different. In
+general, we can summarise it like this:
+
+- In a _public_ blockchain, anyone should be able to register an account.
+- In a _private_ blockchain, there can be a unique process for registering
+ accounts. In a _typical_ private blockchain, i.e. a blockchain without
+ any unique processes for registering accounts, you need an account to
+ register another account.
+
+We discuss these differences in great detail when we
+[compare private and public blockchains](/guide/configure/modes.md).
+
+:::
+
+::: info
+
+Registering a peer is currently the only way of adding peers that were not
+part of the original trusted peer set to the network.
+
+:::
+
+Refer to one of the language-specific guides to walk you through the
+process of registering objects in a blockchain:
+
+| Language | Guide |
+| --------------------- | ------------------------------------------------------------------------------------------------------- |
+| CLI | Use the [Iroha CLI](/get-started/operate-iroha-2-via-cli.md) to register domains, accounts, and assets. |
+| Rust | Use the [Rust tutorial](/guide/tutorials/rust.md). |
+| Kotlin/Java | Use the [Kotlin/Java tutorial](/guide/tutorials/kotlin-java.md). |
+| Python | Use the [Python tutorial](/guide/tutorials/python.md). |
+| JavaScript/TypeScript | Use the [JavaScript/TypeScript tutorial](/guide/tutorials/javascript.md). |
+
+Register and unregister domains:
+
+```bash
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger domain register --id docs.universal
+
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger domain unregister --id docs.universal
+```
+
+Register and unregister accounts:
+
+```bash
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger account register --id "$BOB"
+
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger account unregister --id "$BOB"
+```
+
+Register and unregister asset definitions:
+
+```bash
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger asset definition register \
+ --id "$ASSET_DEF" \
+ --name docs_token \
+ --alias docs_token#docs.universal \
+ --scale 0
+
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger asset definition unregister --id "$ASSET_DEF"
+```
+
+Register and unregister NFTs. NFT registration reads its content JSON from
+standard input:
+
+```bash
+printf '{"kind":"badge","level":"intro"}\n' |
+ cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger nft register --id 'badge$docs.universal'
+
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger nft unregister --id 'badge$docs.universal'
+```
+
+Register and unregister roles:
+
+```bash
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger role register --id operators
+
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger role unregister --id operators
+```
+
+Register and unregister triggers. Trigger registration needs either
+compiled IVM bytecode or a serialized instruction list. This example builds
+a `Log` instruction with the CLI and pipes it into trigger registration:
+
+```bash
+cargo run --bin iroha -- --config ./defaults/client.toml -o \
+ ledger transaction ping --log-level INFO --msg "hourly cleanup" |
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger trigger register --id hourly_cleanup \
+ --instructions-stdin \
+ --filter time \
+ --time-start 5m \
+ --time-period-ms 3600000
+
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger trigger unregister --id hourly_cleanup
+```
+
+Register and unregister peers. Generate the BLS key and PoP with `kagami`
+if you do not already have them:
+
+```bash
+cargo run --bin kagami -- keys --algorithm bls_normal --pop --json
+
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger peer register --key "$PEER_KEY" --pop "$PEER_POP"
+
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger peer unregister --key "$PEER_KEY"
+```
+
+## Mint/Burn
+
+Minting and burning apply to numeric assets and to triggers with a limited
+number of repetitions. Some assets can be declared as non-mintable, meaning
+that they can be minted only once after registration.
+
+Assets are minted to a specific account, usually the one that registered
+the asset in the first place. Asset quantities are non-negative, so you can
+never have `$-1.0` of an asset or burn a negative amount and get a mint.
+
+Refer to one of the language-specific guides to walk you through the
+process of minting assets in a blockchain:
+
+- [CLI](/get-started/operate-iroha-2-via-cli.md)
+- [Rust](/guide/tutorials/rust.md)
+- [Kotlin/Java](/guide/tutorials/kotlin-java.md)
+- [Python](/guide/tutorials/python.md)
+- [JavaScript/TypeScript](/guide/tutorials/javascript.md)
+
+Here are examples of burning assets:
+
+- [CLI](/get-started/operate-iroha-2-via-cli.md)
+- [Rust](/guide/tutorials/rust.md)
+
+Mint and burn numeric assets:
+
+```bash
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger asset mint \
+ --definition "$ASSET_DEF" \
+ --account "$ALICE" \
+ --quantity 100
+
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger asset burn \
+ --definition "$ASSET_DEF" \
+ --account "$ALICE" \
+ --quantity 10
+```
+
+Mint and burn trigger repetitions:
+
+```bash
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger trigger mint --id hourly_cleanup --repetitions 5
+
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger trigger burn --id hourly_cleanup --repetitions 1
+```
+
+## Transfer
+
+Transfers move ownership or value between accounts. Generic transfer
+variants cover domains, asset definitions, numeric assets, and NFTs. RWA
+quantity movement uses the dedicated `TransferRwa` and `ForceTransferRwa`
+instructions described in [Real-World Assets](/blockchain/rwas.md).
+
+To do this, an account has to be granted the
+[permission to transfer assets](/reference/permissions.md). Refer to an
+example on how to transfer assets with
+[CLI](/get-started/operate-iroha-2-via-cli.md) or
+[Rust](/guide/tutorials/rust.md).
+
+Transfer numeric assets:
+
+```bash
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger asset transfer \
+ --definition "$ASSET_DEF" \
+ --account "$ALICE" \
+ --to "$BOB" \
+ --quantity 25
+```
+
+Transfer domain, asset-definition, and NFT ownership:
+
+```bash
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger domain transfer --id docs.universal --from "$ALICE" --to "$BOB"
+
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger asset definition transfer --id "$ASSET_DEF" --from "$ALICE" --to "$BOB"
+
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger nft transfer --id 'badge$docs.universal' --from "$ALICE" --to "$BOB"
+```
+
+## Grant/Revoke
+
+Grant and revoke instructions are used for account
+[permissions and roles](permissions.md).
+
+`Grant` is used to permanently grant a user either a single permission, or
+a group of permissions (a "role"). Granted roles and permissions can only
+be removed via the `Revoke` instruction. As such, these instructions should
+be used carefully.
+
+Grant and revoke a role on an account:
+
+```bash
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger account role grant --id "$BOB" --role operators
+
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger account role revoke --id "$BOB" --role operators
+```
+
+Grant and revoke permission tokens. Permission commands read a permission
+object from standard input:
+
+```bash
+printf '{"name":"CanSetParameters","payload":null}\n' |
+ cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger account permission grant --id "$BOB"
+
+printf '{"name":"CanSetParameters","payload":null}\n' |
+ cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger account permission revoke --id "$BOB"
+```
+
+Grant and revoke permissions on a role:
+
+```bash
+printf '{"name":"CanRegisterDomain","payload":null}\n' |
+ cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger role permission grant --id operators
+
+printf '{"name":"CanRegisterDomain","payload":null}\n' |
+ cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger role permission revoke --id operators
+```
+
+## `SetKeyValue`/`RemoveKeyValue`
+
+These instructions update object [metadata](/blockchain/metadata.md). Use
+`SetKeyValue` to insert or replace a metadata entry and `RemoveKeyValue` to
+delete one.
+
+Metadata `set` commands read the JSON value from standard input:
+
+```bash
+printf '"production"\n' |
+ cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger domain meta set --id docs.universal --key environment
+
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger domain meta remove --id docs.universal --key environment
+```
+
+The same pattern is available for accounts, asset definitions, NFTs, RWAs,
+and triggers:
+
+```bash
+printf '{"display_name":"Alice"}\n' |
+ cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger account meta set --id "$ALICE" --key profile
+
+printf '{"issuer":"docs"}\n' |
+ cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger asset definition meta set --id "$ASSET_DEF" --key issuer
+
+printf '{"color":"blue"}\n' |
+ cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger nft meta set --id 'badge$docs.universal' --key traits
+
+printf '{"owner":"ops"}\n' |
+ cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger trigger meta set --id hourly_cleanup --key owner
+```
+
+## `SetParameter`
+
+`SetParameter` changes chain-wide parameters exposed by the active data
+model and executor.
+
+Set a parameter by passing a single parameter JSON object on standard
+input:
+
+```bash
+printf '{"Sumeragi":{"BlockTimeMs":1000}}\n' |
+ cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger parameter set
+```
+
+## `ExecuteTrigger`
+
+This instruction is used to execute [triggers](./triggers.md).
+
+The CLI can register triggers and subscribe to trigger execution events
+directly. It does not provide a typed `execute trigger` command, so to
+submit a manual `ExecuteTrigger` instruction, generate a serialized
+`InstructionBox` with an SDK or executor tool and pass the resulting JSON
+array through `ledger transaction stdin`:
+
+```bash
+printf '[""]\n' |
+ cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger transaction stdin
+
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger events trigger-execute --timeout 30s
+```
+
+## Other instructions
+
+Iroha also exposes lower-level instructions for runtime and executor
+integration:
+
+- `Log`: emit a log entry during execution
+- `CustomInstruction`: carry executor-specific JSON payloads
+- `Upgrade`: activate an executor upgrade
+
+Submit a `Log` instruction with the ping helper:
+
+```bash
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger transaction ping --log-level INFO --msg "hello from docs"
+```
+
+Submit a custom executor instruction as a serialized `InstructionBox`. The
+payload shape is executor-specific, so generate the instruction with the
+matching SDK or executor tooling:
+
+```bash
+printf '[""]\n' |
+ cargo run --bin iroha -- --config ./defaults/client.toml \
+ ledger transaction stdin
+```
+
+Upgrade the executor from a compiled IVM bytecode file:
+
+```bash
+cargo run --bin iroha -- --config ./defaults/client.toml \
+ ops executor upgrade --path ./target/ivm/executor.ivm
+```
diff --git a/src/blockchain/iroha-explained.md b/src/blockchain/iroha-explained.md
new file mode 100644
index 000000000..c4aed0390
--- /dev/null
+++ b/src/blockchain/iroha-explained.md
@@ -0,0 +1,115 @@
+# Iroha Explained
+
+Iroha 3 is the Nexus-oriented track of the Hyperledger Iroha workspace. It
+shares the same core components as Iroha 2 but adds the Nexus execution
+model for data spaces and multi-lane routing.
+
+## Core Building Blocks
+
+- **`irohad`** runs peers
+- **Torii** is the client and operator gateway
+- **Sumeragi** handles consensus
+- **Norito** is the [canonical binary format](/reference/norito.md)
+- **IVM** runs portable smart contracts and bytecode
+- **Kagami** prepares keys, genesis, profiles, and localnets
+- **SORA Nexus service planes** add Soracloud, Inrou, SoraNet, SoraFS, and
+ SoraDNS for app hosting, privacy transport, storage, and naming
+
+## Execution Model
+
+Every change to world state still happens through transactions.
+Transactions carry instructions or IVM bytecode, and Torii is the main way
+clients submit them or observe their effects.
+
+What changes in Iroha 3 is the deployment shape:
+
+- Nexus-aware configurations can define multiple lanes
+- data spaces isolate workloads while staying part of the same ledger model
+- routing policy decides which lane and dataspace handle a class of work
+
+## Multi-Dataspace Architecture
+
+A dataspace is a routing and namespace boundary, not a separate blockchain.
+The runtime still has one `World`, one transaction model, and one consensus
+pipeline. Nexus adds catalogs that tell the node how to partition work
+across lanes and how to name the dataspaces those lanes serve.
+
+At runtime, a dataspace is represented by a numeric `DataSpaceId` and
+catalog metadata. `DataSpaceId::UNIVERSAL` is reserved as `0`; the default
+catalog contains the `universal` dataspace. Each configured dataspace has:
+
+- a unique numeric ID
+- a unique alias such as `universal`, `governance`, or `zk`
+- an optional description for operator surfaces
+- a non-zero `fault_tolerance` value used to size relay committees
+
+Lanes are the execution and storage routes bound to those dataspaces. A
+lane entry carries a `LaneId`, the `DataSpaceId` it serves, an alias,
+visibility (`public` or `restricted`), storage profile (`full_replica`,
+`commitment_only`, or `split_replica`), proof scheme, and optional
+governance, settlement, and scheduler metadata. The runtime derives
+per-lane storage geometry from this catalog, including Kura segment names
+and deterministic key prefixes.
+
+The routing path is:
+
+1. Configuration builds a validated `DataSpaceCatalog`, `LaneCatalog`, and
+ `LaneRoutingPolicy`. Multiple lanes, multiple dataspaces, or non-default
+ routing require `nexus.enabled = true`.
+2. The transaction queue asks the active lane router for a
+ `RoutingDecision` containing a lane ID and dataspace ID.
+3. Explicit routing rules can match by authority/account or by instruction
+ label. Without a matching rule, the router can derive the dataspace from
+ domain IDs, asset-definition projections, dataspace-scoped permissions,
+ settlement legs, or the authority's bound account scope.
+4. The resolved route is checked against both catalogs. Unknown lanes,
+ unknown dataspaces, and lane/dataspace mismatches are deterministic
+ routing errors. If a transaction writes to two different dataspace
+ targets, it is rejected as a conflicting route; cross-dataspace DVP/PVP
+ settlement is routed through the universal coordinator lane.
+5. Sumeragi and telemetry keep the assignment visible as lane and dataspace
+   activity, backlog, and commitment snapshots; a quick status check is
+   sketched after this list.
+
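+One way to observe the result of routing is the node's status telemetry.
+This is a read-only sketch against the public Taira node, reusing the
+`teu_lane_commit` field from the SORA Nexus smoke checks elsewhere in this
+set; exact fields depend on the node build:
+
+```bash
+curl -fsS https://taira.sora.org/status \
+  | jq '{blocks, lanes: (.teu_lane_commit | length)}'
+```
+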
+This is why object identifiers matter. Domains include the dataspace alias
+in their ID, for example `payments.universal`, so domain-scoped writes can
+be routed. Accounts remain canonical and domainless, so the same account
+can be bound into different application scopes without changing its
+`AccountId`. Asset definitions can carry a domain/dataspace projection,
+which lets asset operations inherit the correct dataspace route.
+
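+To see this convention in practice, list a few domain IDs on the public
+Taira node; each ID carries its dataspace alias as a suffix:
+
+```bash
+curl -fsS 'https://taira.sora.org/v1/domains?limit=5' \
+  | jq -r '.items[].id'
+```
+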
+Without Nexus overrides, the node uses a single lane and the `universal`
+dataspace. The bundled SORA profile replaces that with a three-lane
+catalog: `core` for the universal public lane, `governance` for governance
+traffic, and `zk` for zero-knowledge attachment and contract-deployment
+traffic.
+
+Those three defaults exist to separate workload classes:
+
+| Dataspace | Lane | Why it exists |
+| ------------ | ------------ | --------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `universal` | `core` | Reserved default dataspace (`DataSpaceId::UNIVERSAL == 0`) for ordinary public ledger traffic and fallback routing. |
+| `governance` | `governance` | Restricted lane for governance and parliament traffic, so control-plane activity is not mixed with general application writes. |
+| `zk` | `zk` | Restricted lane for zero-knowledge proofs, attachments, and contract deployment routing, keeping proof-heavy workflows separate from normal writes. |
+
+Only `universal` is the reserved baseline. `governance` and `zk` are SORA
+profile choices encoded in the bundled catalog and routing policy;
+operators can define a different catalog when they need different dataspace
+boundaries.
+
+## What Operators Notice First
+
+Compared with the older single-lane documentation set, operators will
+notice these changes most quickly:
+
+- richer status and telemetry endpoints
+- explicit genesis `consensus_mode` and PoP-aware topology
+- SORA Nexus profiles under `defaults/nexus/`
+- more CLI coverage for consensus and operator diagnostics
+
+## Read Next
+
+- [SORA Nexus services](/blockchain/sora-nexus-services.md)
+- [Launch Iroha 3](/get-started/launch-iroha-2.md)
+- [World, WSV, and Kura storage](/blockchain/world.md)
+- [Genesis reference](/reference/genesis.md)
+- [Torii endpoints](/reference/torii-endpoints.md)
diff --git a/src/blockchain/metadata.md b/src/blockchain/metadata.md
new file mode 100644
index 000000000..24bb7e6cb
--- /dev/null
+++ b/src/blockchain/metadata.md
@@ -0,0 +1,106 @@
+# Metadata
+
+Metadata is a checked key-value map attached to ledger objects. Keys are
+`Name` values and values are JSON (`Json`) payloads.
+
+The following objects can carry metadata:
+
+- domains
+- accounts
+- assets
+- asset definitions
+- NFTs
+- RWAs
+- triggers
+- transactions
+
+Use metadata for small descriptive or indexing fields that belong in ledger
+state. Large payloads should be stored outside the WSV and referenced by a
+digest, URI, or SoraFS path.
+
+For guidance on choosing metadata, assets, NFTs, RWAs, or off-chain
+storage, see
+[Metadata and Ledger Storage Choices](/guide/configure/metadata-and-store-assets.md).
+
+## Try It on Taira
+
+Metadata is visible through normal resource reads. This command lists Taira
+asset definitions that currently have metadata:
+
+```bash
+curl -fsS 'https://taira.sora.org/v1/assets/definitions?limit=100' \
+ | jq '.items[]
+ | select((.metadata | length) > 0)
+ | {id, name, metadata}'
+```
+
+Use the same pattern for domains and accounts:
+
+```bash
+curl -fsS 'https://taira.sora.org/v1/domains?limit=20' \
+ | jq '.items[] | select((.metadata // {} | length) > 0)'
+
+curl -fsS 'https://taira.sora.org/v1/accounts?limit=20' \
+ | jq '.items[] | select((.metadata // {} | length) > 0)'
+```
+
+Treat empty output as a valid result. It means the current page of Taira
+objects does not carry metadata, not that the endpoint failed.
+
+## Updating Metadata
+
+Metadata is changed with Iroha Special Instructions:
+
+- [`SetKeyValue`](/blockchain/instructions.md#setkeyvalue-removekeyvalue)
+ inserts or replaces a key
+- [`RemoveKeyValue`](/blockchain/instructions.md#setkeyvalue-removekeyvalue)
+ removes a key
+
+The authority submitting the transaction must have the permission required
+by the active runtime validator. For the default permission surface, see
+[Permission Tokens](/reference/permissions.md).
+
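+For a concrete shape, the bundled CLI exposes the NFT form of these
+instructions (covered in full in the [NFTs](/blockchain/nfts.md) chapter);
+other object types follow the same `SetKeyValue`/`RemoveKeyValue` pattern:
+
+```bash
+printf '{"color":"blue"}\n' |
+  cargo run --bin iroha -- --config "$IROHA_CONFIG" \
+    ledger nft meta set --id "$NFT_ID" --key traits
+
+cargo run --bin iroha -- --config "$IROHA_CONFIG" \
+  ledger nft meta remove --id "$NFT_ID" --key traits
+```
+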
+## Events
+
+Data events are emitted when metadata changes. The generic event payload is
+`MetadataChanged`:
+
+```mermaid
+classDiagram
+
+class MetadataChanged~Id~ {
+ target: Id
+ key: Name
+ value: Json
+}
+
+class AccountMetadataChanged
+class AssetMetadataChanged
+class AssetDefinitionMetadataChanged
+class DomainMetadataChanged
+
+MetadataChanged --> AccountMetadataChanged
+MetadataChanged --> AssetMetadataChanged
+MetadataChanged --> AssetDefinitionMetadataChanged
+MetadataChanged --> DomainMetadataChanged
+```
+
+Use [data event filters](/blockchain/filters.md#data-event-filters) to
+subscribe only to metadata events for the entity type or object ID that
+matters to an integration.
+
+## Queries
+
+Metadata is returned as part of the queried object. For example, use
+[`FindAccountById`](/reference/queries.md#accounts-and-permissions),
+[`FindDomainById`](/reference/queries.md#domains-and-peers), or
+[`FindAssetDefinitionById`](/reference/queries.md#assets-nfts-and-rwas).
+Use [`FindNfts`](/reference/queries.md#assets-nfts-and-rwas) or
+[`FindNftsByAccountId`](/reference/queries.md#assets-nfts-and-rwas) for
+NFTs, and [`FindRwas`](/reference/queries.md#assets-nfts-and-rwas) for RWA
+lots. Then read the object's metadata field. NFT query responses expose the
+NFT `content` map as the record metadata.
+
+Metadata keys are part of the ledger state, so keep them stable and avoid
+encoding application-specific version churn into the key name when a JSON
+value can carry that version explicitly.
diff --git a/src/blockchain/nfts.md b/src/blockchain/nfts.md
new file mode 100644
index 000000000..45fb80d01
--- /dev/null
+++ b/src/blockchain/nfts.md
@@ -0,0 +1,195 @@
+# NFTs
+
+An Iroha NFT is a unique ledger object with one owner. Use NFTs when a
+record needs its own identity, metadata, lifecycle events, and ownership
+transfer semantics, but does not need a numeric balance.
+
+Unlike a numeric [asset](/blockchain/assets.md), an NFT does not have
+precision, mintability, or per-account quantities. The NFT exists as one
+registered object, and ownership is tracked directly on that object.
+
+## Structure
+
+A registered `Nft` contains:
+
+- `id`: an `NftId`
+- `content`: metadata that describes the NFT
+- `owned_by`: the account that owns the NFT
+
+The `content` field is a `Metadata` map. Keep it compact: store descriptive
+fields, stable references, hashes, URIs, or SoraFS paths there. Store large
+documents, media, or high-churn application state off-chain and keep only a
+verifiable reference on the NFT.
+
+## Try It on Taira
+
+Check whether the public Taira testnet currently has NFT records:
+
+```bash
+curl -fsS 'https://taira.sora.org/v1/nfts?limit=5' \
+ | jq '{total, nft_ids: [.items[].id]}'
+```
+
+Check the live OpenAPI document for NFT routes exposed by the node:
+
+```bash
+curl -fsS https://taira.sora.org/openapi.json \
+ | jq -r '.paths | keys[] | select(startswith("/v1/nfts") or startswith("/v1/explorer/nfts"))'
+```
+
+An empty `items` array is a valid response on a public testnet. It means there
+are no NFTs in the current page, not that NFT instructions are unavailable.
+
+## NFT IDs
+
+`NftId` uses this text form:
+
+```text
+name$domain
+name$domain.dataspace
+```
+
+For example, `badge$docs.universal` identifies the `badge` NFT in the
+`docs.universal` domain. If the dataspace is omitted, the current parser
+uses the `universal` dataspace, so `badge$docs` resolves to
+`badge$docs.universal`.
+
+Use stable names for NFT IDs. The ID is the object identity used by
+instructions, queries, permissions, event filters, and application
+references.
+
+## Lifecycle
+
+NFT lifecycle operations use Iroha Special Instructions:
+
+- [`Register`](/blockchain/instructions.md#un-register) creates the NFT
+ with initial `content`.
+- [`Unregister`](/blockchain/instructions.md#un-register) removes the NFT.
+- [`Transfer`](/blockchain/instructions.md#transfer) changes `owned_by`.
+- [`SetKeyValue` and `RemoveKeyValue`](/blockchain/instructions.md#setkeyvalue-removekeyvalue)
+ update NFT metadata.
+
+## Try It Locally
+
+These examples assume you have launched a local network and have the
+generated client configuration from the
+[CLI guide](/get-started/operate-iroha-2-via-cli.md):
+
+```bash
+export IROHA_CONFIG=./localnet/client.toml
+export NFT_DOMAIN=nft_demo.universal
+export NFT_ID='badge_intro$nft_demo.universal'
+```
+
+Register a domain for the example. If it already exists, skip this command
+or choose a different `NFT_DOMAIN`.
+
+```bash
+cargo run --bin iroha -- --config "$IROHA_CONFIG" \
+ ledger domain register --id "$NFT_DOMAIN"
+```
+
+Register an NFT. Registration reads the initial content JSON from standard
+input:
+
+```bash
+printf '{"kind":"badge","level":"intro","issuer":"docs"}\n' |
+ cargo run --bin iroha -- --config "$IROHA_CONFIG" \
+ ledger nft register --id "$NFT_ID"
+```
+
+Inspect the NFT directly and then list all NFTs with full entries:
+
+```bash
+cargo run --bin iroha -- --config "$IROHA_CONFIG" \
+ ledger nft get --id "$NFT_ID"
+
+cargo run --bin iroha -- --config "$IROHA_CONFIG" \
+ ledger nft list all --verbose
+```
+
+Add a metadata key and read the NFT again:
+
+```bash
+printf '{"color":"blue","rarity":"tutorial"}\n' |
+ cargo run --bin iroha -- --config "$IROHA_CONFIG" \
+ ledger nft meta set --id "$NFT_ID" --key traits
+
+cargo run --bin iroha -- --config "$IROHA_CONFIG" \
+ ledger nft get --id "$NFT_ID"
+```
+
+Remove the metadata key:
+
+```bash
+cargo run --bin iroha -- --config "$IROHA_CONFIG" \
+ ledger nft meta remove --id "$NFT_ID" --key traits
+```
+
+Optionally transfer the NFT. Use `ledger nft get` to read the current owner
+from `owned_by`, and use `ledger account list all` to find a destination
+account ID.
+
+```bash
+cargo run --bin iroha -- --config "$IROHA_CONFIG" \
+ ledger account list all
+
+export CURRENT_OWNER='' # paste the owned_by value from `ledger nft get`
+export NEW_OWNER=''     # paste a destination account ID
+
+cargo run --bin iroha -- --config "$IROHA_CONFIG" \
+ ledger nft transfer --id "$NFT_ID" --from "$CURRENT_OWNER" --to "$NEW_OWNER"
+```
+
+Clean up when you are done. If you transferred the NFT, run this command
+with the current owner's account configuration or transfer the NFT back
+first.
+
+```bash
+cargo run --bin iroha -- --config "$IROHA_CONFIG" \
+ ledger nft unregister --id "$NFT_ID"
+```
+
+## Queries and Events
+
+Use [`FindNfts`](/reference/queries.md#assets-nfts-and-rwas) to list NFTs
+and [`FindNftsByAccountId`](/reference/queries.md#assets-nfts-and-rwas) to
+list NFTs owned by an account.
+
+NFT registration, deletion, transfer, and metadata updates emit NFT data
+events. Use the `Nft` data event filter when subscribing to ledger changes
+or building triggers that react to NFT lifecycle events.
+
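+For a quick owner-scoped view without an SDK, filter the JSON route
+client-side. This sketch assumes the list items expose the `owned_by`
+field described above:
+
+```bash
+export OWNER_ID='<account-id>'
+
+curl -fsS 'https://taira.sora.org/v1/nfts?limit=100' \
+  | jq --arg owner "$OWNER_ID" \
+       '.items[] | select(.owned_by == $owner) | .id'
+```
+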
+## Permissions
+
+The default permission surface includes NFT-specific tokens:
+
+- `CanRegisterNft`
+- `CanUnregisterNft`
+- `CanTransferNft`
+- `CanModifyNftMetadata`
+
+Permission checks are enforced by the active runtime validator, so a
+network can customize authorization by upgrading the executor. See
+[Permission Tokens](/reference/permissions.md) for the current default
+token list.
+
+## Choosing NFTs
+
+Use an NFT for records where uniqueness and ownership matter:
+
+- certificates, badges, licenses, and attestations
+- membership or access records
+- identity-bound or account-owned application records
+- references to off-chain media, documents, or manifests
+
+Use a numeric asset for fungible balances, and use plain
+[metadata](/blockchain/metadata.md) when the data is only a compact
+attribute of an existing ledger object.
+
+See also:
+
+- [Assets](/blockchain/assets.md)
+- [Metadata](/blockchain/metadata.md)
+- [Instructions](/blockchain/instructions.md)
+- [Queries](/blockchain/queries.md)
diff --git a/src/blockchain/permissions.md b/src/blockchain/permissions.md
new file mode 100644
index 000000000..cfb6f77e0
--- /dev/null
+++ b/src/blockchain/permissions.md
@@ -0,0 +1,134 @@
+# Permissions
+
+Accounts need permission tokens for various actions on a blockchain, e.g.
+to mint or burn assets.
+
+Public and private blockchains differ in how permissions are granted to
+users. In a public blockchain, most accounts share the same set of
+permissions. In a private blockchain, an account is assumed to be unable
+to perform an action unless it has been explicitly granted the relevant
+permission.
+
+Having permission to do something means holding a `PermissionToken` for
+it. There are two ways for users to receive permission tokens: they can be
+granted directly or as part of a [`Role`](#permission-groups-roles) (a
+set of permission tokens). Permissions are granted via the `Grant` special
+instruction. Permission tokens and roles do not expire; they can only be
+removed with the `Revoke` instruction.
+
+## Permission Tokens
+
+Permission tokens are typed objects defined by the active executor. Some
+tokens are global, such as `CanManagePeers`, and others are scoped to a
+specific ledger object, such as an account, asset, asset definition, domain,
+NFT, role, or trigger.
+
+Here are some examples of parameters used for various permission tokens:
+
+- A token that grants permission to modify metadata for a specific account
+ carries an `account` field:
+
+ ```json
+ {
+ "account": ""
+ }
+ ```
+
+- A token that grants permission to transfer assets for a specific asset
+ definition carries an `asset_definition` field:
+
+ ```json
+ {
+ "asset_definition": ""
+ }
+ ```
+
+- A global token such as `CanManagePeers` has no fields:
+
+ ```json
+ {}
+ ```
+
+### Pre-configured Permission Tokens
+
+You can find the list of pre-configured permission tokens in the [Reference](/reference/permissions.md) chapter.
+
+## Permission Groups (Roles)
+
+A set of permissions is called a **role**. Similarly to permission tokens,
+roles can be granted using the `Grant` instruction and revoked using the
+`Revoke` instruction.
+
+Before granting a role to an account, the role should be registered first.
+
+Roles are useful when several accounts should receive the same permission
+set. Register the role once, grant permissions to the role, and then grant or
+revoke the role for individual accounts.
+
+### Register a new role
+
+Let's register a new role that, when granted, will allow another account
+access to the [metadata](/blockchain/metadata.md) in Mouse's account:
+
+```rust
+let role_id = RoleId::from_str("ACCESS_TO_MOUSE_METADATA")?;
+let role = iroha_data_model::role::Role::new(role_id.clone(), mouse_id.clone())
+ .add_permission(CanModifyAccountMetadata {
+ account: mouse_id.clone(),
+ });
+let register_role = Register::role(role);
+```
+
+### Grant a role
+
+After the role is registered, Mouse can grant it to Alice:
+
+```rust
+let grant_role = Grant::account_role(role_id, alice_id);
+let grant_role_tx = TransactionBuilder::new(chain_id, mouse_id)
+ .with_instructions([grant_role])
+ .sign(mouse_private_key);
+```
+
+## Permission Validators
+
+Permissions exist so that only accounts with the required permission token
+can perform a protected action. The default executor checks permissions
+during instruction, query, and expression execution.
+
+The default validator surface is grouped by ledger area:
+
+- peer management
+- domains and accounts
+- assets, NFTs, and escrows
+- triggers
+- roles and permissions
+- executor/runtime, proofs, bridges, and SORA/Nexus modules
+
+The exact token list is source-backed in the
+[Permission Tokens reference](/reference/permissions.md).
+
+### Runtime Validators
+
+Permission checks are enforced by the active executor. The default
+executor provides the built-in permission validators and token definitions,
+and a network can change policy by upgrading the executor it uses.
+
+Validators return a **validation verdict**. A validator can allow an
+operation, deny it with a reason, or skip it if the operation is outside of
+that validator's scope. The selected judge combines those verdicts to
+decide whether the instruction, query, or expression may proceed.
+
+## Supported Queries
+
+Permission tokens and roles can be queried.
+
+Queries for roles:
+
+- [`FindRoles`](/reference/queries.md#accounts-and-permissions)
+- [`FindRoleIds`](/reference/queries.md#accounts-and-permissions)
+- [`FindRolesByAccountId`](/reference/queries.md#accounts-and-permissions)
+
+Queries for permission tokens:
+
+- [`FindPermissionsByAccountId`](/reference/queries.md#accounts-and-permissions)
diff --git a/src/blockchain/queries.md b/src/blockchain/queries.md
new file mode 100644
index 000000000..5a647e397
--- /dev/null
+++ b/src/blockchain/queries.md
@@ -0,0 +1,116 @@
+# Queries
+
+Although much of the information about the state of the blockchain can be
+obtained, as we've shown before, using an event subscriber and a filter to
+narrow the scope of the events to those of interest, sometimes you need to
+take a more direct approach. Enter _queries_.
+
+Queries are small instruction-like objects that, when sent to an Iroha
+peer, prompt a response with details from the current world state view.
+
+This is not necessarily the only kind of information that is available on
+the network, but it's the only kind of information that is _guaranteed_ to
+be accessible on all networks.
+
+For each deployment of Iroha, there might be other available information.
+For example, the availability of telemetry data is up to the network
+administrators. It's entirely their decision whether or not they want to
+allocate processing power to track the work instead of using it to do the
+actual work. By contrast, some functions are always required, e.g. having
+access to your account balance.
+
+The results of queries can be [sorted](#sorting), [paginated](#pagination),
+and [filtered](#filters) peer-side, all at once. Sorting is done
+lexicographically on metadata keys. Filtering supports a variety of
+predicates, from domain-specific ones (such as individual IP address
+filter masks) to sub-string methods like `begins_with`, combined with
+logical operations.
+
+## Try It on Taira
+
+Taira exposes read-only query helpers over JSON for common resources. Use them
+to practice pagination and response handling before wiring an SDK:
+
+```bash
+TAIRA_ROOT=https://taira.sora.org
+
+curl -fsS "$TAIRA_ROOT/v1/accounts?limit=3" \
+ | jq '{total, ids: [.items[].id]}'
+
+curl -fsS "$TAIRA_ROOT/v1/domains?limit=3" \
+ | jq '{total, domains: [.items[].id]}'
+
+curl -fsS "$TAIRA_ROOT/v1/assets/definitions?limit=3" \
+ | jq '{total, assets: [.items[] | {id, name, total_quantity}]}'
+```
+
+For app diagnostics, keep these smoke checks separate from signed transaction
+tests. A read-only query failure usually points to endpoint availability,
+network reachability, or route compatibility before it points to signer setup.
+
+## Create a query
+
+Use typed query builders from the SDK or CLI. For example, the current data
+model exposes `FindAccounts` for listing accounts:
+
+```rust
+let query = FindAccounts;
+```
+
+Here is an example of a query that finds Alice's assets:
+
+```rust
+let alice_id = load_canonical_account_id_from_client_config()?;
+let query = FindAssetsByAccountId::new(alice_id);
+```
+
+## Pagination
+
+For singular queries and small iterable queries, you can use `client.request`
+to submit a query and get the result in one go.
+
+However, broad iterable queries such as `FindAccounts`, `FindAssets`, or
+`FindBlocks` can return large result sets. Use pagination to reduce load on
+the peer and client.
+
+To paginate a query, call
+`client.request_with_pagination(query, pagination)`, where `pagination`
+is constructed as follows:
+
+```rust
+let starting_result: u32 = 0; // index of the first result to return
+let limit: u32 = 10; // maximum number of results in this page
+let pagination = Pagination::new(Some(starting_result), Some(limit));
+```
+
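+The Taira JSON helper routes support the same idea over HTTP. The `limit`
+parameter is used throughout these docs; the `offset` parameter in this
+sketch is an assumption mirroring the SDK's `offset` argument:
+
+```bash
+TAIRA_ROOT=https://taira.sora.org
+
+# first page
+curl -fsS "$TAIRA_ROOT/v1/accounts?limit=3" | jq -r '.items[].id'
+
+# next page, assuming an offset query parameter
+curl -fsS "$TAIRA_ROOT/v1/accounts?limit=3&offset=3" | jq -r '.items[].id'
+```
+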
+## Filters
+
+When you create a query, you can use a filter to only return the results
+that match the specified filter.
+
+Filters are query-specific. For example, account queries can be narrowed by
+account identity or metadata, while asset queries can be narrowed by asset
+definition, holder account, or domain projection. Use the SDK's typed query
+builders where possible so the filter type matches the query output type.
+
+## Sorting
+
+Iroha can sort items with [metadata](/blockchain/metadata.md)
+lexicographically if you provide a key to sort by during the construction
+of the query. A typical use case is for accounts to have a `registered-on`
+metadata entry, which, when sorted, allows you to view the account
+registration history.
+
+Sorting only applies to entities that have
+[metadata](/blockchain/metadata.md), as the metadata key is used to
+sort query results.
+
+You can combine sorting with pagination and filters. Note that sorting is
+an optional feature; most queries with pagination won't need it.
+
+## Reference
+
+Check the [list of existing queries](/reference/queries.md) for detailed information about them.
diff --git a/src/blockchain/rwas.md b/src/blockchain/rwas.md
new file mode 100644
index 000000000..b9371332c
--- /dev/null
+++ b/src/blockchain/rwas.md
@@ -0,0 +1,506 @@
+# Real-World Assets
+
+Real-world assets (RWAs) model off-chain assets whose ownership or control
+is tracked on-chain. In Iroha, an RWA is a registered ledger lot with a
+generated identifier, an owner account, a quantity, business metadata,
+provenance, and optional lifecycle controls.
+
+RWAs are different from numeric asset balances:
+
+- a numeric asset is a fungible balance held by an account
+- an NFT is a unique on-chain record with one owner
+- an RWA is a lot that can carry business metadata, quantity, holds,
+ freezes, redemption state, provenance, and controller policy
+
+Use RWAs when the ledger needs to represent a specific off-chain lot
+instead of only a fungible balance.
+
+## RWA Lot
+
+An RWA lot contains:
+
+- `id`: the generated canonical RWA identifier, displayed as
+  `<hex-id>$<domain>`
+- `owned_by`: the account that currently owns the lot
+- `quantity`: the outstanding quantity represented by the lot
+- `spec`: quantity specification, such as decimal scale
+- `primary_reference`: the main off-chain receipt, certificate, invoice, or
+ registry reference
+- `status`: optional business status text
+- `metadata`: compact JSON fields used for business context and indexing
+- `parents`: source lots used to derive this lot
+- `controls`: controller accounts, controller roles, and enabled controller
+ operations
+- `is_frozen` and `held_quantity`: lifecycle state enforced by the runtime
+
+Keep the on-chain payload compact. Store large legal documents, inspection
+reports, and audit bundles outside the WSV, then put a digest, URI, SoraFS
+path, or manifest reference in RWA metadata.
+
+## Identifiers
+
+`RegisterRwa` does not accept a caller-chosen `id`, and it does not accept
+an `owner` field. The transaction authority becomes the initial `owned_by`
+account, and the runtime generates the `RwaId` in the target domain.
+
+The textual form of an RWA ID is:
+
+```text
+<hex-id>$<domain>
+```
+
+For example:
+
+```text
+0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef$commodities.universal
+```
+
+Applications should store their business identifier in `primary_reference`
+or `metadata`, then discover the generated `RwaId` from
+`RwaEvent::Created`, `FindRwas`, `/v1/rwas`, or the explorer route set
+after the transaction commits.
+
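+For example, on an explorer-enabled node the generated ID can be matched
+back to a business reference client-side. This assumes explorer list items
+expose `primary_reference`, as in the Python projection shown later:
+
+```bash
+curl -fsS 'https://taira.sora.org/v1/explorer/rwas?limit=100' \
+  | jq -r '.items[]
+      | select(.primary_reference == "warehouse-receipt-001")
+      | .id'
+```
+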
+## Lifecycle
+
+Common RWA workflows include:
+
+| Operation | Implemented behavior |
+| ------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------- |
+| `RegisterRwa` | Create a generated-ID lot in a domain; the transaction authority becomes `owned_by`. |
+| `TransferRwa` | Move quantity to another account. A full transfer can change `owned_by`; a partial transfer creates a generated child lot. |
+| `HoldRwa` | Reserve quantity. Requires a configured controller and `hold_enabled`. |
+| `ReleaseRwa` | Remove held quantity. Requires a configured controller and `hold_enabled`. |
+| `FreezeRwa` | Block ordinary owner operations. Requires a configured controller and `freeze_enabled`. |
+| `UnfreezeRwa` | Re-enable ordinary owner operations. Requires a configured controller and `freeze_enabled`. |
+| `RedeemRwa` | Retire quantity. Requires the owner or a controller and `redeem_enabled`. |
+| `MergeRwas` | Combine quantities from parent lots with the same domain and spec into a generated child lot. |
+| `ForceTransferRwa` | Move quantity through a controller flow. Requires a configured controller and `force_transfer_enabled`. |
+| `SetRwaControls` | Replace the lot control policy. Requires the owner or a controller. |
+| `SetKeyValue` / `RemoveKeyValue` | Update lot metadata. Requires the owner or a controller; frozen lots require a controller. |
+
+There is no `UnregisterRwa` instruction in the current code. Retire an
+off-chain lot with `RedeemRwa` when the represented quantity is delivered,
+consumed, settled, or otherwise removed from circulation.
+
+## Metadata and Controls
+
+Use metadata for compact facts that help applications identify and verify
+the lot:
+
+- asset class, issuer, custodian, or registry reference
+- warehouse, vault, ISIN, invoice, or certificate identifiers
+- content hashes for attestations and legal documents
+- SoraFS paths or manifest references for larger evidence bundles
+- maturity, jurisdiction, or compliance tags used by off-chain services
+
+The implemented `RwaControlPolicy` has these fields:
+
+```json
+{
+ "controller_accounts": [],
+ "controller_roles": [],
+ "freeze_enabled": true,
+ "hold_enabled": true,
+ "force_transfer_enabled": false,
+ "redeem_enabled": true
+}
+```
+
+Controller accounts and roles are allowed to perform only the controller
+operations enabled by the corresponding boolean flag. The current control
+payload is not an allow-list transfer policy and does not contain nested
+`transfers` rules.
+
+## Queries, Events, and APIs
+
+Use [`FindRwas`](/reference/queries.md#assets-nfts-and-rwas) to list
+registered RWA lots. Applications that need live updates can subscribe to
+[`Rwa` data events](/blockchain/filters.md#data-event-filters) for created,
+owner-changed, split, merged, redeemed, frozen, unfrozen, held, released,
+force-transferred, controls-changed, and metadata events.
+
+Torii exposes chain-state routes such as `/v1/rwas` and `/v1/rwas/query`,
+plus explorer routes such as `/v1/explorer/rwas` and
+`/v1/explorer/rwas/{rwa_id}` when that route family is enabled. Generated
+clients should prefer the live
+[`/openapi`](/reference/torii-endpoints.md#common-endpoints) document for
+the exact response shape exposed by a node.
+
+### Try It on Taira
+
+Check whether public Taira currently has registered RWA lots:
+
+```bash
+curl -fsS 'https://taira.sora.org/v1/rwas?limit=5' \
+ | jq '{total, rwa_ids: [.items[].id]}'
+```
+
+List the RWA routes exposed by the live Taira OpenAPI document:
+
+```bash
+curl -fsS https://taira.sora.org/openapi.json \
+ | jq -r '.paths | keys[] | select(startswith("/v1/rwas") or startswith("/v1/explorer/rwas"))'
+```
+
+Empty `items` output is expected when no public lots have been registered yet.
+Registration, transfer, hold, freeze, and redemption are signed transactions.
+
+## Try It
+
+The examples below use the Python SDK surfaces from
+[Shared Setup](/guide/tutorials/python.md#shared-setup). Replace the
+account IDs, private keys, and generated lot IDs with values from your own
+network before submitting a transaction.
+
+### Discover RWA API Routes
+
+This read-only example asks a running Torii node which app-facing RWA
+routes are enabled:
+
+```python
+from iroha_python import create_torii_client
+
+client = create_torii_client("https://taira.sora.org")
+openapi = client.request_json("GET", "/openapi", expected_status=(200,))
+
+rwa_paths = sorted(
+ path for path in openapi.get("paths", {}) if path.startswith("/v1/rwas")
+)
+
+for path in rwa_paths:
+ print(path)
+```
+
+If the list is empty, the node may still support RWA instructions and
+queries through other Torii APIs, but it is not exposing the optional JSON
+route family.
+
+### Register a Warehouse Receipt
+
+Use a draft when one business action should become one signed transaction.
+The business receipt number goes in `primary_reference`; the ledger ID is
+generated after the transaction commits.
+
+```python
+from iroha_python import TransactionConfig, TransactionDraft
+
+config = TransactionConfig(
+ chain_id=CHAIN_ID,
+ authority=alice,
+ metadata={**TX_METADATA, "source": "rwa-docs"},
+)
+
+draft = TransactionDraft(config)
+draft.register_rwa(
+ {
+ "domain": "commodities.universal",
+ "quantity": "100",
+ "spec": {"scale": 0},
+ "primary_reference": "warehouse-receipt-001",
+ "status": "active",
+ "metadata": {
+ "asset_class": "commodity",
+ "commodity": "copper",
+ "warehouse": "DXB-01",
+ "inspection_report": "sorafs://reports/copper-001.json",
+ },
+ "parents": [],
+ "controls": {
+ "controller_accounts": [alice],
+ "controller_roles": [],
+ "freeze_enabled": True,
+ "hold_enabled": True,
+ "force_transfer_enabled": False,
+ "redeem_enabled": True,
+ },
+ }
+)
+
+envelope = draft.sign_with_keypair(alice_pair)
+client.submit_transaction_envelope_and_wait(envelope)
+```
+
+After the transaction commits, list generated RWA IDs. Chain-state routes
+expose the canonical IDs; use events or explorer detail routes when you
+need to match an ID back to `primary_reference` or metadata:
+
+```python
+page = client.list_rwas_typed(limit=20, offset=0)
+
+for lot in page.items:
+ print(lot.id)
+```
+
+Explorer-enabled nodes can also return richer projections:
+
+```python
+page = client.list_explorer_rwas_typed(domain="commodities.universal")
+
+for lot in page.items:
+ print(lot.id, lot.primary_reference, lot.owned_by, lot.quantity)
+```
+
+### Transfer With a Temporary Hold
+
+Use the generated RWA ID returned by the chain. This example assumes
+`alice` is the owner and is also configured as a controller with
+`hold_enabled`.
+
+```python
+warehouse_lot_id = (
+ "0123456789abcdef0123456789abcdef"
+ "0123456789abcdef0123456789abcdef$commodities.universal"
+)
+
+draft = TransactionDraft(
+ TransactionConfig(chain_id=CHAIN_ID, authority=alice, metadata=TX_METADATA)
+)
+
+draft.transfer_rwa(warehouse_lot_id, quantity="10", destination=bob)
+draft.hold_rwa(warehouse_lot_id, quantity="5")
+
+envelope = draft.sign_with_keypair(alice_pair)
+client.submit_transaction_envelope_and_wait(envelope)
+```
+
+Release the hold when the off-chain process is complete:
+
+```python
+draft = TransactionDraft(
+ TransactionConfig(chain_id=CHAIN_ID, authority=alice, metadata=TX_METADATA)
+)
+draft.release_rwa(warehouse_lot_id, quantity="5")
+
+envelope = draft.sign_with_keypair(alice_pair)
+client.submit_transaction_envelope_and_wait(envelope)
+```
+
+### Add Controls and Audit Metadata
+
+Controls and metadata are separate. Use controls for controller policy, and
+metadata for facts that applications or auditors need to display:
+
+```python
+draft = TransactionDraft(
+ TransactionConfig(chain_id=CHAIN_ID, authority=alice, metadata=TX_METADATA)
+)
+
+draft.set_rwa_controls(
+ warehouse_lot_id,
+ {
+ "controller_accounts": [alice],
+ "controller_roles": [],
+ "freeze_enabled": True,
+ "hold_enabled": True,
+ "force_transfer_enabled": True,
+ "redeem_enabled": True,
+ },
+)
+draft.set_rwa_key_value(warehouse_lot_id, "auditor", "alice")
+draft.set_rwa_key_value(
+ warehouse_lot_id,
+ "proof_hash",
+ "sha256:2b1c7a4e...",
+)
+
+envelope = draft.sign_with_keypair(alice_pair)
+client.submit_transaction_envelope_and_wait(envelope)
+```
+
+### Redeem or Retire Quantity
+
+Redeem quantity when the represented off-chain asset has been delivered,
+consumed, retired, or otherwise removed from circulation. The lot must have
+`redeem_enabled`, and the signer must be the owner or a controller.
+
+```python
+draft = TransactionDraft(
+ TransactionConfig(chain_id=CHAIN_ID, authority=alice, metadata=TX_METADATA)
+)
+draft.redeem_rwa(warehouse_lot_id, quantity="1")
+
+envelope = draft.sign_with_keypair(alice_pair)
+client.submit_transaction_envelope_and_wait(envelope)
+```
+
+### Freeze During Compliance Review
+
+Freeze a lot when an off-chain review must block ordinary owner operations.
+The signer must be a controller and the lot must have `freeze_enabled`.
+
+```python
+draft = TransactionDraft(
+ TransactionConfig(chain_id=CHAIN_ID, authority=alice, metadata=TX_METADATA)
+)
+draft.freeze_rwa(warehouse_lot_id)
+draft.set_rwa_key_value(
+ warehouse_lot_id,
+ "review",
+ {
+ "status": "frozen",
+ "reason": "custodian inventory check",
+ "case_id": "OPS-2026-0042",
+ },
+)
+
+envelope = draft.sign_with_keypair(alice_pair)
+client.submit_transaction_envelope_and_wait(envelope)
+```
+
+Unfreeze it when the review passes:
+
+```python
+draft = TransactionDraft(
+ TransactionConfig(chain_id=CHAIN_ID, authority=alice, metadata=TX_METADATA)
+)
+draft.unfreeze_rwa(warehouse_lot_id)
+draft.set_rwa_key_value(
+ warehouse_lot_id,
+ "review",
+ {"status": "cleared", "case_id": "OPS-2026-0042"},
+)
+
+envelope = draft.sign_with_keypair(alice_pair)
+client.submit_transaction_envelope_and_wait(envelope)
+```
+
+### Invoice Receivable
+
+Represent an invoice as an RWA lot by storing the invoice number in
+`primary_reference` and metadata. After registration, use the generated ID
+for transfer and redemption.
+
+```python
+draft = TransactionDraft(
+ TransactionConfig(chain_id=CHAIN_ID, authority=alice, metadata=TX_METADATA)
+)
+draft.register_rwa(
+ {
+ "domain": "receivables.universal",
+ "quantity": "50000",
+ "spec": {"scale": 2},
+ "primary_reference": "INV-2026-0007",
+ "status": "issued",
+ "metadata": {
+ "asset_class": "invoice",
+ "currency": "USD",
+ "debtor": "example-buyer",
+ "due_date": "2026-06-30",
+ "document_hash": "sha256:4df4c8...",
+ },
+ "parents": [],
+ "controls": {
+ "controller_accounts": [alice],
+ "controller_roles": [],
+ "freeze_enabled": True,
+ "hold_enabled": False,
+ "force_transfer_enabled": False,
+ "redeem_enabled": True,
+ },
+ }
+)
+
+envelope = draft.sign_with_keypair(alice_pair)
+client.submit_transaction_envelope_and_wait(envelope)
+```
+
+When the receivable is financed or paid, use the generated invoice lot ID:
+
+```python
+invoice_lot_id = (
+ "fedcba9876543210fedcba9876543210"
+ "fedcba9876543210fedcba9876543210$receivables.universal"
+)
+
+draft = TransactionDraft(
+ TransactionConfig(chain_id=CHAIN_ID, authority=alice, metadata=TX_METADATA)
+)
+draft.transfer_rwa(invoice_lot_id, quantity="50000", destination=bob)
+
+envelope = draft.sign_with_keypair(alice_pair)
+client.submit_transaction_envelope_and_wait(envelope)
+```
+
+Redeem the represented amount after off-chain settlement:
+
+```python
+draft = TransactionDraft(
+ TransactionConfig(chain_id=CHAIN_ID, authority=bob, metadata=TX_METADATA)
+)
+draft.redeem_rwa(invoice_lot_id, quantity="50000")
+
+envelope = draft.sign_with_keypair(bob_pair)
+client.submit_transaction_envelope_and_wait(envelope)
+```
+
+### Carbon Credit Retirement
+
+Use redemption to retire credits after they are claimed. The metadata
+points to the off-chain certificate or registry proof:
+
+```python
+carbon_lot_id = (
+ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
+ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa$carbon.universal"
+)
+
+draft = TransactionDraft(
+ TransactionConfig(chain_id=CHAIN_ID, authority=alice, metadata=TX_METADATA)
+)
+draft.redeem_rwa(carbon_lot_id, quantity="250")
+draft.set_rwa_key_value(
+ carbon_lot_id,
+ "retirement_certificate",
+ "sorafs://certificates/carbon-credit-2026-001-retired.json",
+)
+
+envelope = draft.sign_with_keypair(alice_pair)
+client.submit_transaction_envelope_and_wait(envelope)
+```
+
+### Merge Two Lots
+
+Merge lots when two off-chain positions are consolidated. The parents must
+be in the same domain and use the same quantity spec. The runtime generates
+the child lot ID.
+
+```python
+warehouse_lot_id_2 = (
+ "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
+ "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb$commodities.universal"
+)
+
+draft = TransactionDraft(
+ TransactionConfig(chain_id=CHAIN_ID, authority=alice, metadata=TX_METADATA)
+)
+draft.merge_rwas(
+ {
+ "parents": [
+ {"rwa": warehouse_lot_id, "quantity": "40"},
+ {"rwa": warehouse_lot_id_2, "quantity": "60"},
+ ],
+ "primary_reference": "warehouse-receipt-003",
+ "status": "merged",
+ "metadata": {
+ "asset_class": "commodity",
+ "commodity": "copper",
+ "warehouse": "DXB-01",
+ "merge_reason": "same custodian and quality grade",
+ },
+ }
+)
+
+envelope = draft.sign_with_keypair(alice_pair)
+client.submit_transaction_envelope_and_wait(envelope)
+```
+
+For the full Python transaction example, see
+[Real-World Assets](/guide/tutorials/python.md#real-world-assets).
+
+## Related Docs
+
+- [Assets](/blockchain/assets.md)
+- [Metadata](/blockchain/metadata.md)
+- [Iroha Special Instructions](/blockchain/instructions.md)
+- [Queries](/reference/queries.md#assets-nfts-and-rwas)
+- [Torii endpoints](/reference/torii-endpoints.md#app-and-sora-route-families)
diff --git a/src/blockchain/sora-nexus-services.md b/src/blockchain/sora-nexus-services.md
new file mode 100644
index 000000000..9ebd76461
--- /dev/null
+++ b/src/blockchain/sora-nexus-services.md
@@ -0,0 +1,957 @@
+# SORA Nexus Services
+
+SORA Nexus adds app-facing service planes around Iroha 3. These services
+are not separate ledgers. They are anchored by Iroha world state, Norito
+manifests, governance records, and Torii route families.
+
+Availability depends on the node build and network profile. Use
+[`/openapi`](/reference/torii-endpoints.md#app-and-sora-route-families) on
+the target node as the authoritative list of enabled routes.
+
+## Component Map
+
+| Component | Role | Main surfaces |
+| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------- |
+| Soracloud | Application deployment, hosted services, private model/runtime state, and service lifecycle control. | `/v1/soracloud/*`, `/api/*`, `iroha app soracloud ...` |
+| Inrou | Soracloud hosted HTTP runtime for service revisions that need a live HTTP plane. | Soracloud runtime config, host capability adverts, replica runtime state |
+| SoraNet | Privacy and transport overlay for circuits, relay traffic, VPN, Connect sessions, and streaming routes. | `/v1/connect/*`, `/v1/vpn/*`, SoraNet route metadata |
+| Data Availability (DA) | Availability evidence, commitment, and pin-intent layer for payloads that are referenced by Nexus lanes, SoraFS manifests, and proof flows. | `/v1/da/*`, `FindDaPinIntent*`, `[sumeragi.da]` |
+| SoraFS | Content-addressed storage fabric for manifests, CAR payloads, pinned content, gateway fetches, and proof-of-retrievability flows. | `/v1/sorafs/*`, `/sorafs/*`, `FindSorafsProviderOwner` |
+| SoraDNS | Deterministic naming and resolver-attestation layer for SORA-hosted services and content. | `/v1/soradns/*`, `/soradns/*`, resolver directory events |
+| Aitai | App-level fiat and asset settlement corridor backed by native escrow records, not by a separate ledger. | `OpenAssetEscrow`, `FindAssetEscrow*`, `EscrowEventFilter`, Kotodama `escrow_*` builtins |
+
+```mermaid
+flowchart LR
+ app["Application or user"] --> dns["SoraDNS name resolution"]
+ app --> aitai["Aitai escrow app"]
+ dns --> route["Soracloud route"]
+ dns --> content["SoraFS content gateway"]
+ route --> ivm["Deterministic IVM service"]
+ route --> inrou["Inrou hosted HTTP service"]
+ aitai --> escrow["Native escrow records"]
+ content --> da["DA pin intents and commitments"]
+ da --> storage["SoraFS providers"]
+ app --> net["SoraNet private route"]
+ net --> content
+ net --> route
+ ledger["Iroha world state and governance"] --> dns
+ ledger --> route
+ ledger --> content
+ ledger --> da
+ escrow --> ledger
+```
+
+## Common Flows
+
+### Hosted Split Application
+
+A typical mixed-plane app uses all of the pieces together:
+
+1. Static frontend assets are packaged and pinned through SoraFS.
+2. The public host, for example `<app-name>.sora`, is registered through
+   SoraDNS.
+3. Soracloud routes `/api/v1/search` or `/api/v1/stream` to an Inrou HTTP
+ service.
+4. Soracloud routes `/api/auth` and `/api/v1/user` to deterministic IVM
+ handlers.
+5. Clients that need privacy can reach the same content or API route
+ through a SoraNet circuit.
+
+| Path | Backing plane | Why |
+| ----------------- | --------------------- | ------------------------------------------------- |
+| `/` | SoraFS static content | Reproducible content root and gateway caching |
+| `/assets/*` | SoraFS static content | Content-addressed assets and manifest proofs |
+| `/api/auth*` | Soracloud IVM | Replay-safe auth and wallet challenge state |
+| `/api/v1/user*` | Soracloud IVM | Governance-sensitive state mutations |
+| `/api/v1/search*` | Soracloud Inrou | Live HTTP service, cache, SSE, or collector state |
+
+### Content Publication
+
+SoraFS publication produces durable artifacts before a name points at them:
+
+1. Build a payload or directory.
+2. Pack it into a CAR archive and chunk plan.
+3. Build a Norito manifest with pin policy and governance data.
+4. Submit the manifest to Torii.
+5. Record a DA pin intent or availability commitment when the target
+ profile requires explicit evidence.
+6. Bind the manifest to a SoraDNS name or Soracloud static frontend route.
+
+### Private Fetch or Streaming Route
+
+SoraNet can sit in front of SoraFS or Soracloud:
+
+1. The client resolves the name or manifest.
+2. A guard directory or route manifest chooses entry and exit relays.
+3. Traffic is padded and sent through the SoraNet circuit.
+4. The exit relay reaches the SoraFS gateway, Torii stream, or Soracloud
+ route.
+
+## Aitai
+
+Aitai is the SORA app corridor for marketplace-style settlement where a
+buyer and seller coordinate an off-chain payment while Iroha controls the
+on-chain asset custody. New numeric-asset custody flows should use the
+native escrow instruction family instead of a contract-owned escrow
+account.
+
+Native escrow keeps custody in the ledger:
+
+1. The seller opens an offer with `OpenAssetEscrow`, selecting an
+ `EscrowId`, asset definition, amount, and optional evidence hashes.
+2. Iroha moves the seller's numeric asset into a deterministic protocol
+ custody account and records an `AssetEscrowRecord`.
+3. The buyer accepts with `AcceptAssetEscrow` and marks the off-chain
+ payment as sent with `MarkEscrowPaymentSent`.
+4. The seller releases the funds with `ReleaseAssetEscrow`, cancels before
+ payment is marked with `CancelAssetEscrow`, or a party opens a dispute
+ with `OpenEscrowDispute`.
+5. A resolver with `CanResolveEscrowDispute` can close a disputed escrow
+ with `ResolveEscrowDispute`, splitting the locked amount between buyer
+ and seller.
+
+While an escrow is active, generic asset debits from the custody account
+are rejected. Release, cancellation, and dispute resolution are the
+intended custody exit paths. Evidence fields store hashes, not invoice
+files, chat logs, or other off-chain payloads; publish larger evidence
+bundles through SoraFS or another audited storage path and attach the
+digest to the escrow.
+
+| Aitai surface | Use it for |
+| ------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------- |
+| `OpenAssetEscrow`, `AcceptAssetEscrow`, `MarkEscrowPaymentSent`, `ReleaseAssetEscrow`, `CancelAssetEscrow` | Transparent numeric asset offers, including XOR-denominated settlement flows. |
+| `OpenAnonymousAssetEscrow`, `AcceptAnonymousAssetEscrow`, `MarkAnonymousEscrowPaymentSent`, `ReleaseAnonymousAssetEscrow`, `CancelAnonymousAssetEscrow` | Shielded offers where the funding and closing movements are carried by proof attachments. |
+| `OpenEscrowDispute`, `ResolveEscrowDispute`, `OpenAnonymousEscrowDispute`, `ResolveAnonymousEscrowDispute` | Dispute entry and court-style resolution. |
+| `FindAssetEscrowById`, `FindAssetEscrowsBySeller`, `FindAssetEscrowsByBuyer`, `FindAssetEscrowsByStatus` | App status pages, reconciliation jobs, and support tooling. |
+| `EscrowEventFilter` | Live lifecycle subscriptions by escrow id, seller, buyer, status, or event kind. |
+| Kotodama `escrow_open_offer`, `escrow_accept`, `escrow_mark_payment_sent`, `escrow_release`, `escrow_cancel`, `escrow_open_dispute`, `escrow_resolve_dispute` | Contract wrapper calls that still need IVM/Kotodama compatibility. |
+
+An SDK-backed transparent offer follows this shape:
+
+```rust
+use iroha::data_model::{
+ isi::escrow::{
+ AcceptAssetEscrow, MarkEscrowPaymentSent, OpenAssetEscrow,
+ ReleaseAssetEscrow,
+ },
+ prelude::*,
+};
+use iroha_crypto::Hash;
+
+let escrow_id = EscrowId::new(Hash::new("aitai-offer-001"));
+let asset_definition_id: AssetDefinitionId =
+ "62Fk4FPcMuLvW5QjDGNF2a4jAmjM".parse()?;
+
+seller_client.submit_blocking(OpenAssetEscrow::with_evidence_hashes(
+ escrow_id,
+ asset_definition_id,
+ Numeric::from(40_u64),
+ vec![Hash::new("fiat-invoice")],
+))?;
+
+buyer_client.submit_blocking(AcceptAssetEscrow::new(escrow_id))?;
+buyer_client.submit_blocking(MarkEscrowPaymentSent::new(escrow_id))?;
+seller_client.submit_blocking(ReleaseAssetEscrow::new(escrow_id))?;
+
+let record = seller_client.query_single(FindAssetEscrowById::new(escrow_id))?;
+assert_eq!(record.status, AssetEscrowStatus::Released);
+```
+
+For public Taira or Minamoto usage, treat the off-chain payment rail and
+any support or court workflow as application policy. Iroha records the
+custody state, lifecycle events, evidence hashes, and final asset movement;
+it does not verify fiat settlement by itself.
+
+## Check a Target Node
+
+Before using examples from this page, confirm that the route family exists
+on the node you are targeting:
+
+```bash
+export TORII_URL=https://taira.sora.org
+
+curl -fsS "$TORII_URL/openapi.json" \
+ | jq '.paths | keys[] | select(test("^/v1/(soracloud|sorafs|soradns|connect|vpn|da)/"))'
+
+curl -fsS "$TORII_URL/status" | jq .
+```
+
+If `/openapi.json` is not exposed by the profile, try `/openapi`. Exact
+route availability depends on build features and network configuration.
+
+### Taira Read-Only Smoke Checks
+
+The public Taira endpoint is useful for read-side checks, but do not use it
+for mutating examples unless you are operating an authorized account and
+intend to change live state.
+
+```bash
+export TORII_URL=https://taira.sora.org
+
+curl -fsS "$TORII_URL/status" \
+ | jq '{version: .build.version, peers, blocks, lanes: (.teu_lane_commit | length)}'
+
+curl -fsS "$TORII_URL/v1/connect/status" | jq '{enabled, sessions_active}'
+
+curl -fsS "$TORII_URL/v1/vpn/profile" \
+ | jq '{available, relay_endpoint, supported_exit_classes}'
+
+curl -fsS "$TORII_URL/v1/sorafs/storage/state" \
+ | jq '{bytes_capacity, bytes_used, pin_queue_depth, por_inflight}'
+
+curl -fsS -H 'Accept: application/json' "$TORII_URL/v1/soracloud/status" \
+ | jq '.control_plane | {service_count, services: [.services[] | {service_name, current_version}]}'
+```
+
+Taira may expose compatibility or control-plane routes that are not listed
+in the OpenAPI path map. Treat `/openapi` as the primary generated API
+contract, then confirm any compatibility route directly before documenting
+it as live.
+
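+A direct confirmation can be as simple as checking the HTTP status code of
+the route in question:
+
+```bash
+curl -fsS -o /dev/null -w '%{http_code}\n' "$TORII_URL/v1/soracloud/status"
+```
+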
+## Soracloud
+
+Soracloud is the SORA application control plane. It tracks deployment
+bundles, service revisions, routing, rollout state, authoritative config
+entries, encrypted service secrets, model registry records, private
+inference sessions, and runtime receipts.
+
+Soracloud uses two execution planes:
+
+| Execution plane | Runtime | Use it for |
+| ---------------------- | ------- | -------------------------------------------------------------------------------------------- |
+| `DeterministicService` | `Ivm` | Auth, vault state, certified reads, ordered mailbox handlers, governance-sensitive mutations |
+| `HttpService` | `Inrou` | Live HTTP APIs, collector-heavy work, cache-backed services, SSE, browser-assisted flows |
+
+The control plane is authoritative. Deploy, upgrade, rollback, config,
+secret, model, and status commands submit through Torii and read committed
+world state; they do not rely on a separate CLI-local mirror. Public
+routing is longest-prefix based, so one registered host can split traffic
+between hosted HTTP routes and deterministic API routes.
+
+### Scaffold a Split App
+
+The split-app template creates a static frontend plus one hosted live API
+and one deterministic vault/API service:
+
+```bash
+iroha app soracloud app init \
+ --template split-app \
+ --app-name solswap_indexer \
+ --app-version 0.1.0 \
+ --public-host solswap-indexer.sora \
+ --output-dir ./apps/solswap-indexer
+
+iroha app soracloud app local-plan \
+ --manifest ./apps/solswap-indexer/app_manifest.json
+
+iroha app soracloud app doctor \
+ --manifest ./apps/solswap-indexer/app_manifest.json
+```
+
+`local-plan` prints the route split, child service manifests, workspace
+script paths, and the expected frontend publication mode. `doctor`
+validates the local release contract before you involve Torii.
+
+### Deploy and Inspect App State
+
+```bash
+export SORACLOUD_TORII_URL=https://<torii-host>
+
+iroha app soracloud app deploy \
+ --manifest ./apps/solswap-indexer/app_manifest.json \
+ --torii-url "$SORACLOUD_TORII_URL"
+
+iroha app soracloud app status \
+ --manifest ./apps/solswap-indexer/app_manifest.json \
+ --torii-url "$SORACLOUD_TORII_URL"
+```
+
+For an already deployed service, use service-scoped commands:
+
+```bash
+iroha app soracloud status \
+ --service-name solswap_indexer_live \
+ --torii-url "$SORACLOUD_TORII_URL"
+
+iroha app soracloud rollback \
+ --service-name solswap_indexer_live \
+ --target-version 0.1.0 \
+ --torii-url "$SORACLOUD_TORII_URL"
+```
+
+### Config and Secret Material
+
+Soracloud config and secret entries are part of authoritative deployment
+state. Deploy, upgrade, and rollback fail closed when required config or
+secret bindings are missing or inconsistent with the active manifests.
+
+```bash
+iroha app soracloud config-set \
+ --service-name solswap_indexer_live \
+ --config-name indexer/public_config \
+ --value-file ./config/public-config.json \
+ --torii-url "$SORACLOUD_TORII_URL"
+
+iroha app soracloud secret-set \
+ --service-name solswap_indexer_live \
+ --secret-name indexer/api_key \
+ --secret-file ./secrets/api-key.envelope.json \
+ --torii-url "$SORACLOUD_TORII_URL"
+```
+
+Use the CLI help for the exact credential flags required by your profile:
+
+```bash
+iroha app soracloud config-set --help
+iroha app soracloud secret-set --help
+```
+
+## Inrou
+
+Inrou is the hosted HTTP runtime used by Soracloud. An Iroha node with the
+embedded Soracloud runtime projects admitted Soracloud state into a local
+materialization plan, starts assigned hosted-service replicas as loopback
+services, and reports replica runtime state back into the authoritative
+model.
+
+Use Inrou for workloads that need a live HTTP surface, such as
+collector-heavy APIs, SSE streams, cache-backed handlers, or
+browser-assisted services.
+
+### Runtime Requirements
+
+- Container manifest runtime must be `Inrou`.
+- Service manifest execution plane must be `HttpService`.
+- `HttpService + Inrou` requires exactly one `PersistentRootLeaseVolume`
+ mounted at `/`.
+- Replicated Inrou services also need shared service or confidential lease
+ storage when they retain mutable shared state.
+- Production hosting nodes should advertise real Inrou capacity instead of
+ operating only as a proxy.
+
+### Manifest Fragment
+
+The example below shows the shape of the two manifests. It is a fragment,
+not a complete deployment bundle.
+
+```jsonc
+// container_manifest.json
+{
+ "schema_version": 1,
+ "runtime": { "runtime": "Inrou", "value": null },
+ "bundle_path": "/bundles/solswap-indexer.inrou",
+ "entrypoint": "/app/bin/launch-indexer.sh",
+ "args": [],
+ "env": {
+ "RUST_LOG": "info",
+ },
+ "inrou": {
+ "schema_version": 1,
+ "guest_os": { "guest_os": "DebianSlim", "value": null },
+ "guest_images": {
+ "x86_64": {
+ "kernel_image_path": "/inrou/x86_64/vmlinux",
+ "rootfs_image_path": "/inrou/x86_64/rootfs.ext4",
+ "initrd_image_path": null,
+ },
+ "aarch64": {
+ "kernel_image_path": "/inrou/aarch64/vmlinux",
+ "rootfs_image_path": "/inrou/aarch64/rootfs.ext4",
+ "initrd_image_path": null,
+ },
+ },
+ },
+ "lifecycle": {
+ "start_grace_secs": 60,
+ "stop_grace_secs": 30,
+ "healthcheck_path": "/api/indexer/v1/health",
+ },
+}
+```
+
+```jsonc
+// service_manifest.json
+{
+ "schema_version": 1,
+ "service_name": "solswap_indexer_live",
+ "service_version": "0.1.0",
+ "execution_plane": { "execution_plane": "HttpService", "value": null },
+ "replicas": 2,
+ "route": {
+ "host": "solswap-indexer.sora",
+ "path_prefix": "/api/v1/search",
+ "service_port": 8080,
+ "visibility": { "visibility": "Public", "value": null },
+ "tls_mode": { "tls": "Required", "value": null },
+ },
+ "lease_volumes": [
+ {
+ "volume_name": "root_disk",
+ "kind": {
+ "lease_volume": "PersistentRootLeaseVolume",
+ "value": null,
+ },
+ "storage_class": { "storage_class": "Warm", "value": null },
+ "mount_path": "/",
+ "max_total_bytes": 8589934592,
+ },
+ {
+ "volume_name": "index_state",
+ "kind": { "lease_volume": "ServiceLeaseVolume", "value": null },
+ "storage_class": { "storage_class": "Warm", "value": null },
+ "mount_path": "/var/lib/solswap-indexer",
+ "max_total_bytes": 1073741824,
+ },
+ ],
+}
+```
+
+At runtime, each mounted lease volume is exposed through environment
+variables derived from the volume name:
+
+```text
+SORACLOUD_LEASE_VOLUME_ROOT_DISK_DIR
+SORACLOUD_LEASE_VOLUME_ROOT_DISK_MOUNT_PATH
+SORACLOUD_LEASE_VOLUME_INDEX_STATE_DIR
+SORACLOUD_LEASE_VOLUME_INDEX_STATE_MOUNT_PATH
+```
+
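+An entrypoint script can consume these variables directly. A minimal
+sketch of `launch-indexer.sh`; the `solswap-indexer` binary and its
+`--state-dir` flag are hypothetical:
+
+```bash
+#!/bin/sh
+set -eu
+
+# Fail fast if the expected lease volume was not mounted.
+STATE_DIR="${SORACLOUD_LEASE_VOLUME_INDEX_STATE_DIR:?index_state volume missing}"
+
+# Hypothetical service binary; replace with your real entrypoint.
+exec /app/bin/solswap-indexer --state-dir "$STATE_DIR"
+```
+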
+## SoraNet
+
+SoraNet is the privacy and transport overlay. It provides relay-based
+routes for traffic that should not connect directly to the target gateway
+or service. The transport design uses entry, middle, and exit relay roles,
+QUIC transport, a Noise-based hybrid handshake, capability negotiation,
+relay directory metadata, and fixed-size padded cells.
+
+In Nexus deployments, SoraNet can carry content fetches, gateway traffic,
+VPN or Connect sessions, and Norito streaming routes. Directory entries can
+mark relays that support `norito-stream`, which lets clients prefer routes
+suitable for Torii RPC or streaming traffic.
+
+### Streaming Configuration
+
+The Nexus profile enables SoraNet provisioning for streaming routes:
+
+```toml
+[streaming]
+feature_bits = 0b11
+
+[streaming.soranet]
+enabled = true
+exit_multiaddr = "/dns/torii/udp/9443/quic"
+padding_budget_ms = 25
+access_kind = "authenticated"
+provision_spool_dir = "./storage/streaming/soranet_routes"
+provision_spool_max_bytes = 0
+provision_window_segments = 4
+provision_queue_capacity = 256
+```
+
+Use `access_kind = "read-only"` for content routes that do not require
+viewer authentication. Use `authenticated` when the exit relay must enforce
+tickets or viewer identity before bridging to Torii or a hosted service.
+
+### SoraNet-Aware SoraFS Fetch
+
+The SoraFS fetch CLI can emit a local proxy manifest and spool SoraNet
+route metadata for browser extensions or SDK adapters:
+
+```bash
+sorafs_cli fetch \
+ --plan artifacts/payload_plan.json \
+ --manifest-id 7bb2...9d31 \
+ --provider name=alpha,provider-id=9f5c...73aa,base-url=https://gw-alpha.example.org/,stream-token="$(cat alpha.token)" \
+ --output artifacts/payload.bin \
+ --json-out artifacts/fetch_summary.json \
+ --local-proxy-manifest-out artifacts/proxy_manifest.json \
+ --local-proxy-mode bridge \
+ --local-proxy-norito-spool storage/streaming/soranet_routes \
+ --local-proxy-kaigi-spool storage/streaming/soranet_routes \
+ --local-proxy-kaigi-policy authenticated \
+ --max-peers=2 \
+ --retry-budget=4
+```
+
+The summary records provider reports, chunk receipts, local proxy metadata,
+and the effective route settings used for the fetch.
+
+## Data Availability (DA)
+
+DA is the availability-evidence layer for payloads that are too large, too
+privacy-sensitive, or too service-specific to place directly in world
+state. It records deterministic commitments and retrieval obligations so
+validators, gateways, and clients can agree on which bytes were promised,
+which policy applies, and which evidence has been observed.
+
+DA does not replace Kura or SoraFS:
+
+- Kura stores the finalized block stream and consensus recovery data.
+- SoraFS stores and serves content-addressed bytes, CAR payloads, and
+ manifests.
+- DA records commitments, proof policies, proof openings, and pin intents
+ that let those bytes be scheduled, audited, and linked back to ledger
+ state.
+
+Use DA when an application or Nexus lane needs a ledger-visible promise
+that off-chain data remains retrievable. Common examples include lane
+payload commitments for settlement flows, SoraFS pin intents for published
+content, proof bundles that must be retained for later verification, and
+application artifacts whose public state should be a digest rather than the
+full payload.
+
+### Lifecycle
+
+| Stage | What is recorded |
+| ---------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Intent | A ticket, manifest reference, alias, lane/epoch/sequence reference, retention policy, or replication target. |
+| Commitment | Digest material that binds the manifest, lane payload, proof bundle, or content root to the ledger-visible record. |
+| Evidence | Availability votes, proof openings, provider attestations, or other profile-specific evidence accepted by the target network. |
+| Query | Pin-intent lookups through `FindDaPinIntentByTicket`, `FindDaPinIntentByManifest`, `FindDaPinIntentByAlias`, or `FindDaPinIntentByLaneEpochSequence`. |
+
+A typical DA-backed publication flow is:
+
+1. Build or receive the payload outside the WSV, for example a SoraFS CAR
+ file or Nexus lane payload.
+2. Hash and describe the payload in a Norito manifest or route-specific
+ commitment record.
+3. Submit the manifest, pin intent, or commitment through `/v1/da/*` when
+ that route family is enabled, or through the network's signed
+ transaction path.
+4. Let validators or availability providers collect the evidence required
+ by the active proof policy.
+5. Query the resulting pin intent or commitment before promoting an alias,
+ settlement proof, or gateway route that depends on the payload.
+
+### Algorithmic Model
+
+DA turns a payload into a signed, replay-protected, block-indexed
+commitment. Every step below is deterministic, so validators and gateways
+can recompute the same digests from the same bytes.
+
+1. **Canonicalize the submitted payload.** Torii accepts an ingest request with
+ `(lane_id, epoch, sequence)`, payload bytes, compression metadata, chunk
+ size, erasure profile, retention policy, and submitter signature. The node
+ decompresses gzip, deflate, or Zstandard payloads when requested, then
+ verifies that the canonical byte length equals `total_size`.
+2. **Validate lane and chunk parameters.** The lane must exist in the Nexus
+ lane catalog. `chunk_size` must be a non-zero power of two, at least two
+ bytes, and no larger than the configured maximum. The erasure profile must
+ include data shards and at least two parity shards. The lane catalog selects
+ the proof scheme, either `merkle_sha256` or `kzg_bls12_381`.
+3. **Apply network policy.** The node enforces the configured replication and
+ retention baseline for the blob class. Public metadata must stay plaintext;
+ governance-only metadata is encrypted with the node's configured governance
+ metadata key before it is written into the manifest.
+4. **Chunk and commit.** The canonical payload is chunked with a fixed-size
+ profile derived from `chunk_size`. Torii computes the payload digest, the
+ proof-of-retrievability tree root, and per-chunk commitments. Data chunks
+ carry BLAKE3 commitments over their bytes.
+5. **Add erasure commitments.** Chunks are grouped into stripes of
+ `data_shards`. Missing cells in the final stripe are zero padded for parity
+ calculation. RS(16) parity creates row/global parity shards; optional
+ `row_parity_stripes` add column-style stripe parity across the matrix.
+ Parity shard commitments are BLAKE3 digests of little-endian `u16` symbols.
+6. **Build the manifest.** `DaManifestV1` records the lane, epoch, blob class,
+ codec, payload digest, chunk root, chunk size, erasure profile, retention
+ policy, rent quote, chunk commitments, optional IPA commitment, metadata,
+ and issue time. The storage ticket is deterministic: the node first hashes a
+ manifest template with an empty ticket, then writes that fingerprint back as
+ the final `storage_ticket`.
+7. **Reject replay conflicts.** The replay key is
+ `(lane_id, epoch, sequence, manifest_fingerprint)`. A duplicate with the
+ same fingerprint is idempotent. A stale sequence or the same sequence with a
+ different fingerprint is rejected.
+8. **Emit signed artifacts.** Torii computes a PDP commitment, signs a
+ `DaIngestReceipt`, builds a `DaCommitmentRecord`, and writes spool artifacts
+ for the manifest, PDP commitment, commitment record, commitment schedule,
+ pin intent, receipt file, and receipt log. The receipt cursor advances
+ monotonically per `(lane_id, epoch)`.
+
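+The replay rule in step 7 reduces to a comparison against the last
+accepted entry for the `(lane_id, epoch)` pair. A minimal sketch, assuming
+a lookup that returns the last accepted `(sequence, manifest_fingerprint)`
+pair (names are illustrative, not the actual crate API):
+
+```rust
+/// Outcome of the DA replay check described in step 7 (illustrative).
+enum ReplayDecision {
+    /// Same sequence and same fingerprint: accept idempotently.
+    Idempotent,
+    /// New, higher sequence: accept and advance the stored cursor.
+    Accept,
+    /// Stale sequence, or the same sequence with a different fingerprint.
+    Reject,
+}
+
+fn replay_check(
+    last_accepted: Option<(u64, [u8; 32])>, // (sequence, manifest_fingerprint)
+    sequence: u64,
+    fingerprint: [u8; 32],
+) -> ReplayDecision {
+    match last_accepted {
+        None => ReplayDecision::Accept,
+        Some((s, f)) if sequence == s && fingerprint == f => ReplayDecision::Idempotent,
+        Some((s, _)) if sequence > s => ReplayDecision::Accept,
+        _ => ReplayDecision::Reject,
+    }
+}
+```
+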
+Commitment records are what blocks carry. A record binds:
+
+- lane, epoch, and sequence
+- caller blob ID and canonical manifest hash
+- lane proof scheme
+- chunk root
+- optional KZG commitment for KZG lanes
+- PDP/proof digest
+- retention class and storage ticket
+- Torii DA acknowledgement signature
+
+Before a block embeds DA records, the block assembly path validates the bundle:
+
+- `(lane_id, epoch, sequence)` must be unique inside the bundle.
+- Manifest hashes must be non-zero and unique inside the bundle.
+- The commitment proof scheme must match the configured lane policy.
+- Merkle lanes reject KZG commitments; KZG lanes require a non-zero KZG
+ commitment.
+- Pin intents are canonicalized, sorted, and filtered by lane, manifest hash,
+ storage ticket, owner account, and alias-collision rules.
+
+The block header stores hashes for DA proof policies, commitments, and pin
+intents. For membership proofs, the commitment bundle also exposes a Merkle
+root whose leaves are hashes of canonical Norito-encoded
+`DaCommitmentRecord` values. Parent nodes hash the concatenation of left and
+right children; an odd leaf is promoted unchanged to the next layer.
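+
+A minimal sketch of that construction, assuming the leaves are already the
+32-byte hashes of the canonical Norito-encoded records and abstracting the
+digest behind a caller-supplied function:
+
+```rust
+type Digest = [u8; 32];
+
+/// Fold leaf hashes into a root: parents hash the concatenation of the
+/// left and right children; an odd leaf is promoted unchanged.
+fn merkle_root(
+    mut layer: Vec<Digest>,
+    hash_pair: impl Fn(&Digest, &Digest) -> Digest,
+) -> Option<Digest> {
+    if layer.is_empty() {
+        return None;
+    }
+    while layer.len() > 1 {
+        let mut next = Vec::with_capacity((layer.len() + 1) / 2);
+        for pair in layer.chunks(2) {
+            next.push(match pair {
+                [left, right] => hash_pair(left, right),
+                [odd] => *odd, // promoted unchanged to the next layer
+                _ => unreachable!(),
+            });
+        }
+        layer = next;
+    }
+    layer.pop()
+}
+```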
+
+### Proof Verification
+
+`/v1/da/commitments/prove` can produce a proof for one commitment in a block.
+The proof contains the commitment, block height, index in the bundle, bundle
+hash, bundle length, Merkle root, and sibling path. Verification checks:
+
+1. The proof bundle hash matches the block header's DA commitment hash.
+2. The proof block height matches the referenced block header.
+3. The index is in bounds and the commitment equals the bundle entry at that
+ index.
+4. The lane proof policy accepts the commitment.
+5. Folding the sibling path from the commitment leaf reconstructs the supplied
+ root.
+6. The reconstructed root equals the bundle root.
+
+This proves that a specific availability commitment was included in a specific
+block payload; it does not prove that every replica is currently online. Live
+retrievability is checked separately through SoraFS provider fetches, PDP/PoTR
+checks, or profile-specific availability evidence.
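+
+Check 5 is the standard index-driven fold over the sibling path. A minimal
+sketch, assuming one sibling digest per layer ordered from leaf to root,
+with the index's low bit selecting the node's side at each layer (assumed
+conventions, not the verified wire format):
+
+```rust
+type Digest = [u8; 32];
+
+fn fold_path(
+    leaf: Digest,
+    mut index: usize,
+    path: &[Digest],
+    hash_pair: impl Fn(&Digest, &Digest) -> Digest,
+) -> Digest {
+    let mut node = leaf;
+    for sibling in path {
+        node = if index % 2 == 0 {
+            hash_pair(&node, sibling) // current node is a left child
+        } else {
+            hash_pair(sibling, &node) // current node is a right child
+        };
+        index /= 2;
+    }
+    node
+}
+```
+
+The folded digest must then equal both the supplied root and the bundle
+root (checks 5 and 6).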
+
+### Consensus Interaction
+
+DA is coupled to Sumeragi through reliable broadcast (RBC), but it is not a
+second finality protocol. RBC disseminates and recovers proposal payloads:
+the proposer announces a session for `(height, view, payload_hash)`, peers
+exchange chunks, and `READY`/`DELIVER` signals track whether enough validators
+observed the same payload.
+
+With DA enabled, a peer considers the pending block payload available when
+either:
+
+- the local pending block bytes hash to the expected payload hash, or
+- RBC has recovered a payload matching the block hash, height, view, and
+ payload hash.
+
+If neither condition holds, the peer records `missing_local_data`, keeps trying
+to recover the payload through RBC or block sync, and reports the DA gate in
+status and telemetry. In the current implementation these DA signals are
+advisory for finality: a block still finalizes from the commit certificate plus
+the matching local payload, not from a separate DA quorum certificate.
+
+DA timing widens recovery windows. The effective DA quorum timeout is derived
+from the configured block and commit timings, then multiplied by
+`sumeragi.advanced.da.quorum_timeout_multiplier`. The availability timeout is
+`max(quorum_timeout, availability_timeout_floor_ms) * availability_timeout_multiplier`.
+Before that availability timeout expires, the node favors payload recovery and
+avoids premature rescheduling; after it expires, normal recovery and
+view-change paths can proceed.
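+
+As a worked sketch of that arithmetic (field names are illustrative and
+the multipliers are shown as integers; the real values come from the
+configured block/commit timings and `[sumeragi.advanced.da]`):
+
+```rust
+use std::time::Duration;
+
+struct DaTimeouts {
+    quorum: Duration,
+    availability: Duration,
+}
+
+fn da_timeouts(
+    base_quorum: Duration,        // derived from block and commit timings
+    quorum_multiplier: u32,       // quorum_timeout_multiplier
+    availability_floor: Duration, // availability_timeout_floor_ms
+    availability_multiplier: u32, // availability_timeout_multiplier
+) -> DaTimeouts {
+    let quorum = base_quorum * quorum_multiplier;
+    let availability = quorum.max(availability_floor) * availability_multiplier;
+    DaTimeouts { quorum, availability }
+}
+```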
+
+### Operator Notes
+
+Consensus profiles that enable DA add RBC-backed payload dissemination,
+manifest guards, DA bundle validation, and recovery telemetry. The peer
+template exposes `[sumeragi.da]` limits for commitments and proof openings per
+block, plus `[sumeragi.advanced.da]` timeout multipliers for quorum and
+availability behavior. Keep these settings consistent across validators in one
+network profile.
+
+For route discovery, start with the node's OpenAPI document:
+
+```bash
+curl -fsS "$TORII_URL/openapi.json" \
+ | jq '.paths | keys[] | select(startswith("/v1/da/"))'
+```
+
+Use the
+[query reference](/reference/queries.md#nexus-data-availability-and-packages)
+for the current DA query names, and the
+[peer configuration template](/reference/peer-config/) for the local
+`[sumeragi.da]` knobs exposed by your build.
+
+## SoraFS
+
+SoraFS is the decentralized content-addressed storage fabric. It packages
+bytes into deterministic chunks, CAR archives, and Norito manifests that
+bind content roots, chunking profiles, pin policies, and governance
+attestations. Storage providers advertise capacity and content
+availability, while gateways verify manifests and chunk commitments before
+serving content.
+
+Typical SoraFS uses include static application assets, documentation
+builds, zone bundles, model or artifact references, and governance evidence
+bundles. The Iroha data model exposes SoraFS gateway events and a
+[`FindSorafsProviderOwner`](/reference/queries.md#nexus-data-availability-and-packages)
+query for provider ownership resolution.
+
+### Pack, Manifest, Sign, and Submit
+
+```bash
+cargo run -p sorafs_car --features cli --bin sorafs_cli -- \
+ car pack \
+ --input ./dist \
+ --car-out artifacts/site.car \
+ --plan-out artifacts/site.chunk-plan.json \
+ --summary-out artifacts/site.car-summary.json
+
+cargo run -p sorafs_car --features cli --bin sorafs_cli -- \
+ manifest build \
+ --summary artifacts/site.car-summary.json \
+ --manifest-out artifacts/site.manifest.to \
+ --manifest-json-out artifacts/site.manifest.json \
+ --pin-min-replicas=3 \
+ --pin-storage-class=warm \
+ --pin-retention-epoch=42
+
+SIGSTORE_ID_TOKEN=$(oidc-client fetch-token) \
+cargo run -p sorafs_car --features cli --bin sorafs_cli -- \
+ manifest sign \
+ --manifest artifacts/site.manifest.to \
+ --bundle-out artifacts/site.manifest.bundle.json \
+ --signature-out artifacts/site.manifest.sig
+
+cargo run -p sorafs_car --features cli --bin sorafs_cli -- \
+ manifest submit \
+ --manifest artifacts/site.manifest.to \
+ --chunk-plan artifacts/site.chunk-plan.json \
+ --torii-url "$TORII_URL" \
+ --resolve-submitted-epoch=true \
+  --authority=<account-id> \
+ --private-key-file ./secrets/authority.ed25519 \
+ --summary-out artifacts/site.manifest.submit.json \
+ --response-out artifacts/site.manifest.submit.body
+```
+
+If `/v1/sorafs/pin/register` is not routed on the target node, the CLI can
+fall back to a signed `/transaction` submission and wait for a terminal
+pipeline status.
+
+### Verify and Fetch
+
+```bash
+cargo run -p sorafs_car --features cli --bin sorafs_cli -- \
+ proof verify \
+ --manifest artifacts/site.manifest.to \
+ --car artifacts/site.car \
+ --chunk-plan artifacts/site.chunk-plan.json \
+ --summary-out artifacts/site.verify.json
+
+sorafs_cli fetch \
+ --plan artifacts/site.chunk-plan.json \
+  --manifest-id <manifest-id> \
+  --provider name=primary,provider-id=<provider-id>,base-url=https://gateway.example.org/,stream-token="$(cat provider.token)" \
+ --output artifacts/site.fetch.tar \
+ --json-out artifacts/site.fetch.json
+```
+
+### Proof-of-Retrievability Checks
+
+Operators can inspect and trigger proof checks for storage providers:
+
+```bash
+sorafs_cli por status \
+ --torii-url "$TORII_URL" \
+  --manifest <manifest-id> \
+ --status=failed \
+ --limit=20
+
+sorafs_cli por trigger \
+ --torii-url "$TORII_URL" \
+  --manifest <manifest-id> \
+  --provider <provider-id> \
+ --reason=latency_probe \
+ --samples=48 \
+ --auth-token artifacts/challenge_token.to
+```
+
+## SoraDNS
+
+SoraDNS is the deterministic naming layer for SORA services and content. It
+normalizes names, anchors resolver directory updates in Iroha, and
+distributes signed zone or resolver bundles through SoraFS. Resolvers and
+gateways verify resolver attestation documents before trusting discovery
+metadata.
+
+For browser access, SoraDNS derives gateway hosts from a registered FQDN.
+The registered vanity host remains the canonical application origin, while
+gateway profiles can expose compatibility hosts for clients that cannot
+resolve SoraDNS names directly yet.
+
+### Host Forms
+
+| Form | Example | Purpose |
+| ---------------------- | ---------------------------------------------- | ------------------------------------------------------------------------------------------- |
+| Vanity origin | `https://<fqdn>/` | Canonical app URL recorded in manifests and release notes |
+| Taira browser gateway | `https://<name>.mon.taira.sora.net/` | Public browser access when the alias is active and native SoraDNS resolution is unavailable |
+| Torii fallback path | `https://taira.sora.org/soradns/<fqdn>/` | Transitional compatibility gateway when the alias is active |
+| Canonical hash gateway | `<hash>.gw.sora.id` | Deterministic gateway identity and GAR verification |
+
+The `/soradns/<fqdn>/...` fallback is not the preferred public URL.
+Tooling, app manifests, and frontend configuration should prefer the vanity
+host itself. If an alias is not active on Taira, the browser gateway or
+fallback path can return `404` or fail TLS before application routing
+starts.
+
+### Derive Gateway Hosts
+
+```ts
+import {
+ deriveSoradnsGatewayHosts,
+ hostPatternsCoverDerivedHosts,
+} from '@iroha/iroha-js'
+
+const derived = deriveSoradnsGatewayHosts('docs.sora')
+console.log(derived.canonicalHost)
+console.log(derived.prettyHost)
+
+const taira = deriveSoradnsGatewayHosts('solswap-indexer.sora', {
+ prettySuffix: 'mon.taira.sora.net',
+})
+console.log(taira.prettyHost)
+
+const patterns = [
+ derived.canonicalHost,
+ derived.canonicalWildcard,
+ derived.prettyHost,
+]
+console.log(hostPatternsCoverDerivedHosts(patterns, derived))
+```
+
+GAR payloads should cover the canonical hash host, the canonical wildcard,
+and the selected pretty host.
+
+### Fetch a Resolver Directory Snapshot
+
+```bash
+curl -i "$TORII_URL/v1/soradns/directory/latest"
+
+soradns_resolver directory fetch \
+ --record-url "$TORII_URL/v1/soradns/directory/latest" \
+ --directory-url https://gateway.example.org/soradns/directory/latest.car \
+ --output ./state/soradns-directory
+
+soradns_resolver rad verify \
+ --rad ./state/soradns-directory/rad/resolver-a.norito
+```
+
+Gateways should reject resolvers whose resolver attestation document is
+missing, expired, unsigned, or not anchored in the latest directory Merkle
+root. On a network where no resolver directory has been published yet,
+`/v1/soradns/directory/latest` can return `404` even though the route is
+enabled.
+
+### Public DNS Delegation
+
+SoraDNS host derivation does not replace regular internet DNS delegation.
+If a public DNS name should point at a SoraDNS gateway:
+
+- for subdomains, publish a CNAME to the selected pretty host
+- for apex names, use ALIAS/ANAME or A/AAAA records to the gateway anycast
+ IPs
+- keep the canonical hash host under the SoraDNS gateway domain for GAR
+ checks
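+
+As an illustration with a hypothetical public name (all hostnames and
+addresses below are placeholders):
+
+```text
+; subdomain: CNAME to the selected pretty host
+docs.example.org.  3600  IN  CNAME  docs.mon.taira.sora.net.
+
+; apex: A/AAAA records to the gateway anycast IPs
+example.org.       3600  IN  A      192.0.2.10
+example.org.       3600  IN  AAAA   2001:db8::10
+```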
+
+## FHE and UAID
+
+Iroha exposes two FHE-related surfaces for Nexus services:
+
+- `iroha_crypto::fhe_bfv` implements deterministic BFV support for scalar
+ ciphertext evaluation. Identifier resolution uses
+ `BfvIdentifierPublicParameters` and `BfvIdentifierCiphertext`, where slot
+ 0 stores the input byte length and later slots store one encrypted byte
+ each.
+- Soracloud state and job schemas model FHE ciphertext workloads with
+ governance-managed parameter sets, execution policies, ciphertext
+ commitments, query envelopes, and disclosure requests.
+
+The BFV identifier path is used for privacy-preserving enrollment. A client
+can submit an encrypted identifier to the Torii resolver. The resolver
+evaluates it under the active identifier policy, derives an
+`OpaqueAccountId`, and emits a receipt. `ClaimIdentifier` then binds that
+receipt to the UAID attached to the target account.
+
+The UAID is the identity and capability anchor around that flow. In the
+data model, `UniversalAccountId` is hash-backed and displays as
+`uaid:<hash>`. Parsers accept either the `uaid:<hash>` form or the raw
+64-hex digest. `Account` and `NewAccount` include optional `uaid` and `opaque_ids`
+fields. Runtime registration enforces a one-to-one UAID-to-account index,
+rejects duplicate or colliding opaque identifiers, and rejects opaque
+identifiers without a UAID. Whenever a UAID account binding changes, the
+runtime rebuilds Space Directory dataspace bindings for that UAID.
+
+Space Directory manifests attach capabilities to a UAID. An
+`AssetPermissionManifest` names the UAID, dataspace, activation and
+optional expiry epoch, and ordered allow/deny entries scoped by dataspace,
+program, method, asset, and AMX role. Evaluation is deny-wins: the first
+matching deny rejects the request, otherwise the latest matching allow
+candidate is checked against any amount limit. Publishing, expiring, and
+revoking these manifests is guarded by `CanPublishSpaceDirectoryManifest`.
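+
+A minimal sketch of that deny-wins evaluation, over entries already
+filtered to those whose scope matches the request (types and names are
+illustrative, not the runtime's actual API):
+
+```rust
+/// Effect of one manifest entry whose scope (dataspace, program, method,
+/// asset, AMX role) matched the request.
+enum Effect {
+    Allow { amount_limit: Option<u128> },
+    Deny,
+}
+
+/// The first matching deny rejects; otherwise the latest matching allow
+/// wins and its optional amount limit is checked.
+fn evaluate(matched: &[Effect], amount: u128) -> bool {
+    let mut latest_allow: Option<Option<u128>> = None;
+    for effect in matched {
+        match effect {
+            Effect::Deny => return false,
+            Effect::Allow { amount_limit } => latest_allow = Some(*amount_limit),
+        }
+    }
+    match latest_allow {
+        Some(Some(limit)) => amount <= limit,
+        Some(None) => true, // allow without an amount limit
+        None => false,      // no matching allow candidate
+    }
+}
+```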
+
+For Soracloud FHE state, the implemented schemas are:
+
+| Schema | What it controls |
+| ----------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- |
+| `SoraStateBindingV1` with `FheCiphertext` | Declares that values under a state key prefix are FHE ciphertexts. |
+| `FheParamSetV1` | Names the scheme, backend, modulus chain, polynomial degree, slot count, security target, lifecycle, and parameter digest. |
+| `FheExecutionPolicyV1` | Bounds ciphertext size, plaintext size, input/output count, multiplication depth, rotations, bootstraps, and rounding mode. |
+| `FheGovernanceBundleV1` | Couples one parameter set with one execution policy for admission validation. |
+| `FheJobSpecV1` | Describes deterministic `Add`, `Multiply`, `RotateLeft`, or `Bootstrap` work over ciphertext state keys and commitments. |
+| `CiphertextQuerySpecV1` | Queries ciphertext-only state by service, binding, key prefix, result limit, metadata level, and optional inclusion proof. |
+| `DecryptionRequestV1` | Requests disclosure for one ciphertext commitment under a decryption-authority policy. |
+
+`FheJobSpecV1::validate_for_execution` checks that the job, execution
+policy, and parameter set agree before admission. It also enforces
+operation-specific rules: add and multiply need at least two inputs, rotate
+and bootstrap need exactly one input, and requested depth, rotation count,
+bootstrap count, input count, payload bytes, and deterministic output size
+must stay within policy bounds. Ciphertext query results must not return
+plaintext rows.
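+
+A sketch of the input-arity portion of those rules (a simplification of
+`validate_for_execution`; the enum name is illustrative):
+
+```rust
+enum FheOp {
+    Add,
+    Multiply,
+    RotateLeft,
+    Bootstrap,
+}
+
+/// Add and multiply need at least two inputs; rotate and bootstrap need
+/// exactly one. The real check also bounds depth, rotations, bootstraps,
+/// payload bytes, and deterministic output size against the policy.
+fn input_arity_ok(op: &FheOp, inputs: usize) -> bool {
+    match op {
+        FheOp::Add | FheOp::Multiply => inputs >= 2,
+        FheOp::RotateLeft | FheOp::Bootstrap => inputs == 1,
+    }
+}
+```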
+
+UAID is not the ciphertext and not the FHE policy itself. It is the stable
+account capability anchor used to find the account, opaque identifier
+claims, and Space Directory bindings that authorize a service or dataspace
+flow. FHE schemas govern encrypted payload admission and execution
+separately through parameter sets, execution policies, ciphertext
+commitments, and decryption authority policies.
+
+Relevant Torii surfaces include:
+
+- `/v1/identifier-policies`
+- `/v1/identifiers/resolve`
+- `/v1/accounts/{account_id}/identifiers/claim-receipt`
+- `/v1/identifiers/receipts/{receipt_hash}`
+- `/v1/accounts/{uaid}/portfolio`
+- `/v1/space-directory/uaids/{uaid}`
+- `/v1/space-directory/uaids/{uaid}/manifests`
+- `/v1/soracloud/model/run-private`
+- `/v1/soracloud/model/run-private/finalize`
+- `/v1/soracloud/model/decrypt-output`
+
+The public metadata boundary is explicit in the schemas: UAID bindings,
+opaque identifier records, manifest lifecycle, state-key digests,
+ciphertext sizes, ciphertext commitments, policy names, parameter-set
+versions, job operations, output state keys, and disclosure request
+metadata can be visible. Identifier plaintexts, decrypted state, model
+inputs and outputs, and FHE secret keys are outside these public query
+records.
+
+## Operational Checklist
+
+- Confirm enabled service families with `/openapi.json` on the target Torii
+ node.
+- Treat Soracloud deployment manifests, SoraFS manifests, SoraDNS resolver
+ directory records, SoraNet relay directory records, and DA pin intents or
+ availability commitments as governance-sensitive artifacts.
+- Use the same SORA Nexus profile consistently across validators in one
+ network.
+- Keep Inrou root and shared lease volumes in manifests instead of relying
+ on ad hoc node-local paths.
+- Use SoraFS proof verification before promoting content aliases.
+- Monitor SoraNet handshake failures, DA quorum or availability timeouts,
+ SoraFS gateway refusals, SoraDNS RAD freshness, and Soracloud rollout
+ health.
+- For public Taira or Minamoto usage, start with
+ [Connect to SORA Nexus dataspaces](/get-started/sora-nexus-dataspaces.md).
+
+See also:
+
+- [Torii endpoints](/reference/torii-endpoints.md)
+- [Data event filters](/blockchain/filters.md#data-event-filters)
+- [Query reference](/reference/queries.md#nexus-data-availability-and-packages)
diff --git a/src/blockchain/transactions.md b/src/blockchain/transactions.md
new file mode 100644
index 000000000..04d83f3ed
--- /dev/null
+++ b/src/blockchain/transactions.md
@@ -0,0 +1,148 @@
+# Transactions
+
+A **transaction** is a signed request to execute work on the blockchain.
+The executable payload can be an ordered sequence of
+[instructions](./instructions.md), a contract call, IVM bytecode, or a
+proved IVM execution. See [Smart Contracts](./wasm.md) for the current
+contract execution model.
+
+All interactions with the blockchain are done via transactions.
+
+All transactions, including rejected transactions, are stored in blocks.
+
+For privacy-preserving asset movement, see
+[Anonymous Transactions](./anonymous-transactions.md). Anonymous
+transactions use shielded asset notes, commitments, nullifiers, and
+zero-knowledge proofs instead of public account-to-account balance changes.
+
+For proof evidence over selected transparent execution effects, see
+[FastPQ](./fastpq.md). FastPQ consumes execution witnesses after normal
+transaction execution and builds deterministic proof batches for supported
+state transitions.
+
+## Try It on Taira
+
+Use the explorer routes to inspect recent public Taira blocks and transaction
+statuses without a signing account:
+
+```bash
+curl -fsS 'https://taira.sora.org/v1/explorer/blocks?page=1&per_page=3' \
+ | jq '{pagination, blocks: [.items[] | {height, hash, transactions_total, transactions_rejected}]}'
+
+curl -fsS 'https://taira.sora.org/v1/explorer/transactions?page=1&per_page=5' \
+ | jq '{pagination, txs: [.items[] | {hash, block, status, executable}]}'
+```
+
+To follow a transaction your app submitted earlier, copy the `hash` from the
+list and inspect the explorer detail route:
+
+```bash
+TX_HASH='<tx-hash>'
+
+curl -fsS "https://taira.sora.org/v1/explorer/transactions/$TX_HASH" \
+ | jq '{hash, block, status, authority, executable}'
+```
+
+This is still read-only. Submitting a transaction requires a signed Norito
+envelope, correct chain ID, fee metadata, and a faucet-funded Taira account.
+
+For fee-paying examples on Taira, save the faucet helper from
+[Get Testnet XOR on Taira](/get-started/sora-nexus-dataspaces.md#_4-get-testnet-xor-on-taira)
+as `taira_faucet_claim.py`, then fund the signer through the public faucet
+first:
+
+```bash
+export TAIRA_ACCOUNT_ID='<account-id>'
+export TAIRA_FEE_ASSET=6TEAJqbb8oEPmLncoNiMRbLEK6tw
+
+curl -fsS https://taira.sora.org/v1/accounts/faucet/puzzle | jq .
+python3 taira_faucet_claim.py "$TAIRA_ACCOUNT_ID"
+
+iroha --config ./taira.client.toml ledger asset get \
+ --definition "$TAIRA_FEE_ASSET" \
+ --account "$TAIRA_ACCOUNT_ID"
+```
+
+If the faucet puzzle or claim route returns `502`, wait and retry before
+debugging the transaction itself.
+
+Then attach the Taira fee asset metadata when submitting the transaction:
+
+```bash
+printf '{"gas_asset_id":"%s"}\n' "$TAIRA_FEE_ASSET" > taira.tx-metadata.json
+
+iroha --config ./taira.client.toml \
+ --metadata ./taira.tx-metadata.json \
+ ledger transaction ping --msg "faucet-funded taira transaction"
+```
+
+## Offline Transactions
+
+Iroha has two offline transaction workflows:
+
+- **Offline signing** creates a normal signed transaction while the signing
+ device is disconnected. The transaction is not processed until an online
+ client submits the signed envelope to Torii, so it still needs the
+ correct chain ID, authority, permissions, fees, and transaction lifetime.
+- **Offline V2 notes** support offline value transfer through ledger-backed
+ bearer notes. Online transactions reserve value into escrow, later audit
+ or redeem offline payment tokens, and enforce replay protection when the
+ token reaches the ledger.
+
+Offline V2 is the maintained offline payment surface. Torii exposes
+`GET /v1/offline/v2/readiness` for feature discovery; legacy offline
+allowance, reserve, revocation, transfer-history, and cash HTTP routes are
+not published. Offline V2 note issuance, audit, and redemption are
+submitted as normal transaction instructions, listed in the table below.
+
+Check the public Taira readiness flags:
+
+```bash
+curl -fsS https://taira.sora.org/v1/offline/v2/readiness \
+ | jq '{offline_note_v2, offline_one_use_keys, offline_recursive_note_proof, offline_sync_optional}'
+```
+
+| Instruction | Purpose |
+| --------------------- | ------------------------------------------------------------------------------------------------------------------- |
+| `IssueOfflineNoteV2` | Reserve an online asset amount into offline escrow and record a note commitment bound to a one-use key certificate. |
+| `AuditOfflineNoteV2` | Optionally record an offline payment token, its consumed nullifiers, output commitments, and recursive proof. |
+| `RedeemOfflineNoteV2` | Verify the final offline note proof, consume replay keys and nullifiers, and credit the recipient from escrow. |
+
+The typical flow is:
+
+1. Check Offline V2 readiness on the target Torii endpoint.
+2. Enable offline support for the asset and configure or derive its offline
+ escrow account.
+3. Register an active Offline V2 recursive verifier key and grant
+ `CanManageOfflineEscrow` to the account that issues notes.
+4. Submit `IssueOfflineNoteV2`. The ledger debits the note owner's asset,
+ credits escrow, records replay keys, and emits
+ `OfflineNoteEvent::NoteIssued`.
+5. Exchange the offline payment token outside the ledger. Wallets carry the
+ one-use key certificate, nullifiers, output commitments, and recursive
+ proof through their chosen transport, such as QR or a local hand-off.
+6. Submit `AuditOfflineNoteV2` when operators or wallets want an online
+ audit record before final redemption. Audit is optional for offline
+ finality.
+7. Submit `RedeemOfflineNoteV2` when the recipient comes online. Validators
+ check the verifier key, proof binding, issued claim, amount, recipient,
+ and nullifier uniqueness before crediting the recipient.
+
+Replay protection is enforced when audit or redemption reaches the ledger.
+Validators reject duplicate note issues, duplicate issued key certificates,
+duplicate nullifiers, already redeemed issued claims, and conflicting audit
+tokens. Until a token is audited or redeemed, the ledger cannot observe an
+offline conflict, so wallet and operator policies should limit value,
+expiry, accepted issuers, and reconciliation windows.
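+
+A minimal sketch of that ledger-side replay state (set names are
+illustrative, not the validator's actual data structures):
+
+```rust
+use std::collections::HashSet;
+
+struct OfflineReplayState {
+    redeemed_claims: HashSet<[u8; 32]>,
+    consumed_nullifiers: HashSet<[u8; 32]>,
+}
+
+impl OfflineReplayState {
+    /// Redemption fails if the issued claim was already redeemed or any
+    /// nullifier in the token was already consumed.
+    fn redeem_allowed(&self, claim: &[u8; 32], nullifiers: &[[u8; 32]]) -> bool {
+        !self.redeemed_claims.contains(claim)
+            && nullifiers.iter().all(|n| !self.consumed_nullifiers.contains(n))
+    }
+}
+```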
+
+Here is an example of creating a new transaction with the `Grant`
+instruction. In this transaction, Mouse is granting Alice the specified
+role (`role_id`). Check
+[the full example](./permissions.md#register-a-new-role).
+
+```rust
+let grant_role = Grant::account_role(role_id, alice_id);
+let grant_role_tx = TransactionBuilder::new(chain_id, mouse_id)
+ .with_instructions([grant_role])
+ .sign(mouse_private_key);
+```
diff --git a/src/blockchain/trigger-examples.md b/src/blockchain/trigger-examples.md
new file mode 100644
index 000000000..2776a1155
--- /dev/null
+++ b/src/blockchain/trigger-examples.md
@@ -0,0 +1,100 @@
+# Event Trigger Example
+
+This example describes the shape of an event trigger in the current Iroha
+model without depending on older `account@domain` or `asset#domain` literals.
+
+Suppose a network has:
+
+- a canonical account controlled by Alice's key
+- a canonical account controlled by the Mad Hatter's key
+- an asset definition projected as `tea` under `wonderland.universal`
+- a balance of that asset held by each account
+
+The goal is to register a trigger that observes Alice's tea balance and
+submits a transfer from the Mad Hatter account when the matching data event is
+emitted.
+
+## 1. Prepare accounts and assets
+
+Register the participating accounts and the asset definition first. In
+current Iroha, account IDs come from account controllers, while projected
+domains use `domain.dataspace` form:
+
+```text
+domain: wonderland.universal
+asset definition projection: tea in wonderland.universal
+holder accounts: AccountId(controller=alice_key), AccountId(controller=mad_hatter_key)
+```
+
+The asset definition still has a canonical opaque address. Store or query that
+address after registration and use it in the trigger action.
+
+## 2. Choose the trigger authority
+
+Set the trigger's technical account to a dedicated account when possible. A
+dedicated account makes it clear which permissions are required for trigger
+execution and avoids coupling the trigger to an operator's personal signing
+key.
+
+The technical account must already exist and must have permission to submit the
+instructions in the trigger executable.
+
+## 3. Define the executable
+
+The executable is the instruction sequence the trigger submits when the event
+filter matches. For this example, it contains one transfer:
+
+```text
+Transfer(
+ source = AssetId(tea_definition, mad_hatter_account),
+ value = Numeric(1),
+ destination = AssetId(tea_definition, alice_account)
+)
+```
+
+Use the SDK's current typed builders for the final transaction payload. Avoid
+hard-coding old textual IDs in trigger code; parse or query canonical IDs
+before building the executable.
+
+## 4. Define the event filter
+
+Use a data-event filter that narrows events to the object you care about:
+
+```text
+EventFilterBox::Data(
+ DataEventFilter for asset changes involving
+ AssetId(tea_definition, alice_account)
+)
+```
+
+Keep filters as specific as practical. An `AcceptAll` filter is useful for
+debugging, but it makes every matching event pay the cost of trigger
+evaluation.
+
+## 5. Register the trigger
+
+Register the trigger with:
+
+- a stable `TriggerId`
+- the executable instruction sequence
+- `Repeats::Indefinitely` or `Repeats::Exactly(n)`
+- the technical account
+- the event filter
+- optional metadata
+
+Trigger registration itself is a normal transaction, so the registering
+account needs permission to register triggers. The technical account needs the
+permissions required by the trigger executable.
+
+## Execution order
+
+When a block executes:
+
+1. Normal transaction instructions run first.
+2. Data events produced by those instructions are collected.
+3. Triggers whose filters match those events are scheduled.
+4. Trigger-produced effects are handled in the block execution pipeline without
+ allowing unbounded recursive trigger execution.
+
+If a trigger uses `Repeats::Exactly(n)`, register a new trigger when the count
+is exhausted and the same behavior is needed again.
diff --git a/src/blockchain/triggers.md b/src/blockchain/triggers.md
new file mode 100644
index 000000000..e3582aca0
--- /dev/null
+++ b/src/blockchain/triggers.md
@@ -0,0 +1,88 @@
+# Triggers
+
+Triggers bind an event filter to an executable action. When an event matches
+the trigger's filter, Iroha evaluates the trigger action as part of block
+execution.
+
+## Structure
+
+A registered `Trigger` contains:
+
+- `id`: a `TriggerId` wrapping a `Name`
+- `action`: the executable, authority, filter, repetition policy, retry policy,
+ and metadata
+
+The action contains:
+
+- `executable`: `Instructions`, `ContractCall`, `Ivm`, or `IvmProved`
+- `repeats`: `Indefinitely` or `Exactly(n)`
+- `authority`: the account that invokes the executable
+- `filter`: an `EventFilterBox`
+- `retry_policy`: optional retry behavior for scheduled time triggers
+- `metadata`: arbitrary trigger metadata
+
+## Event Filters
+
+Trigger conditions use the same event-filter model as subscriptions. The
+top-level event filter can match:
+
+- pipeline events
+- data events
+- time events
+- trigger execution events
+- trigger completion events
+
+Prefer the narrowest filter that matches the workflow. Broad filters are useful
+for diagnostics, but they increase work during block execution.
+
+See [Filters](/blockchain/filters.md) for the current filter families.
+
+## Time Triggers
+
+Time triggers use a time event filter. When the world state view reaches a
+matching time condition, Iroha executes the trigger action under the trigger
+authority. Time triggers are the trigger kind that can use the retry policy
+described below.
+
+## Repetition
+
+`Repeats::Indefinitely` keeps a trigger active until it is unregistered.
+
+`Repeats::Exactly(n)` allows the trigger to fire a fixed number of times. When
+the count is exhausted, register a new trigger if the same behavior is needed
+again.
+
+## Authority and Permissions
+
+The trigger authority is the account used to invoke the executable. Use a
+dedicated technical account for long-lived triggers so the required permissions
+are explicit and isolated from an operator's personal account.
+
+The authority needs the permissions required by the executable instructions or
+contract call. The account registering the trigger also needs permission to
+register triggers under the active runtime validator.
+
+## Retry Policy
+
+Time triggers can opt into a retry policy. A retry policy sets:
+
+- `max_retries`: how many retry attempts are allowed after an initial failed
+ firing
+- `retry_after_ms`: how long Iroha waits before a retry becomes eligible
+
+When the retry budget is exhausted, the trigger is unregistered.
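+
+A sketch of how the two fields interact after a failed firing
+(illustrative, not the scheduler's actual types):
+
+```rust
+/// Returns when the next retry becomes eligible, or `None` when the
+/// retry budget is exhausted and the trigger should be unregistered.
+fn next_retry_at(
+    failed_at_ms: u64,
+    retry_after_ms: u64,
+    retries_used: u32,
+    max_retries: u32,
+) -> Option<u64> {
+    (retries_used < max_retries).then(|| failed_at_ms + retry_after_ms)
+}
+```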
+
+## Queries
+
+Use the current trigger queries to inspect trigger state:
+
+- [`FindTriggers`](/reference/queries.md#triggers-contracts-transactions-and-blocks)
+- [`FindActiveTriggerIds`](/reference/queries.md#triggers-contracts-transactions-and-blocks)
+- [`FindTriggerById`](/reference/queries.md#triggers-contracts-transactions-and-blocks)
+
+See also:
+
+- [Event trigger example](/blockchain/trigger-examples.md)
+- [Events](/blockchain/events.md)
+- [Instructions](/blockchain/instructions.md)
+- [Permissions](/blockchain/permissions.md)
diff --git a/src/blockchain/wasm.md b/src/blockchain/wasm.md
new file mode 100644
index 000000000..eb58ffe84
--- /dev/null
+++ b/src/blockchain/wasm.md
@@ -0,0 +1,73 @@
+# Smart Contracts
+
+Iroha transactions execute `Executable` payloads. The current data model
+supports:
+
+- `Executable::Instructions`: an ordered set of Iroha Special Instructions
+- `Executable::ContractCall`: a by-reference call to a deployed contract
+ instance
+- `Executable::Ivm`: Iroha VM bytecode
+- `Executable::IvmProved`: Iroha VM bytecode with a precomputed instruction
+ overlay and proof commitments
+
+Older Iroha 2-era contract examples used legacy helper crates and boxed
+query snippets. Current Iroha 3 contract work should target the Iroha VM
+and current SDK builders instead.
+
+## When To Use Smart Contracts
+
+Use normal instructions when the transaction can be expressed directly:
+
+- register or unregister objects
+- mint, burn, or transfer assets
+- update metadata
+- grant or revoke permissions
+- execute a trigger
+- set on-chain parameters
+
+Use a smart contract when the transaction needs packaged logic that is
+awkward to express as a static instruction sequence, or when a deployed
+contract instance should be called by reference.
+
+## IVM Executables
+
+`Executable::Ivm` carries raw IVM bytecode. Nodes execute that bytecode inside
+the runtime limits configured for the chain. Keep bytecode small and
+deterministic; contracts are part of transaction execution and therefore affect
+consensus.
+
+`Executable::IvmProved` is intended for proof-carrying flows. It carries:
+
+- IVM bytecode
+- a deterministic instruction overlay
+- an execution-events commitment
+- a gas-policy commitment
+
+The proof binds the overlay to the executed bytecode. Depending on pipeline
+policy, validators can verify the proof and replay execution as an additional
+safety check.
+
+## Deployed Contract Calls
+
+`Executable::ContractCall` invokes a deployed contract instance by address.
+Use this when contract code is registered separately and transactions should
+call it by reference instead of carrying the bytecode every time.
+
+## Operational Guidance
+
+- Keep contracts deterministic. Contract behavior must not depend on local
+ wall-clock time, host filesystem state, network calls, or other peer-local
+ inputs.
+- Keep payloads compact. Large bytecode increases transaction size and block
+ propagation cost.
+- Prefer typed instructions for simple ledger changes. They are easier to
+ audit and cheaper to execute.
+- Treat contract upgrade and registration permissions as high-risk
+ operational controls.
+
+See also:
+
+- [Instructions](/blockchain/instructions.md)
+- [Triggers](/blockchain/triggers.md)
+- [Permissions](/blockchain/permissions.md)
+- [Data model schema](/reference/data-model-schema.md)
diff --git a/src/blockchain/world.md b/src/blockchain/world.md
new file mode 100644
index 000000000..28fbd5295
--- /dev/null
+++ b/src/blockchain/world.md
@@ -0,0 +1,124 @@
+# World
+
+`World` is the global entity that contains other entities. The `World`
+consists of:
+
+- Iroha [configuration parameters](/guide/configure/client-configuration.md)
+- registered peers
+- registered domains
+- registered [triggers](/blockchain/triggers.md)
+- registered
+ [roles](/blockchain/permissions.md#permission-groups-roles)
+- registered
+ [permission token definitions](/blockchain/permissions.md#permission-tokens)
+- permission tokens for all accounts
+- [the chain of runtime validators](/blockchain/permissions.md#runtime-validators)
+
+When domains, peers, or roles are registered or unregistered, the `World`
+is the target of the (un)register
+[instruction](/blockchain/instructions.md).
+
+## World State View (WSV)
+
+World State View is the in-memory representation of the current blockchain
+state. It includes the `World`, committed block hashes, transaction indexes,
+and peers elected for the current epoch. Full block payloads are served from
+Kura rather than duplicated as mutable WSV data.
+
+The WSV is the state that queries read and that block execution mutates. It
+is not the durable source of truth by itself. Durable history is stored in
+[Kura](#kura-storage), and the WSV can be rebuilt from Kura blocks or loaded
+from a state snapshot and then caught up by replaying newer Kura blocks.
+
+### What the WSV Tracks
+
+The WSV is broader than the `World` object. In practice it contains:
+
+- the `World`: parameters, peers, domains, accounts, assets, NFTs, roles,
+ permissions, triggers, executor data, and other registered data-model
+ objects
+- committed block hashes and the latest committed height
+- transaction-to-block indexes used by queries and receipts
+- the current and previous commit topology used by consensus
+- in-memory indexes derived from committed blocks, such as data-availability
+ commitments, receipt cursors, pin intents, and query projection markers
+- runtime configuration snapshots needed for deterministic block execution,
+ such as cryptography, governance, pipeline, content, settlement, and Nexus
+ settings
+
+Queries normally receive a read-only `StateView` over these structures. A
+view is a consistent snapshot for query execution; it does not allow direct
+mutation of the WSV.
+
+### How the WSV Changes
+
+WSV changes are staged before they are committed. Block execution creates a
+block-scoped state overlay, and each accepted transaction applies its
+instructions in a transaction-scoped overlay. Data triggers invoked by those
+transactions run in the same block context. Time triggers are evaluated after
+transaction effects for the block.
+
+After consensus commits a block, the peer first enqueues the committed block
+in Kura. If this enqueue step fails, the WSV is not advanced and the
+consensus loop retries or requeues the block payload. When the block is
+accepted into Kura's queue, Iroha applies the post-execution block effects,
+updates derived indexes, and commits the staged WSV changes under a
+state-view lock. This keeps readers from observing a partially committed
+block.
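+
+Conceptually, each scope stages writes that are merged outward only on
+success. A minimal sketch of that staging discipline (plain maps standing
+in for the real state types):
+
+```rust
+use std::collections::HashMap;
+
+type Store = HashMap<String, Vec<u8>>;
+
+/// Merge a child overlay into its parent: transaction writes into the
+/// block overlay when the transaction succeeds, and the block overlay
+/// into committed state after consensus commits the block.
+fn merge(parent: &mut Store, staged: Store) {
+    for (key, value) in staged {
+        parent.insert(key, value);
+    }
+}
+```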
+
+The consensus-critical rule is that peers must reach the same WSV from the
+same committed blocks. Direct local edits to WSV data bypass instructions and
+will make peers disagree during validation or replay.
+
+### Startup and Replay
+
+On startup, Iroha initializes Kura first and learns the stored block height.
+It then tries to load a state snapshot. If no snapshot is available, or if a
+snapshot is rejected with a recoverable error, Iroha creates an initial state and
+replays committed blocks from Kura. If a snapshot is valid but behind Kura,
+only the missing height range is replayed.
+
+Replay validates each stored block, reconstructs the commit roster for that
+height, applies the block effects to the WSV, and commits the resulting
+state. This means Kura is the recovery path for the WSV, while snapshots are
+an optimization that avoids replaying the whole chain.
+
+## Kura Storage
+
+_Kura_ is Iroha's persistent block storage. It stores signed blocks and
+recovery metadata. It does not store a second mutable copy of the WSV.
+
+Kura storage is rooted at [`kura.store_dir`](/reference/peer-config/params.md#param-kura-store-dir).
+Within that root, block data is split by lane or segment. The primary files
+for a segment are:
+
+| Path | Purpose |
+| --- | --- |
+| `blocks/<lane>/blocks.data` | Contiguous Norito-framed signed block payloads. |
+| `blocks/<lane>/blocks.index` | Fixed-size `(start, length)` entries that map block height to bytes in `blocks.data`. |
+| `blocks/<lane>/blocks.hashes` | Block hashes by height for fast lookup and startup validation. |
+| `blocks/<lane>/blocks.count.norito` | Durable commit marker recording how many block index entries are safe to use. |
+| `blocks/<lane>/da_blocks/` | Evicted block payloads kept outside `blocks.data` when disk-budget enforcement moves old bodies out of the hot file. |
+| `blocks/<lane>/pipeline/sidecars.norito` and `sidecars.index` | Pipeline recovery sidecars keyed by block height. |
+| `blocks/<lane>/pipeline/roster_sidecars.norito` and `roster_sidecars.index` | Recent commit-roster sidecars used by block sync and replay. |
+| `merge_ledger/<lane>.log` | Merge-ledger entries aligned with committed blocks. |
+| `commit-rosters.norito` | Retained commit certificates and validator checkpoints for recent blocks. |
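+
+As an illustration of that index layout, a sketch that reads one
+`(start, length)` entry and then the block bytes. The 16-byte entry width,
+little-endian encoding, and zero-based height are assumptions about the
+on-disk format, not documented guarantees:
+
+```rust
+use std::fs::File;
+use std::io::{Read, Seek, SeekFrom};
+use std::path::Path;
+
+fn read_block(lane_dir: &Path, height: u64) -> std::io::Result<Vec<u8>> {
+    // Each index entry maps a height to (start, length) in blocks.data.
+    let mut index = File::open(lane_dir.join("blocks.index"))?;
+    index.seek(SeekFrom::Start(height * 16))?;
+    let mut entry = [0u8; 16];
+    index.read_exact(&mut entry)?;
+    let start = u64::from_le_bytes(entry[..8].try_into().unwrap());
+    let length = u64::from_le_bytes(entry[8..].try_into().unwrap());
+
+    // Read the Norito-framed signed block payload.
+    let mut data = File::open(lane_dir.join("blocks.data"))?;
+    data.seek(SeekFrom::Start(start))?;
+    let mut payload = vec![0u8; length as usize];
+    data.read_exact(&mut payload)?;
+    Ok(payload)
+}
+```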
+
+Kura keeps a compact in-memory vector for the chain: each height has the
+block hash and, optionally, the block body. The genesis block remains cached,
+and the most recent [`kura.blocks_in_memory`](/reference/peer-config/params.md#param-kura-blocks-in-memory)
+non-genesis blocks keep their bodies in memory. Older block bodies are
+dropped from memory and reloaded from Kura files when needed.
+
+During initialization, `strict` mode validates stored blocks from the block
+payloads and rewrites the hash file if needed. `fast` mode starts from stored
+hash/index metadata and falls back to strict initialization if that metadata
+is inconsistent. If Kura detects a corrupt tail, it prunes storage to the
+last validated block.
+
+Kura writes new blocks through a background writer. The writer appends block
+payloads, hashes, and index entries, then advances the durable count marker
+according to the configured fsync policy. When disk-budget enforcement is
+active, Kura can purge retired segments or evict older block bodies into
+`da_blocks/` while keeping hashes and index entries available for validation
+and lookup.
diff --git a/src/documenting/snippets.md b/src/documenting/snippets.md
new file mode 100644
index 000000000..e67d97f05
--- /dev/null
+++ b/src/documenting/snippets.md
@@ -0,0 +1,130 @@
+# Code Snippets
+
+To make code snippets in the documentation more "real" and robust, it is
+better to fetch them directly from the source files. The sources are
+located in other repositories, where they are built, run, and tested.
+
+## How it works
+
+### Snippet Sources
+
+Snippet sources are defined in
+[`snippet-sources.ts`](https://github.com/hyperledger-iroha/iroha-2-docs/blob/main/etc/snippet-sources.ts).
+The `snippet-sources.ts` file is located in the documentation repository.
+By default, Iroha snippets are loaded from pinned raw GitHub sources so CI
+and preview builds do not depend on a local repository layout. Override
+`IROHA_REV` or `IROHA_RAW_BASE` to point snippets at another published
+revision. Set `IROHA_SOURCE_DIR` when the data-model schema snapshot is
+empty and you want to regenerate that page from a local Iroha source
+checkout.
+
+It has the following format:
+
+```ts
+import { IROHA_RAW_BASE } from './meta'
+
+function irohaRawSource(...segments: string[]): string {
+ return `${IROHA_RAW_BASE}/${segments.join('/')}`
+}
+
+export default [
+ {
+ src: irohaRawSource('defaults/client.toml'),
+ },
+ {
+ src: './src/example_code/lorem.rs',
+ },
+]
+```
+
+- `src` defines the source file location and could be either an HTTP(s) URI
+ or a relative file path.
+- `filename` (optional) explicitly defines the local filename.
+- `transform` (optional) can derive a snippet from generated source data.
+ The data-model reference uses this to render the current schema.
+
+### Fetching Snippets
+
+Code snippets are fetched from the locations specified in
+`snippet-sources.ts` and written into the `/src/snippets` directory. There
+are two ways to fetch the snippets:
+
+- Automatically after dependencies are installed (i.e. `pnpm install`)
+- Manually by calling `pnpm get-snippets`
+
+::: tip
+
+By default, snippets are deleted and reloaded each time `pnpm get-snippets`
+is called. For local development it might be more convenient to enable
+"lazy" behavior by calling `pnpm get-snippets --force false`.
+
+:::
+
+### Using Snippets in Markdown
+
+Use
+[Code Snippets feature in VitePress](https://vitepress.vuejs.org/guide/markdown#import-code-snippets)
+to include snippets into documentation:
+
+**Input**
+
+```md
+<<<@/example_code/lorem.rs
+
+<<<@/example_code/lorem.rs#ipsum
+```
+
+**Output**
+
+<<<@/example_code/lorem.rs
+
+<<<@/example_code/lorem.rs#ipsum
+
+Note that we included only the `#ipsum` code region, not the entire file.
+This feature is essential when it comes to including code from real source
+files into the documentation.
+
+## Example
+
+Let's add a code snippet from Iroha JavaScript SDK. For example, this one:
+[`/packages/docs-recipes/src/1.client-install.ts`](https://github.com/hyperledger-iroha/iroha-javascript/blob/e300886e76c777776efad1e2f5cb245bfb8ed02e/packages/docs-recipes/src/1.client-install.ts).
+
+1. First, get a permalink to the file. Open the file on GitHub and click
+   the `Raw` button to get the link. For example:
+ https://raw.githubusercontent.com/hyperledger-iroha/iroha-javascript/e300886e76c777776efad1e2f5cb245bfb8ed02e/packages/docs-recipes/src/1.client-install.ts
+
+2. Define the new snippet in the [Snippet Sources](#snippet-sources):
+
+ ```ts
+ export default [
+ /// ...
+
+ {
+ src: 'https://raw.githubusercontent.com/hyperledger-iroha/iroha-javascript/e300886e76c777776efad1e2f5cb245bfb8ed02e/packages/docs-recipes/src/1.client-install.ts',
+ filename: 'js-sdk-1-client-install.ts',
+ },
+ ]
+ ```
+
+ ::: tip
+
+   Since `snippet-sources.ts` is a TypeScript file, it can use small helper
+ functions. Keep those helpers focused: snippets should continue to
+ reflect built and tested source files, not hand-written copies.
+
+ :::
+
+3. [Include](#using-snippets-in-markdown) the snippet in any Markdown file
+ in the documentation as follows:
+
+ **Input**
+
+ ```md
+ <<<@/snippets/js-sdk-1-client-install.ts
+ ```
+
+ **Output**
+
+ ```ts
+ // Example snippet content fetched into src/snippets/js-sdk-1-client-install.ts
+ ```
diff --git a/src/example_code/lorem.rs b/src/example_code/lorem.rs
new file mode 100644
index 000000000..2875fed2c
--- /dev/null
+++ b/src/example_code/lorem.rs
@@ -0,0 +1,5 @@
+fn main() {
+ // #region ipsum
+ println!("Lorem ipsum");
+ // #endregion ipsum
+}
diff --git a/src/get-started/index.md b/src/get-started/index.md
new file mode 100644
index 000000000..7505f7dcf
--- /dev/null
+++ b/src/get-started/index.md
@@ -0,0 +1,55 @@
+# Iroha 3
+
+Iroha 3 is the Nexus-oriented deployment track shipped from the main
+Hyperledger Iroha workspace. It keeps the same core building blocks as
+Iroha 2 while adding the Nexus model for data spaces, multi-lane execution,
+and SORA-specific deployment profiles.
+
+At a high level, Iroha 3 combines:
+
+- deterministic execution and storage
+- the Iroha Virtual Machine (IVM) for portable smart contracts
+- Norito as the canonical wire format
+- Torii for client, operator, and app-facing APIs
+- Sumeragi consensus with operator telemetry and status endpoints
+
+## Quickstart
+
+If you are starting from scratch, follow these pages in order:
+
+1. [Install Iroha 3](/get-started/install-iroha-2.md)
+2. [Launch Iroha 3](/get-started/launch-iroha-2.md)
+3. [Operate Iroha 3 via CLI](/get-started/operate-iroha-2-via-cli.md)
+4. [Connect to SORA Nexus dataspaces](/get-started/sora-nexus-dataspaces.md)
+5. [Sponsor private dataspace fees](/get-started/private-dataspace-fee-sponsor.md)
+
+If you are migrating an existing deployment or mental model, read
+[Iroha 3 vs. Iroha 2](/get-started/iroha-2.md) first.
+
+## SDKs
+
+The current SDK entry points documented in this site are:
+
+- [Rust](/guide/tutorials/rust.md)
+- [Python](/guide/tutorials/python.md)
+- [JavaScript / TypeScript](/guide/tutorials/javascript.md)
+- [Android, Kotlin, and Java](/guide/tutorials/kotlin-java.md)
+- [Swift and iOS](/guide/tutorials/swift.md)
+
+## Operator References
+
+The pages you will use most often while running a network are:
+
+- [Working with Iroha binaries](/reference/binaries.md)
+- [Genesis reference](/reference/genesis.md)
+- [Torii endpoints](/reference/torii-endpoints.md)
+- [Connect to SORA Nexus dataspaces](/get-started/sora-nexus-dataspaces.md)
+- [Sponsor private dataspace fees](/get-started/private-dataspace-fee-sponsor.md)
+- [Compatibility matrix](/reference/compatibility-matrix.md)
+
+## Learn More
+
+- [Iroha `i23-features` branch](https://github.com/hyperledger-iroha/iroha/tree/i23-features)
+- [Workspace docs index](https://github.com/hyperledger-iroha/iroha/blob/i23-features/docs/README.md)
+- [Iroha 3 whitepaper](https://github.com/hyperledger-iroha/iroha/blob/i23-features/docs/source/iroha_3_whitepaper.md)
+- [Iroha 2 whitepaper](https://github.com/hyperledger-iroha/iroha/blob/i23-features/docs/source/iroha_2_whitepaper.md)
diff --git a/src/get-started/install-iroha-2.md b/src/get-started/install-iroha-2.md
new file mode 100644
index 000000000..64219ab3b
--- /dev/null
+++ b/src/get-started/install-iroha-2.md
@@ -0,0 +1,61 @@
+# Install Iroha 3
+
+This page covers the current installation workflow for the Iroha 3 toolchain
+and binaries using the upstream `hyperledger-iroha/iroha` workspace.
+
+## 1. Prerequisites
+
+Install these first:
+
+- [rustup](https://www.rust-lang.org/tools/install), so the pinned
+ `rust-toolchain.toml` toolchain (`1.93.1`) is installed automatically
+- `git`
+- optionally, Docker and Docker Compose for the local multi-peer quickstart
+
+## 2. Clone the Workspace
+
+```bash
+git clone --branch i23-features https://github.com/hyperledger-iroha/iroha.git
+cd iroha
+```
+
+## 3. Build the Workspace
+
+Build everything:
+
+```bash
+cargo build --workspace
+```
+
+For a smaller operator-focused build, compile just the main binaries:
+
+```bash
+cargo build --release -p irohad -p iroha_cli -p iroha_kagami
+```
+
+The resulting binaries are written to `target/debug/` or `target/release/`.
+
+## 4. Verify the Installed Tools
+
+```bash
+cargo run --bin irohad -- --help
+cargo run --bin iroha -- --help
+cargo run --bin kagami -- --help
+```
+
+The three binaries you will usually use are:
+
+- `irohad` for the peer daemon
+- `iroha` for CLI access to Torii and operator endpoints
+- `kagami` for keys, genesis manifests, and localnet profiles
+
+## 5. Optional Localnet and Docker Path
+
+The current source-backed localnet flow is generated by Kagami. It writes peer
+configs, genesis artifacts, client config, helper scripts, and an optional
+Compose file that matches the checked-out code:
+
+- `kagami localnet` for native local peer scripts
+- `kagami docker` for Docker Compose generated from a localnet directory
+
+Continue with [Launch Iroha 3](/get-started/launch-iroha-2.md).
diff --git a/src/get-started/iroha-2.md b/src/get-started/iroha-2.md
new file mode 100644
index 000000000..e610a4ee0
--- /dev/null
+++ b/src/get-started/iroha-2.md
@@ -0,0 +1,49 @@
+# Iroha 3 vs. Iroha 2
+
+The current workspace ships two deployment tracks from the same codebase:
+
+- **Iroha 2** for self-hosted permissioned and consortium networks
+- **Iroha 3 / SORA Nexus** for the Nexus-oriented global deployment track
+
+Iroha 3 is not a separate rewrite. It reuses the same core crates, the same
+Iroha Virtual Machine, the same Torii transport layer, and the same Norito
+codec.
+
+## What Stays the Same
+
+- `irohad`, `iroha`, and `kagami` remain the primary operator tools
+- smart contracts still target IVM
+- Torii remains the main API surface
+- Sumeragi remains the consensus engine
+- Norito remains the canonical wire format
+
+## What Changes in Iroha 3
+
+| Area | Iroha 2 | Iroha 3 |
+| --- | --- | --- |
+| Deployment model | Standalone self-hosted networks | Nexus-oriented deployment track with SORA-specific profiles |
+| Execution layout | Single-lane worldview | Multi-lane execution with Nexus data spaces |
+| Routing | One network surface | Lane and data-space aware routing policies |
+| Consensus profile | Permissioned by default | Permissioned or NPoS depending on the dataspace and profile |
+| Operator config | Generic defaults | Additional Nexus, SoraFS, streaming, and lane catalog settings |
+| Genesis workflow | Standard manifest and signed block | Same workflow, plus explicit `consensus_mode`, topology PoPs, and Nexus-oriented profiles |
+
+## Migration Guidance
+
+If you already know Iroha 2, the main operational differences in Iroha 3 are:
+
+- you should expect more configuration around Nexus lanes and data spaces
+- public SORA Nexus deployments use the `--sora` profile path
+- genesis manifests now carry more explicit consensus and crypto metadata
+- telemetry and Torii reference pages matter more during rollout because
+ operator status endpoints are part of the day-to-day workflow
+
+The shared codebase means many concepts stay familiar, but the deployment shape
+changes from "run your own isolated network" to "run the Nexus-oriented track
+with lane-aware configuration and tooling."
+
+## Read Next
+
+- [Launch Iroha 3](/get-started/launch-iroha-2.md)
+- [Genesis reference](/reference/genesis.md)
+- [Iroha 3 architecture overview](/blockchain/iroha-explained.md)
diff --git a/src/get-started/launch-iroha-2.md b/src/get-started/launch-iroha-2.md
new file mode 100644
index 000000000..ec0b2d613
--- /dev/null
+++ b/src/get-started/launch-iroha-2.md
@@ -0,0 +1,90 @@
+# Launch Iroha 3
+
+This page walks through the current local-network flow for Iroha 3 using the
+default workspace assets from the upstream repository.
+
+## 1. Generate a Local Multi-Peer Network
+
+Generate a four-peer localnet from the current Kagami code:
+
+```bash
+cargo run --bin kagami -- localnet --build-line iroha3 --peers 4 --out-dir ./localnet
+```
+
+The output directory contains matching peer configs, `genesis.json`,
+`genesis.signed.nrt`, `client.toml`, and helper scripts.
+
+For a native local smoke test, start the generated peers directly:
+
+```bash
+./localnet/start.sh
+```
+
+For a containerized run, generate Compose from the same localnet directory:
+
+```bash
+cargo run --bin kagami -- docker \
+ --peers 4 \
+ --config-dir ./localnet \
+ --image hyperledger/iroha:dev \
+ --out-file ./localnet/docker-compose.yml \
+ --force
+
+docker compose -f ./localnet/docker-compose.yml up
+```
+
+The default generated stack exposes:
+
+- peer P2P ports `1337` to `1340`
+- Torii HTTP ports `8080` to `8083`
+- a ready-made client config at `./localnet/client.toml`
+
+## 2. Verify That the Network Is Up
+
+Check the status endpoint on the first peer:
+
+```bash
+curl http://127.0.0.1:8080/status
+```
+
+The default health checks also use:
+
+```bash
+curl http://127.0.0.1:8080/status/blocks
+```
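+
+To spot-check every generated peer at once, a small loop over the published
+Torii ports works. This sketch assumes the default `8080` to `8083` mapping
+from the generated stack:
+
+```bash
+# Query the status endpoint of each default peer in turn.
+for port in 8080 8081 8082 8083; do
+  echo "peer on port $port:"
+  curl -fsS "http://127.0.0.1:$port/status"
+  echo
+done
+```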
+
+You can immediately point the CLI at the bundled client config:
+
+```bash
+cargo run --bin iroha -- --config ./localnet/client.toml ledger domain list all
+```
+
+## 3. Nexus Profile
+
+The repository also ships a SORA Nexus-oriented config profile under
+`defaults/nexus/`.
+
+To run a native peer with the Nexus profile:
+
+```bash
+./target/release/irohad --sora --config ./defaults/nexus/config.toml
+```
+
+Use `defaults/nexus/client.toml` for CLI access to that profile.
+
+## 4. Stop the Local Network
+
+For a native generated localnet:
+
+```bash
+./localnet/stop.sh
+```
+
+For the generated Compose stack:
+
+```bash
+docker compose -f ./localnet/docker-compose.yml down
+```
+
+After the network is running, continue with
+[Operate Iroha 3 via CLI](/get-started/operate-iroha-2-via-cli.md).
diff --git a/src/get-started/operate-iroha-2-via-cli.md b/src/get-started/operate-iroha-2-via-cli.md
new file mode 100644
index 000000000..fb4c05990
--- /dev/null
+++ b/src/get-started/operate-iroha-2-via-cli.md
@@ -0,0 +1,180 @@
+# Operate Iroha 3 via CLI
+
+The `iroha` binary is the shared CLI for the current Iroha 2 and Iroha 3
+codebase. The same source tree also exposes `iroha2` and `iroha3` aliases for
+track-specific scripting, while `iroha` remains the stable command used in
+these examples.
+
+## 1. Prerequisites
+
+Start a local network first:
+
+- [Launch Iroha 3](./launch-iroha-2.md)
+
+The examples below assume the generated client configuration from the localnet
+created in [Launch Iroha 3](./launch-iroha-2.md):
+
+```text
+./localnet/client.toml
+```
+
+## 2. Basic CLI Setup
+
+Show the top-level help:
+
+```bash
+cargo run --bin iroha -- --config ./localnet/client.toml --help
+```
+
+The CLI is organized into these top-level command groups:
+
+- `account` for account-oriented shortcuts
+- `tx` for transaction-level helpers
+- `ledger` for on-ledger reads and writes
+- `ops` for operator diagnostics
+- `app` for app API helpers
+- `contract` for contract deployment and calls
+- `tools` for diagnostics and developer utilities
+- `taira` for Taira and Nexus-oriented workflows
+
+The `ledger` group also contains domain-specific transaction helpers such as
+`ledger transaction`.
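+
+Each group prints its own help. Assuming the standard per-subcommand
+`--help` behavior, you can explore a group before running anything:
+
+```bash
+cargo run --bin iroha -- --config ./localnet/client.toml ledger --help
+cargo run --bin iroha -- --config ./localnet/client.toml ops sumeragi --help
+```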
+
+Use `--output-format text` for human-readable operator output and `--machine`
+for strict automation mode.
+
+## 3. Try the Public Taira Testnet
+
+You can try read-only Taira checks before running a local peer or creating a
+signer. These commands use public Torii JSON routes and do not spend testnet
+XOR.
+
+Check Taira health:
+
+```bash
+curl -fsS https://taira.sora.org/status \
+ | jq '{blocks, txs_approved, txs_rejected, queue_size, peers}'
+```
+
+List public domains in the `universal` dataspace:
+
+```bash
+curl -fsS 'https://taira.sora.org/v1/domains?limit=10' \
+ | jq -r '.items[].id'
+```
+
+List a few asset definitions and their current supply:
+
+```bash
+curl -fsS 'https://taira.sora.org/v1/assets/definitions?limit=10' \
+ | jq -r '.items[] | [.id, .name, .mintable, .total_quantity] | @tsv'
+```
+
+If you have the current `iroha` binary, run the Taira diagnostics helper:
+
+```bash
+iroha taira doctor --public-root https://taira.sora.org --json
+```
+
+Create `taira.client.toml` only when you are ready to test signed commands.
+See [Connect to SORA Nexus Dataspaces](/get-started/sora-nexus-dataspaces.md)
+for the config, faucet, and canary flow. Do not run write commands against
+Taira until the account is funded with the faucet fee asset.
+
+For any fee-paying Taira CLI example, save the faucet helper from
+[Get Testnet XOR on Taira](/get-started/sora-nexus-dataspaces.md#_4-get-testnet-xor-on-taira)
+as `taira_faucet_claim.py`, then claim testnet XOR first:
+
+```bash
+export TAIRA_ACCOUNT_ID=''
+export TAIRA_FEE_ASSET=6TEAJqbb8oEPmLncoNiMRbLEK6tw
+
+curl -fsS https://taira.sora.org/v1/accounts/faucet/puzzle | jq .
+python3 taira_faucet_claim.py "$TAIRA_ACCOUNT_ID"
+
+iroha --config ./taira.client.toml ledger asset get \
+ --definition "$TAIRA_FEE_ASSET" \
+ --account "$TAIRA_ACCOUNT_ID"
+```
+
+If the faucet puzzle or claim route returns `502`, wait and retry. That is a
+public testnet availability issue, not a signal to regenerate the account keys.
+
+After the balance is visible, attach the fee asset metadata to writes:
+
+```bash
+printf '{"gas_asset_id":"%s"}\n' "$TAIRA_FEE_ASSET" > taira.tx-metadata.json
+
+iroha --config ./taira.client.toml \
+ --metadata ./taira.tx-metadata.json \
+ ledger transaction ping --msg "hello from faucet-funded taira"
+```
+
+## 4. Basic Ledger Commands
+
+List all domains:
+
+```bash
+cargo run --bin iroha -- --config ./localnet/client.toml ledger domain list all
+```
+
+Register a domain. Current Iroha IDs are dataspace-qualified, so use a domain
+such as `docs.universal` rather than a bare `docs` literal:
+
+```bash
+cargo run --bin iroha -- --config ./localnet/client.toml ledger domain register --id docs.universal
+```
+
+Send a simple ping transaction:
+
+```bash
+cargo run --bin iroha -- --config ./localnet/client.toml ledger transaction ping --msg "hello from iroha"
+```
+
+Read a recent block or subscribe to block events:
+
+```bash
+cargo run --bin iroha -- --config ./localnet/client.toml ledger blocks 1 --timeout 30s
+cargo run --bin iroha -- --config ./localnet/client.toml ledger events block
+```
+
+## 5. Operator Commands
+
+Consensus status:
+
+```bash
+cargo run --bin iroha -- --config ./localnet/client.toml --output-format text ops sumeragi status
+```
+
+Per-phase latency snapshot:
+
+```bash
+cargo run --bin iroha -- --config ./localnet/client.toml --output-format text ops sumeragi phases
+```
+
+RBC throughput and active sessions:
+
+```bash
+cargo run --bin iroha -- --config ./localnet/client.toml --output-format text ops sumeragi rbc status
+cargo run --bin iroha -- --config ./localnet/client.toml --output-format text ops sumeragi rbc sessions
+```
+
+Collector plan and on-chain consensus parameters:
+
+```bash
+cargo run --bin iroha -- --config ./localnet/client.toml ops sumeragi collectors
+cargo run --bin iroha -- --config ./localnet/client.toml ops sumeragi params
+```
+
+## 6. Where to Go Next
+
+- [SDK tutorials](/guide/tutorials/)
+- [Torii endpoints](/reference/torii-endpoints.md)
+- [Working with Iroha binaries](/reference/binaries.md)
+- [CLI README](https://github.com/hyperledger-iroha/iroha/blob/i23-features/crates/iroha_cli/README.md)
+
+To regenerate a full Markdown help snapshot from the source checkout, run:
+
+```bash
+cargo run -p iroha_cli --bin iroha -- tools markdown-help > crates/iroha_cli/CommandLineHelp.md
+```
diff --git a/src/get-started/private-dataspace-fee-sponsor.md b/src/get-started/private-dataspace-fee-sponsor.md
new file mode 100644
index 000000000..e5cbdf519
--- /dev/null
+++ b/src/get-started/private-dataspace-fee-sponsor.md
@@ -0,0 +1,554 @@
+# Sponsor Fees for a Private Dataspace
+
+Fee sponsorship lets users submit private-dataspace transactions without
+holding XOR. The user still signs the transaction. The transaction metadata
+points at a sponsor account, and the runtime debits the sponsor's XOR balance
+for the network fee.
+
+The integration has three moving parts:
+
+1. the node allows fee sponsorship
+2. the sponsor account exists and has XOR
+3. each user has `CanUseFeeSponsor` for that sponsor
+
+After that, every sponsored user transaction only needs this metadata:
+
+```json
+{
+ "fee_sponsor": ""
+}
+```
+
+This page shows two common patterns:
+
+- **Free user writes**: the sponsor pays XOR and the user pays nothing.
+- **Local-token fees**: the user pays the sponsor in an app token, and the
+ sponsor pays the network in XOR.
+
+Use Taira or a private test network first. A new private dataspace is an
+operator and governance change; it is not created by client configuration.
+
+## Example Values
+
+The commands below use these placeholders:
+
+```bash
+export DATASPACE="team"
+export USER=""
+export SPONSOR=""
+export TREASURY=""
+export XOR_ASSET="xor#universal"
+export BILLING_DOMAIN="billing.team"
+export LOCAL_FEE_ASSET="usage#billing.team"
+export LOCAL_FEE_ASSET_ID=""
+export USER_ALIAS="alice@team"
+export PHONE_POLICY="phone#team"
+export EMAIL_POLICY="email#team"
+export POLICY_OWNER=""
+```
+
+Use canonical I105 account IDs unless your deployment has active account
+aliases for the same accounts.
+
+## 1. Prepare the Dataspace
+
+Start from the private dataspace catalog and routing work described in
+[Connect to SORA Nexus Dataspaces](/get-started/sora-nexus-dataspaces.md#_8-provision-a-new-dataspace).
+An operator-facing fragment looks like this:
+
+```toml
+[[nexus.lane_catalog]]
+index = 5
+alias = "team-private"
+description = "Private team lane"
+dataspace = "team"
+visibility = "private"
+metadata = {}
+
+[[nexus.dataspace_catalog]]
+alias = "team"
+id = 42
+description = "Private team dataspace"
+fault_tolerance = 1
+
+[[nexus.routing_policy.rules]]
+lane = 5
+dataspace = "team"
+[nexus.routing_policy.rules.matcher]
+account_prefix = "team."
+description = "Route team domains to the private dataspace"
+```
+
+Before moving to user transactions, check that:
+
+- the private lane appears in the node `/status` response (see the check below)
+- user accounts are admitted by your private onboarding flow
+- the sponsor account exists
+- the XOR fee asset and fee sink account are valid on the network
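+
+For the first check, filter the live catalog from your own node. This sketch
+assumes the same `teu_lane_commit` status shape shown for the public
+networks and a local Torii on port `8080`:
+
+```bash
+# Confirm the private "team" lane and dataspace are in the live catalog.
+curl -fsS http://127.0.0.1:8080/status \
+  | jq '.teu_lane_commit[] | select(.dataspace_alias == "team")'
+```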
+
+## 2. Register Assets in the Dataspace
+
+Register the asset definitions that users will hold inside the private
+dataspace before you wire them into application logic. For the local-token fee
+pattern, the tutorial uses `usage#billing.team`:
+
+```text
+name#domain.dataspace
+usage#billing.team
+```
+
+First register the domain that owns the asset namespace:
+
+```bash
+iroha --config ./operator.client.toml \
+ ledger domain register --id "$BILLING_DOMAIN"
+```
+
+Then register the asset definition. The canonical `--id` is the network-level
+asset definition ID. The alias is what developers and end users should use in
+dataspace code:
+
+```bash
+iroha --config ./operator.client.toml \
+ ledger asset definition register \
+ --id "$LOCAL_FEE_ASSET_ID" \
+ --name usage \
+ --alias "$LOCAL_FEE_ASSET" \
+ --scale 0
+```
+
+Mint or transfer the local token to a user during onboarding:
+
+```bash
+iroha --config ./operator.client.toml \
+ ledger asset mint \
+ --definition-alias "$LOCAL_FEE_ASSET" \
+ --account "$USER" \
+ --quantity 100
+```
+
+Check the user's balance:
+
+```bash
+iroha --config ./operator.client.toml \
+ ledger asset get \
+ --definition-alias "$LOCAL_FEE_ASSET" \
+ --account "$USER"
+```
+
+Use the same pattern for application assets in the dataspace. Register one
+asset definition per token, give each one a dataspace alias, and refer to the
+alias from SDK code instead of hard-coding canonical asset definition IDs.
+
+## 3. Register User Aliases
+
+Accounts are still canonical I105 account IDs. User-facing names are account
+aliases, and aliases should be non-sensitive handles such as `alice@team` or
+`alice@members.team`. Do not use phone numbers or email addresses as aliases.
+Those belong in the private identifier flow in the next section.
+
+Alias registration is an instruction flow:
+
+```text
+AcquireAccountAliasLease(
+ alias = "$USER_ALIAS",
+ owner = "$USER",
+ payer = "$USER",
+ term_years = 1,
+ pricing_class_hint = null
+)
+
+SetPrimaryAccountAlias(
+ account = "$USER",
+ alias = "$USER_ALIAS",
+ lease_expiry_ms = null
+)
+```
+
+The current CLI exposes alias lookup helpers, but not a typed helper for
+creating leases and bindings. Generate the `AcquireAccountAliasLease` and
+`SetPrimaryAccountAlias` instructions with your SDK or onboarding service and
+submit them as one transaction. If the user should not pay XOR, submit the
+transaction with the same `fee_sponsor` metadata used later in this tutorial.
+
+After the alias is bound, verify it from the CLI:
+
+```bash
+iroha --config ./operator.client.toml \
+ app alias resolve --alias "$USER_ALIAS"
+
+iroha --config ./operator.client.toml \
+ app alias by-account \
+ --account-id "$USER" \
+ --dataspace "$DATASPACE"
+```
+
+For new account creation, prefer an onboarding service that builds
+`NewAccount` with a stable `uaid` and, if needed, an initial `label`. The
+simple `ledger account register --id` command only registers the canonical
+account ID.
+
+## 4. Register Phone and Email Privately with FHE
+
+Use phone numbers and email addresses as private identifier claims, not public
+aliases. The FHE-backed flow keeps raw identifiers out of account aliases,
+transaction metadata, and world state:
+
+1. the operator registers a RAM-LFE/FHE program policy for phone and email
+2. the operator registers active identifier policies such as `phone#team` and
+ `email#team`
+3. the wallet normalizes the phone or email locally
+4. the wallet sends the encrypted value to the resolver
+5. the resolver returns an `IdentifierResolutionReceipt`
+6. the user submits `ClaimIdentifier` with the receipt
+7. the chain stores an opaque identifier and receipt hash, not the raw phone or
+ email value
+
+The operator-side policy setup is an SDK or service task. Build and submit
+these instruction pairs for each identifier type:
+
+```text
+RegisterRamLfeProgramPolicy(
+ program_id = "phone_team",
+ owner = "$POLICY_OWNER",
+ backend = "bfv-programmed-sha3-256-v1",
+ verification_mode = "signed",
+ commitment = "",
+ resolver_public_key = ""
+)
+ActivateRamLfeProgramPolicy(program_id = "phone_team")
+
+RegisterIdentifierPolicy(
+ id = "$PHONE_POLICY",
+ owner = "$POLICY_OWNER",
+ normalization = "PhoneE164",
+ program_id = "phone_team",
+ note = "Private phone registration for team dataspace"
+)
+ActivateIdentifierPolicy(policy_id = "$PHONE_POLICY")
+```
+
+Repeat the same register-and-activate pair for email with:
+
+```text
+program_id = "email_team"
+policy_id = "$EMAIL_POLICY"
+normalization = "EmailAddress"
+```
+
+During onboarding, the wallet or backend should normalize locally:
+
+```text
+PhoneE164: "+15551234567"
+EmailAddress: "alice@example.com"
+```
+
+After the sponsor metadata file is created in step 8, submit a user-signed
+claim instruction with that metadata:
+
+```text
+ClaimIdentifier(
+ account = "$USER",
+ receipt = IdentifierResolutionReceipt {
+ payload: {
+ policy_id: "$PHONE_POLICY",
+ opaque_id: "",
+ uaid: "",
+ account_id: "$USER",
+ ...
+ },
+ attestation: ""
+ }
+)
+```
+
+The current CLI does not expose typed commands for these identity
+instructions. Generate serialized `InstructionBox` values with the SDK and
+submit them through `ledger transaction stdin`:
+
+```bash
+printf '[""]\n' |
+ iroha --config ./alice.client.toml \
+ --metadata ./sponsored-fee.json \
+ ledger transaction stdin
+```
+
+Keep these guardrails in the onboarding service:
+
+- account aliases are human-readable handles only
+- raw phone and email values never appear in aliases, metadata, logs, or
+ transaction payloads
+- the account has a `uaid` before it claims private identifiers
+- receipts bind `policy_id`, `opaque_id`, `uaid`, `account_id`, and expiry
+- resolver keys and hidden-program commitments are controlled by governance
+
+## 5. Enable Sponsorship on the Node
+
+Fee sponsorship is a node/runtime policy. Enable it in the Nexus fee config:
+
+```toml
+[nexus.fees]
+fee_asset_id = "xor#universal"
+fee_sink_account_id = ""
+base_fee = "0"
+per_byte_fee = "0"
+per_instruction_fee = "0.001"
+per_gas_unit_fee = "0.00005"
+sponsorship_enabled = true
+sponsor_max_fee = "0"
+```
+
+`fee_asset_id` is the network fee asset. For SORA Nexus this is XOR. Use the
+active XOR alias or canonical XOR asset definition ID exposed by your network.
+
+`sponsor_max_fee = "0"` means there is no per-transaction sponsor cap. For
+production, set a non-zero cap after you know the normal size and gas profile
+of your dataspace transactions.
+
+Restart or roll this config through your normal operator process.
+
+## 6. Create and Fund the Sponsor
+
+Generate a sponsor key pair if needed:
+
+```bash
+kagami keys --algorithm ed25519 --json
+```
+
+Convert the public key into the account format for your network:
+
+```bash
+# Use your network's prefix: 369 for Taira, 753 for Minamoto.
+iroha tools address convert --network-prefix 369
+```
+
+Register the sponsor account through your private onboarding flow:
+
+```bash
+iroha --config ./operator.client.toml \
+ ledger account register --id "$SPONSOR"
+```
+
+Fund the sponsor with XOR from a treasury, claim account, or another funded
+account:
+
+```bash
+iroha --config ./treasury.client.toml \
+ ledger asset transfer \
+ --definition-alias "$XOR_ASSET" \
+ --account "$TREASURY" \
+ --to "$SPONSOR" \
+ --quantity 1000
+```
+
+For Taira rehearsals, save the faucet helper from
+[Get Testnet XOR on Taira](/get-started/sora-nexus-dataspaces.md#_4-get-testnet-xor-on-taira)
+as `taira_faucet_claim.py`, then fund the sponsor with the public faucet
+instead of a treasury transfer:
+
+```bash
+export SPONSOR=''
+export XOR_ASSET=6TEAJqbb8oEPmLncoNiMRbLEK6tw
+
+python3 taira_faucet_claim.py "$SPONSOR"
+
+iroha --config ./sponsor.client.toml \
+ ledger asset get \
+ --definition "$XOR_ASSET" \
+ --account "$SPONSOR"
+```
+
+Check the sponsor's XOR balance:
+
+```bash
+iroha --config ./operator.client.toml \
+ ledger asset get \
+ --definition-alias "$XOR_ASSET" \
+ --account "$SPONSOR"
+```
+
+## 7. Grant a User Access to the Sponsor
+
+The sponsor must grant each user permission to charge fees to it. The grant is
+what prevents users from naming arbitrary sponsor accounts.
+
+Run this as the sponsor account, or as an operational account allowed by your
+runtime policy:
+
+```bash
+printf '{
+ "name": "CanUseFeeSponsor",
+ "payload": {
+ "sponsor": "%s"
+ }
+}\n' "$SPONSOR" |
+ iroha --config ./sponsor.client.toml \
+ ledger account permission grant --id "$USER"
+```
+
+For onboarding services, make this a normal account-provisioning step and log:
+
+- user account
+- sponsor account
+- dataspace or application
+- approval ticket or governance decision
+
+To inspect a user's grants:
+
+```bash
+iroha --config ./operator.client.toml \
+ ledger account permission list --id "$USER"
+```
+
+## 8. Attach Sponsor Metadata
+
+Create a reusable metadata file:
+
+```bash
+printf '{
+ "fee_sponsor": "%s"
+}\n' "$SPONSOR" > sponsored-fee.json
+```
+
+Any write submitted with this metadata is charged to the sponsor:
+
+```bash
+iroha --config ./alice.client.toml \
+ --metadata ./sponsored-fee.json \
+ ledger transaction ping --msg "sponsored private-dataspace write"
+```
+
+For SDKs, attach the same transaction metadata object to the signed
+transaction. The user signs the transaction with the user's key. The sponsor
+does not sign every user transaction because the prior `CanUseFeeSponsor`
+grant is the authorization.
+
+## Pattern 1: Users Pay No Fees
+
+Use this when the application or operator absorbs all network fees.
+
+Developer checklist:
+
+1. Keep the user's normal transaction payload unchanged.
+2. Add transaction metadata with `fee_sponsor`.
+3. Sign as the user.
+4. Submit through the private dataspace route.
+
+The user account does not need a XOR balance. The sponsor account must keep
+enough XOR to cover the configured Nexus fees.
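+
+To confirm the pattern end to end, read the sponsor's XOR balance before and
+after one sponsored write, reusing the commands from earlier in this
+tutorial:
+
+```bash
+# Sponsor balance before the sponsored write.
+iroha --config ./operator.client.toml \
+  ledger asset get \
+  --definition-alias "$XOR_ASSET" \
+  --account "$SPONSOR"
+
+# One sponsored user write; the user pays nothing.
+iroha --config ./alice.client.toml \
+  --metadata ./sponsored-fee.json \
+  ledger transaction ping --msg "pattern 1 smoke test"
+
+# Sponsor balance after; the difference is the network fee.
+iroha --config ./operator.client.toml \
+  ledger asset get \
+  --definition-alias "$XOR_ASSET" \
+  --account "$SPONSOR"
+```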
+
+## Pattern 2: Users Pay a Local Token
+
+Use this when users should not hold XOR, but the dataspace still wants an
+internal app fee, credit spend, or quota token.
+
+In this pattern, the local token is an application payment. It is not the
+network fee asset. The sponsor still pays the network fee in XOR.
+
+For example, use a local token in the private dataspace:
+
+```text
+usage#billing.team
+```
+
+Fund users with `usage#billing.team` during onboarding, subscription renewal,
+or quota allocation. Then make the user transaction atomic:
+
+1. transfer local tokens from the user to the sponsor
+2. perform the requested app operation
+3. include `fee_sponsor` metadata so the sponsor pays XOR
+
+A minimal CLI smoke test is just the local-token transfer sponsored by XOR:
+
+```bash
+iroha --config ./alice.client.toml \
+ --metadata ./sponsored-fee.json \
+ ledger asset transfer \
+ --definition-alias "$LOCAL_FEE_ASSET" \
+ --account "$USER" \
+ --to "$SPONSOR" \
+ --quantity 1
+```
+
+For a real app, do not submit the local-token payment as a separate
+best-effort transaction. Build one signed transaction containing both the
+payment and the business instruction, or expose a contract entrypoint that
+collects the local token before applying the business operation.
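+
+A hedged sketch of the atomic shape, again through `ledger transaction
+stdin` with SDK-serialized placeholder instructions:
+
+```bash
+# One user-signed transaction: local-token payment plus the app operation.
+# The sponsor still pays the XOR network fee through the attached metadata.
+printf '["<serialized usage-token Transfer>","<serialized app instruction>"]\n' |
+  iroha --config ./alice.client.toml \
+    --metadata ./sponsored-fee.json \
+    ledger transaction stdin
+```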
+
+Keep conversion policy in your app or contract:
+
+- which operation costs how many local token units
+- how local token inflow maps to sponsor XOR top-ups
+- what happens when user balance is too low
+- what happens when sponsor XOR balance is too low
+
+::: warning
+
+Do not use `gas_asset_id` for the "local-token fee" pattern unless you want
+the sponsor to be charged in that gas asset too. In the current runtime,
+`fee_sponsor` also makes the sponsor the payer for configured pipeline-gas
+asset debits. For local-token user fees, collect the token explicitly with a
+transfer or contract rule.
+
+:::
+
+## Debug Failed Sponsored Transactions
+
+Common rejection reasons usually point to one missing setup step:
+
+| Error text | What to check |
+| --- | --- |
+| `fee sponsorship is disabled` | `nexus.fees.sponsorship_enabled` is still `false` on the node. |
+| `fee sponsor is not authorized` | The user does not have `CanUseFeeSponsor` for this sponsor. |
+| `fee asset ... is missing` | The sponsor does not hold the configured XOR fee asset. |
+| `fee balance ... is insufficient` | Top up the sponsor's XOR balance. |
+| `fee exceeds sponsor_max_fee` | Raise `sponsor_max_fee` or reduce transaction size/gas. |
+| `invalid nexus fee asset id` | Fix `nexus.fees.fee_asset_id` or the XOR asset alias. |
+
+When debugging pattern 2, check both balances:
+
+```bash
+iroha --config ./operator.client.toml \
+ ledger asset get \
+ --definition-alias "$XOR_ASSET" \
+ --account "$SPONSOR"
+
+iroha --config ./operator.client.toml \
+ ledger asset get \
+ --definition-alias "$LOCAL_FEE_ASSET" \
+ --account "$USER"
+```
+
+## Operate the Sponsor
+
+Treat the sponsor as a treasury account:
+
+- keep separate sponsor keys for testnet, staging, and mainnet
+- alert before the sponsor XOR balance reaches the admission floor (a
+  polling sketch follows this list)
+- set a non-zero `sponsor_max_fee` cap once traffic is characterized
+- rate-limit sponsored writes in your application or gateway
+- revoke `CanUseFeeSponsor` when users leave the dataspace
+- reconcile user transaction hashes, local-token payments, and sponsor XOR
+ debits
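+
+For the balance alert, a minimal polling sketch is a reasonable start.
+Extracting the balance from the CLI output is deployment-specific, so treat
+the `--machine` output field used here as an assumption to adapt:
+
+```bash
+# Warn when the sponsor balance drops below a chosen floor.
+FLOOR=100
+balance=$(iroha --config ./operator.client.toml --machine \
+  ledger asset get \
+  --definition-alias "$XOR_ASSET" \
+  --account "$SPONSOR" | jq -r '.quantity')  # assumed output field
+
+if awk -v b="$balance" -v f="$FLOOR" 'BEGIN { exit !(b < f) }'; then
+  echo "sponsor $SPONSOR below floor: $balance XOR" >&2
+fi
+```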
+
+Revoke sponsorship for a user:
+
+```bash
+printf '{
+ "name": "CanUseFeeSponsor",
+ "payload": {
+ "sponsor": "%s"
+ }
+}\n' "$SPONSOR" |
+ iroha --config ./sponsor.client.toml \
+ ledger account permission revoke --id "$USER"
+```
+
+## Related Pages
+
+- [Connect to SORA Nexus Dataspaces](/get-started/sora-nexus-dataspaces.md)
+- [Operate Iroha 3 via CLI](/get-started/operate-iroha-2-via-cli.md)
+- [Assets](/blockchain/assets.md)
+- [Permissions](/blockchain/permissions.md)
+- [Permission Tokens](/reference/permissions.md)
diff --git a/src/get-started/sora-nexus-dataspaces.md b/src/get-started/sora-nexus-dataspaces.md
new file mode 100644
index 000000000..3a2bbf3e0
--- /dev/null
+++ b/src/get-started/sora-nexus-dataspaces.md
@@ -0,0 +1,823 @@
+# Build on SORA 3: Taira and Minamoto
+
+SORA 3 is the app-facing public deployment track built on Iroha 3 and SORA
+Nexus. Build and rehearse on Taira first, then move the same client shape
+to Minamoto only when you have separate mainnet keys, real XOR for fees,
+and production approval.
+
+This tutorial shows how to configure an Iroha client for the public SORA 3
+networks:
+
+- Taira testnet at `https://taira.sora.org`
+- Minamoto mainnet at `https://minamoto.sora.org`
+
+Use Taira for integration tests, faucet-funded write canaries, and
+deployment rehearsals. Use Minamoto only for production-ready mainnet
+activity. Both networks charge fees in XOR:
+
+- Taira uses testnet XOR from the public faucet.
+- Minamoto uses real XOR. There is no Minamoto faucet.
+
+## Builder Path
+
+| Step | Taira Testnet | Minamoto Mainnet |
+| --------------------------- | ------------------------------------------------------------ | -------------------------------------------------- |
+| Start reading network state | Query `/status` without keys | Query `/status` without keys |
+| Pick a dataspace | Use public `universal` unless your app needs a governed lane | Use the same dataspace only after mainnet approval |
+| Get fee asset | Use the public Taira faucet | Obtain real XOR, then claim it into Minamoto |
+| Test writes | Use faucet-funded test XOR | Do not use test tooling; writes spend real XOR |
+| Promote | Keep retry logic, monitoring, and signer handling | Use separate keys, funding, and release controls |
+
+The practical flow is:
+
+1. Build the client against Taira and use the public `universal` dataspace.
+2. Add a signer and fund it with the Taira faucet.
+3. Exercise your app logic against Taira until failures are boring and
+ observable.
+4. Create a separate Minamoto signer, fund it with real XOR, and move only
+ the same proven operations to mainnet.
+
+## 1. Understand What You Are Setting Up
+
+In SORA Nexus, a dataspace is part of the network lane and routing catalog.
+A client does not create a new public dataspace just by changing
+`client.toml`. Client setup does two things:
+
+1. points the client at the right Torii endpoint
+2. selects an account domain that belongs to an existing dataspace
+
+For most applications, start with the public `universal` dataspace. Account
+domains use `domain.dataspace` form, for example:
+
+```text
+wonderland.universal
+```
+
+If you need a new organizational dataspace, prepare a catalog and routing
+proposal instead of trying to register it from an ordinary client account.
+See [Provision a New Dataspace](#_8-provision-a-new-dataspace) below.
+
+## 2. Check the Public Torii Endpoint
+
+Check that the target endpoint is live before configuring a signer.
+
+For Taira:
+
+```bash
+curl -fsS https://taira.sora.org/status \
+ | jq '{peers, blocks, txs_approved, queue_size}'
+```
+
+For Minamoto:
+
+```bash
+curl -fsS https://minamoto.sora.org/status \
+ | jq '{peers, blocks, txs_approved, queue_size}'
+```
+
+Inspect the dataspace and lane view exposed by the node:
+
+```bash
+curl -fsS https://taira.sora.org/status \
+ | jq '.teu_lane_commit[] | {lane_id, alias, dataspace_id, dataspace_alias, visibility}'
+```
+
+Use the same command with `https://minamoto.sora.org/status` for mainnet.
+
+## Taira MCP for Agents
+
+Taira also exposes a Torii-native Model Context Protocol (MCP) bridge for
+agent runtimes. Use it when an agent needs live testnet reads, scripted
+diagnostics, or tightly reviewed write rehearsals without building a custom
+Torii client first.
+
+| Setting | Value |
+| --- | --- |
+| MCP endpoint | `https://taira.sora.org/v1/mcp` |
+| Network root | `https://taira.sora.org` |
+| Intended use | Taira testnet reads and faucet-funded write rehearsals |
+| Production equivalent | Do not point this entry at Minamoto unless a mainnet MCP endpoint and release controls are explicitly approved |
+
+Check the bridge metadata before adding signing material:
+
+```bash
+curl -fsS https://taira.sora.org/v1/mcp \
+ | jq '{protocolVersion, server: .serverInfo.name, tools: .capabilities.tools.count}'
+```
+
+Configure the URL as a user-local MCP server in the agent runtime. Do not
+commit agent MCP config, API tokens, forwarded auth headers, `authority`, or
+`private_key` values into this docs repo or an application repo.
+
+Agent prompt rules that work well with Taira:
+
+- Discover tools from the MCP server before calling them; re-discover if the
+ server reports `listChanged`.
+- Prefer the curated `iroha.*` tools over raw `torii.*` tools.
+- Start read-only: inspect status, accounts, assets, aliases, blocks,
+ governance state, and transaction status before proposing writes.
+- Require an explicit human instruction before live testnet mutations. For
+ pre-signed transaction envelopes, use `iroha.transactions.submit_and_wait`
+ so the agent waits for the result instead of only submitting.
+- Summarize transaction hashes, final status, and server validation errors in
+ the agent response.
+
+### Development Workflow With Agents
+
+Use agents as development helpers for Iroha clients, transaction builders,
+diagnostic scripts, and testnet runbooks. Keep the agent's authority narrow:
+it can inspect code, read Taira state, propose changes, and run local tests,
+but it should not mutate a live network until a human approves the exact
+operation.
+
+A practical workflow is:
+
+1. Ask the agent to inspect the relevant docs, SDK code, CLI command, or MCP
+ tool schema before it writes code.
+2. Have the agent write the smallest client path first: status check, account
+ lookup, alias resolution, or balance lookup.
+3. Add transaction-building code only after read-only calls work against
+ Taira.
+4. Keep live-network tests opt-in, for example behind `TAIRA_LIVE=1`, so a
+ normal unit test run never spends testnet funds or depends on network
+ availability.
+5. Require the agent to report the network root, chain, authority account,
+ instruction summary, fee asset, and expected state change before it submits
+ any transaction.
+6. Review generated code for secret handling, retry behavior, idempotency, and
+ rejection handling before promoting it to CI or mainnet workflows.
+
+Useful read-only MCP tools for development include account asset lookups,
+alias resolution, block lookup, transaction lookup, transaction lists, and
+pipeline status checks. Use these to build confidence before submitting any
+signed payload.
+
+```text
+Use Taira MCP as a read-only inspector while developing this Iroha feature.
+Inspect available iroha.* tools, verify the target account and asset state,
+then update the client code. Do not submit transactions unless I explicitly
+say "submit this transaction".
+```
+
+### Transaction Workflow Through Agents
+
+The MCP bridge can submit a signed Iroha transaction, but it does not remove
+the normal transaction requirements. A transaction still needs a correct
+authority, permissions, fee funding, chain ID, metadata, and signature.
+
+For raw Iroha transactions, build and sign the transaction envelope with an
+SDK or CLI first, then give the agent only the canonical signed transaction
+bytes encoded as `body_base64`. The agent can submit the envelope with
+`iroha.transactions.submit_and_wait`, or submit with
+`iroha.transactions.submit` and poll with `iroha.transactions.wait`.
+
+Do not paste private keys into an agent prompt. If an agent needs to build a
+transaction, point it at local code that loads secrets from the user's runtime
+environment, keychain, hardware signer, or ignored testnet config file. The
+agent should never write the key material into Markdown, fixtures, logs, or
+commits.
+
+Before submitting a transaction, make the agent produce a short transaction
+plan; an example shape follows the list:
+
+- `network`: Taira testnet root and chain ID
+- `authority`: account that signs and pays fees
+- `instructions`: register, mint, burn, transfer, metadata, permission, or
+ contract call summary
+- `fee asset`: asset that will be charged on Taira
+- `preflight reads`: account, asset balance, permissions, alias, or block
+ checks already performed
+- `expected result`: the state that should be visible after confirmation
+- `idempotency`: what happens if the same request is retried
+
+After submission, make the agent wait for a terminal status, then verify the
+state change with a read query. A useful completion report includes:
+
+- transaction hash
+- terminal status such as `Committed`, `Applied`, `Rejected`, or `Expired`
+- block or explorer detail when available
+- verification read results
+- rejection message and whether the failure looks like permissions, fees,
+ validation, stale state, or endpoint availability
+
+Example guarded prompt:
+
+```text
+Prepare a Taira transaction plan, but do not submit yet. Use MCP reads to
+verify the authority account, fee balance, target asset or alias, and current
+transaction status if a hash already exists. Show the exact instructions and
+expected post-state. Wait for my explicit "submit" message before calling
+iroha.transactions.submit_and_wait.
+```
+
+When the signed envelope is already prepared:
+
+```text
+Submit this pre-signed Taira transaction envelope with
+iroha.transactions.submit_and_wait. Use the provided body_base64 only; do not
+ask for private keys. Wait for a terminal status, then verify the resulting
+state with read-only iroha.* tools and report the hash, status, and
+verification result.
+```
+
+Treat Taira MCP as a public testnet control surface. Taira keys, testnet XOR,
+faucet accounts, and canary signers are disposable and must stay separate from
+Minamoto keys and production release workflows.
+
+## Toy Examples You Can Try Now
+
+These examples are read-only unless noted. They work before you generate
+keys and are safe to run against both public networks.
+
+Compare Taira testnet and Minamoto mainnet health:
+
+```bash
+for network in taira minamoto; do
+ root="https://$network.sora.org"
+ printf '\n%s\n' "$network"
+ curl -fsS "$root/status" \
+ | jq '{blocks, txs_approved, txs_rejected, queue_size, peers}'
+done
+```
+
+List the public dataspace lanes exposed by Taira:
+
+```bash
+curl -fsS https://taira.sora.org/status \
+ | jq -r '.teu_lane_commit[]
+ | [.lane_id, .alias, .dataspace_alias, .visibility, .storage_profile, .block_height]
+ | @tsv'
+```
+
+Run the same command against Minamoto when you need the mainnet view:
+
+```bash
+curl -fsS https://minamoto.sora.org/status \
+ | jq -r '.teu_lane_commit[]
+ | [.lane_id, .alias, .dataspace_alias, .visibility, .storage_profile, .block_height]
+ | @tsv'
+```
+
+Build a tiny Node.js status probe for a dashboard, bot, or deployment
+check:
+
+```bash
+node --input-type=module <<'EOF'
+const roots = {
+ taira: 'https://taira.sora.org',
+ minamoto: 'https://minamoto.sora.org',
+};
+
+for (const [name, root] of Object.entries(roots)) {
+ const status = await fetch(`${root}/status`).then((res) => res.json());
+ const publicSpaces = status.teu_lane_commit
+ .filter((lane) => lane.visibility === 'public')
+ .map((lane) => `${lane.dataspace_alias}:${lane.block_height}`)
+ .join(', ');
+
+ console.log(
+ `${name}: ${status.blocks} blocks, ${status.queue_size} queued, public spaces ${publicSpaces}`,
+ );
+}
+EOF
+```
+
+The first write-side toy should be a Taira faucet claim. It uses testnet
+XOR and should never be pointed at Minamoto.
+
+## 3. Create a Taira Client Config
+
+Generate a keypair if you do not already have one:
+
+```bash
+kagami keys --algorithm ed25519 --json
+```
+
+Create `taira.client.toml`:
+
+```toml
+chain = "809574f5-fee7-5e69-bfcf-52451e42d50f"
+torii_url = "https://taira.sora.org/"
+
+[account]
+domain = "wonderland.universal"
+public_key = ""
+private_key = ""
+chain_discriminant = 369
+
+[transaction]
+time_to_live_ms = 100000
+status_timeout_ms = 15000
+nonce = false
+```
+
+Recent CLI builds infer the Taira `chain_discriminant` from the `chain`
+value or from `taira.sora.org`, but keeping it explicit makes the config
+portable across older builds.
+
+Run a read-only check:
+
+```bash
+iroha --config ./taira.client.toml --output-format text ops sumeragi status
+```
+
+Run the public Taira diagnostics before write tests:
+
+```bash
+iroha taira doctor --public-root https://taira.sora.org --json
+```
+
+Fund the Taira account through the faucet before you run fee-paying writes.
+The direct faucet flow is in
+[Get Testnet XOR on Taira](#_4-get-testnet-xor-on-taira).
+
+After the faucet claim is accepted and the account is funded, the Taira
+canary is an optional write smoke test:
+
+```bash
+iroha --config ./taira.client.toml taira write-canary \
+ --public-root https://taira.sora.org \
+ --write-config ./taira.canary.client.toml \
+ --json
+```
+
+The canary submits a signed ping, waits for confirmation, and writes the
+runtime signer config when `--write-config` is provided. Taira is a public
+testnet, so queue saturation can make the signed ping fail even when the
+faucet itself works. If `taira doctor` reports a saturated queue or the
+canary returns `PRTRY:NEXUS_FEE_ADMISSION_REJECTED`, wait and retry before
+treating it as a client configuration error.
+
+For unattended smoke tests, wrap the canary in a bounded retry loop:
+
+```bash
+ok=false
+for attempt in 1 2 3 4 5; do
+ iroha --config ./taira.client.toml taira write-canary \
+ --public-root https://taira.sora.org \
+ --write-config ./taira.canary.client.toml \
+ --json && ok=true && break
+
+ sleep 60
+done
+
+test "$ok" = true
+```
+
+Stop retrying if `iroha taira doctor` shows hard failures. Queue saturation
+and fee-admission rejections are transient public-testnet conditions; DNS,
+TLS, or `status = "fail"` diagnostics are not.
+
+## Generate a SORA Nexus Account ID
+
+A SORA Nexus account ID is a canonical I105 address derived from the
+account public key and the target network prefix. It is not the
+`[account].domain` value in client TOML. The same public key encodes to
+different IDs on Taira and Minamoto, and production users should generate a
+separate keypair for Minamoto.
+
+Generate or load the Ed25519 keypair that will control the account:
+
+```bash
+kagami keys --algorithm ed25519 --json
+```
+
+Convert the public key into a Taira account ID:
+
+```bash
+iroha tools address convert --network-prefix 369
+```
+
+Convert a Minamoto public key with the mainnet prefix:
+
+```bash
+iroha tools address convert --network-prefix 753
+```
+
+Use the resulting account ID wherever a Nexus API or CLI command asks for a
+canonical account ID, for example the Taira faucet `account_id`, balance
+queries, strict account fields, or alias bindings. Keep the matching
+private key in your client config, and keep `chain_discriminant` aligned
+with the same prefix: `369` for Taira and `753` for Minamoto.
+
+Generating the ID does not by itself create a funded on-chain account. On
+Taira, the faucet can create and fund the account for testnet writes. On
+Minamoto, use the approved mainnet onboarding, claim, or treasury flow.
+
+### Key Storage and Backup
+
+The account ID and public key can be shared. The matching private key,
+passphrase, seed, and recovery material must be treated as secret.
+
+Use these practices for SORA Nexus accounts:
+
+- Store private keys in an encrypted password manager, hardware-backed
+ keystore, or dedicated signing service. Do not commit keys to source
+ control or leave production keys in shell history, logs, chat, tickets,
+ or unencrypted backups.
+- Use a unique high-entropy passphrase for each vault or production signer.
+ Store passphrases in a password manager or split custody process, not in
+ the same file or backup bundle as the encrypted private key.
+- Keep Taira and Minamoto keys separate. Treat Taira keys as disposable
+ testnet material and Minamoto keys as production funds authority.
+- Back up the private key, public key, account ID, network prefix,
+ `chain_discriminant`, and any account recovery or custody notes needed to
+ restore the signer. A private key without the network context is easy to
+ misuse during recovery.
+- Keep at least one encrypted offline backup and one geographically
+ separate encrypted backup for production signers. Test recovery with a
+ small read-only operation before depending on the backup.
+- Rotate or replace a signer if the private key, passphrase, backup media,
+ or signing host may have been exposed.
+
+For more detail, see
+[Storing Cryptographic Keys](/guide/security/storing-cryptographic-keys.md)
+and [Password Security](/guide/security/password-security.md).
+
+## 4. Get Testnet XOR on Taira
+
+Use the public faucet directly. The flow is:
+
+1. Generate or load a signer and compute its canonical Taira account ID.
+2. Fetch the current faucet puzzle.
+3. Solve the puzzle if `difficulty_bits` is greater than `0`.
+4. Submit the faucet claim.
+5. Wait for the account or asset balance to become visible before sending
+ fee-paying writes.
+
+Convert a public key into the Taira I105 account ID expected by the faucet:
+
+```bash
+iroha tools address convert --network-prefix 369
+```
+
+Fetch the puzzle:
+
+```bash
+curl -fsS https://taira.sora.org/v1/accounts/faucet/puzzle | jq .
+```
+
+The faucet is a public testnet service. If the puzzle or claim endpoint
+returns `502`, a timeout, or another gateway-level error, wait and retry
+before changing your keys or client config.
+
+The response has this shape:
+
+```json
+{
+ "algorithm": "scrypt-leading-zero-bits-v1",
+ "difficulty_bits": 8,
+ "anchor_height": 741,
+ "anchor_block_hash_hex": "05d2...",
+ "challenge_salt_hex": null,
+ "scrypt_log_n": 13,
+ "scrypt_r": 8,
+ "scrypt_p": 1,
+ "max_anchor_age_blocks": 6
+}
+```
+
+When `difficulty_bits` is `0`, submit only the account ID:
+
+```bash
+curl -fsS https://taira.sora.org/v1/accounts/faucet \
+ -H 'content-type: application/json' \
+ -d '{"account_id":""}'
+```
+
+When `difficulty_bits` is greater than `0`, solve the puzzle and include
+the anchor height plus nonce:
+
+```bash
+curl -fsS https://taira.sora.org/v1/accounts/faucet \
+ -H 'content-type: application/json' \
+ -d '{
+ "account_id": "",
+ "pow_anchor_height": 741,
+ "pow_nonce_hex": ""
+ }'
+```
+
+The puzzle algorithm is:
+
+1. Build the challenge as SHA-256 over:
+ - the bytes of `iroha:accounts:faucet:pow:v2`
+ - the UTF-8 account ID
+ - `anchor_height` as big-endian `u64`
+ - `anchor_block_hash_hex` decoded as bytes
+ - `challenge_salt_hex` decoded as bytes, when present
+2. Try `u64` nonces encoded as big-endian 8-byte values.
+3. For each nonce, run scrypt with:
+ - password: the 8-byte nonce
+ - salt: the 32-byte challenge
+ - `N = 2^scrypt_log_n`
+ - `r = scrypt_r`
+ - `p = scrypt_p`
+ - output length: 32 bytes
+4. The winning nonce is the first digest with at least `difficulty_bits`
+ leading zero bits.
+
+The faucet response includes the funded asset and queued transaction hash:
+
+```json
+{
+ "account_id": "",
+ "asset_definition_id": "6TEAJqbb8oEPmLncoNiMRbLEK6tw",
+ "asset_id": "...",
+ "amount": "25000",
+ "tx_hash_hex": "...",
+ "status": "QUEUED"
+}
+```
+
+The response is currently returned with HTTP `202 Accepted`. The asset
+definition ID above is the Taira fee asset funded by the public faucet. The
+faucet has accepted the request when it returns `tx_hash_hex` and
+`status: "QUEUED"`.
+
+Then poll for the funded asset before submitting your own fee-paying
+transactions:
+
+```bash
+iroha --config ./taira.client.toml ledger asset get \
+ --definition 6TEAJqbb8oEPmLncoNiMRbLEK6tw \
+  --account "$TAIRA_ACCOUNT_ID" # your canonical Taira I105 account ID
+```
+
+If the faucet claim was accepted but the account or asset is not visible
+yet, the transaction is still behind public testnet queue processing. Wait
+and retry the read before sending writes.
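+
+A bounded retry loop in the same style as the canary retry works well here.
+This sketch assumes the read exits non-zero while the asset is not yet
+visible:
+
+```bash
+# Poll for the faucet-funded asset before sending fee-paying writes.
+funded=false
+for attempt in 1 2 3 4 5 6; do
+  iroha --config ./taira.client.toml ledger asset get \
+    --definition 6TEAJqbb8oEPmLncoNiMRbLEK6tw \
+    --account "$TAIRA_ACCOUNT_ID" && funded=true && break
+
+  sleep 30
+done
+
+test "$funded" = true
+```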
+
+For a ready-to-run direct API check, save this as `taira_faucet_claim.py`
+and pass the Taira I105 account ID:
+
+```python
+#!/usr/bin/env python3
+import hashlib
+import json
+import sys
+import urllib.request
+
+
+def has_leading_zero_bits(digest: bytes, bits: int) -> bool:
+ full, rem = divmod(bits, 8)
+ if digest[:full] != b"\0" * full:
+ return False
+ return rem == 0 or digest[full] >> (8 - rem) == 0
+
+
+root = "https://taira.sora.org"
+account_id = sys.argv[1]
+
+with urllib.request.urlopen(f"{root}/v1/accounts/faucet/puzzle") as res:
+ puzzle = json.load(res)
+
+claim = {"account_id": account_id}
+difficulty = int(puzzle["difficulty_bits"])
+
+if difficulty > 0:
+ challenge = hashlib.sha256()
+ challenge.update(b"iroha:accounts:faucet:pow:v2")
+ challenge.update(account_id.encode())
+ challenge.update(int(puzzle["anchor_height"]).to_bytes(8, "big"))
+ challenge.update(bytes.fromhex(puzzle["anchor_block_hash_hex"]))
+ if puzzle.get("challenge_salt_hex"):
+ challenge.update(bytes.fromhex(puzzle["challenge_salt_hex"]))
+
+ n = 1 << int(puzzle["scrypt_log_n"])
+ r = int(puzzle["scrypt_r"])
+ p = int(puzzle["scrypt_p"])
+ salt = challenge.digest()
+
+ for nonce in range(1_000_000):
+ nonce_bytes = nonce.to_bytes(8, "big")
+ digest = hashlib.scrypt(nonce_bytes, salt=salt, n=n, r=r, p=p, dklen=32)
+ if has_leading_zero_bits(digest, difficulty):
+ claim["pow_anchor_height"] = puzzle["anchor_height"]
+ claim["pow_nonce_hex"] = nonce_bytes.hex()
+ break
+ else:
+ raise SystemExit("faucet nonce not found")
+
+request = urllib.request.Request(
+ f"{root}/v1/accounts/faucet",
+ data=json.dumps(claim).encode(),
+ headers={"content-type": "application/json"},
+ method="POST",
+)
+
+with urllib.request.urlopen(request) as res:
+ print(json.dumps(json.load(res), indent=2))
+```
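+
+Run it with the canonical Taira account ID from the address conversion step:
+
+```bash
+python3 taira_faucet_claim.py "$TAIRA_ACCOUNT_ID"
+```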
+
+The faucet is only for Taira testnet funds. Do not use testnet XOR, faucet
+accounts, or Taira canary signers in Minamoto flows.
+
+## 5. Create a Minamoto Client Config
+
+Use a separate keypair for Minamoto. Do not reuse Taira keys for mainnet.
+
+Create `minamoto.client.toml`:
+
+```toml
+chain = "00000000-0000-0000-0000-000000000753"
+torii_url = "https://minamoto.sora.org/"
+
+[account]
+domain = "wonderland.universal"
+public_key = ""
+private_key = ""
+chain_discriminant = 753
+
+[transaction]
+time_to_live_ms = 100000
+status_timeout_ms = 15000
+nonce = false
+```
+
+The explicit `chain_discriminant = 753` is important for Minamoto configs
+until your CLI or SDK version maps `minamoto.sora.org` automatically.
+
+Convert a Minamoto public key into its canonical I105 account ID with the
+mainnet prefix:
+
+```bash
+iroha tools address convert --network-prefix 753
+```
+
+Run only read-side checks until the account is provisioned and funded
+through the mainnet onboarding or governance flow:
+
+```bash
+iroha --config ./minamoto.client.toml --output-format text ops sumeragi status
+```
+
+Do not run the Taira faucet or write-canary helper against Minamoto.
+
+## 6. Get Real XOR on Minamoto
+
+Minamoto fees are paid with real XOR. Before submitting write transactions,
+fund the configured Minamoto account with XOR through an approved mainnet
+path.
+
+First obtain XOR on SORA 2, then move it into Minamoto. Common mainnet
+paths are:
+
+- receive XOR from an existing funded SORA 2 account
+- use [SORA Wallet](https://sora.org/wallet) to hold, receive, and swap
+ supported SORA assets
+- use [Polkaswap](https://sora.org/polkaswap) to swap supported assets into
+ XOR on the SORA network
+
+The SORA wiki describes XOR as the SORA network utility token used for
+transaction fees, and the Polkaswap swap guide explains the normal
+source-asset to destination-asset swap flow. Check route, slippage, and
+fees before signing. This documentation does not recommend a specific
+exchange, bridge, or trade size.
+
+After you have XOR on SORA 2, use the burn-backed Minamoto launch path
+documented in the
+[SORA Nexus Minamoto Mainnet Launch](https://sora-xor.medium.com/sora-nexus-minamoto-mainnet-launch-5ef2819a5deb)
+post:
+
+1. Burn XOR on SORA 2 using the published burn interface:
+ `https://bafybeicmlt7f757a64kw2tzmtnmlgpahs7dlmu3nmjssjbbywre6x3nvr4.ipfs.dweb.link/#/burn`
+2. Use only burns from SORA 2 block `25,867,650` onward for the Minamoto
+ claim flow.
+3. Claim the burned XOR on Minamoto through the SORAFS claim application:
+ `https://minamoto.sora.org/claim`
+4. Send a small read-side or balance check first, then use the funded
+ account for fee-paying writes.
+
+You can also receive real Minamoto XOR from an existing funded Minamoto
+account or an approved operational treasury. Treat Minamoto XOR like
+production funds: test on Taira first, keep separate keys, and do not
+assume transactions can be reset.
+
+Do not treat the Taira faucet as a real-XOR source. Testnet XOR cannot pay
+Minamoto fees and cannot be upgraded into mainnet XOR.
+
+## 7. Work Inside an Existing Dataspace
+
+Use fully qualified domain names for ledger objects that live inside a
+dataspace. For example, a project domain in the public dataspace should
+use:
+
+```text
+apps.universal
+```
+
+After your account has the required permissions, register the domain:
+
+```bash
+iroha --config ./taira.client.toml ledger domain register --id apps.universal
+```
+
+Use the Minamoto config only when the same write is approved for mainnet:
+
+```bash
+iroha --config ./minamoto.client.toml ledger domain register --id apps.universal
+```
+
+Account aliases use the same dataspace suffix:
+
+```text
+alice@apps.universal
+alice@universal
+```
+
+Strict account fields still use canonical I105 account IDs. Treat aliases
+as human-readable bindings that resolve to canonical account IDs.
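+
+To check what an alias resolves to before using it in strict fields, reuse
+the alias lookup helper against the public network:
+
+```bash
+# Resolve a human-readable alias to its canonical I105 account ID.
+iroha --config ./taira.client.toml app alias resolve --alias alice@apps.universal
+```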
+
+## 8. Provision a New Dataspace
+
+A new dataspace is an operator and governance change. The public Torii
+endpoint can route traffic to configured dataspaces, but it will reject
+unknown dataspace aliases.
+
+Before preparing a change, capture the current live catalog:
+
+```bash
+curl -fsS https://taira.sora.org/status \
+ | jq '.teu_lane_commit[] | {lane_id, alias, dataspace_id, dataspace_alias, visibility}'
+```
+
+For an operator account, also check the lane manifest posture:
+
+```bash
+iroha --config ./operator.client.toml app nexus lane-report --summary
+```
+
+Do not promote a new alias unless the lane ID, dataspace ID, validator set,
+fault tolerance, manifest, routing rules, and operational owner have been
+reviewed together. A normal user account can register domains inside an
+existing dataspace; it cannot safely add a new public dataspace.
+
+For a private or organizational dataspace, prepare a catalog change with:
+
+- a unique dataspace alias and numeric `id`
+- a matching lane entry or an existing lane assignment
+- the dataspace `fault_tolerance`
+- routing rules for the instructions or account scopes that should land
+ there
+- a Space Directory manifest or equivalent rollout evidence, when the
+ dataspace exposes UAID capabilities
+- governance approval for validator, compliance, settlement, and monitoring
+ policy
+
+A reviewable config fragment looks like this:
+
+```toml
+[[nexus.lane_catalog]]
+index = 5
+alias = "payments"
+description = "Payments lane"
+dataspace = "payments"
+visibility = "public"
+metadata = {}
+
+[[nexus.dataspace_catalog]]
+alias = "payments"
+id = 20
+description = "Payments dataspace"
+fault_tolerance = 1
+
+[[nexus.routing_policy.rules]]
+lane = 5
+dataspace = "payments"
+[nexus.routing_policy.rules.matcher]
+account_prefix = "payments."
+description = "Route payments domains to the payments dataspace"
+```
+
+Operator acceptance should include these gates:
+
+- `irohad --sora --config --trace-config` passes on the
+ resolved node configuration
+- the generated or reviewed manifest is archived with hashes and signatures
+- smoke tests pass on Taira before any Minamoto promotion
+- the post-change `/status` catalog shows the intended lane and dataspace
+- `iroha app nexus lane-report --summary` does not report missing required
+ manifests
+
+```bash
+curl -fsS https://taira.sora.org/status \
+ | jq '.teu_lane_commit[] | select(.dataspace_alias == "payments")'
+```
+
+Promote the same dataspace to Minamoto only after the Taira deployment,
+smoke tests, monitoring, and governance evidence are complete.
+
+## Related Pages
+
+- [Install Iroha 3](/get-started/install-iroha-2.md)
+- [Operate Iroha 3 via CLI](/get-started/operate-iroha-2-via-cli.md)
+- [Sponsor fees for a private dataspace](/get-started/private-dataspace-fee-sponsor.md)
+- [Torii endpoints](/reference/torii-endpoints.md)
+- [Genesis reference](/reference/genesis.md)
+- [SORA Taira Testnet](https://medium.com/sora-xor/sora-taira-testnet-be8cfc924b58)
+- [SORA Nexus Minamoto Mainnet Launch](https://sora-xor.medium.com/sora-nexus-minamoto-mainnet-launch-5ef2819a5deb)
+- [SORA Wallet](https://sora.org/wallet)
+- [Polkaswap](https://sora.org/polkaswap)
+- [SORA XOR token](https://wiki.sora.org/xor.html)
+- [Swap on the SORA Network](https://wiki.sora.org/swap)
diff --git a/src/guide/advanced/chaos-testing.md b/src/guide/advanced/chaos-testing.md
new file mode 100644
index 000000000..21333c6e8
--- /dev/null
+++ b/src/guide/advanced/chaos-testing.md
@@ -0,0 +1,216 @@
+# Chaos Testing with Izanami
+
+Izanami is the chaosnet orchestrator in the upstream Iroha workspace. It
+starts a disposable local Iroha cluster, submits a configurable workload,
+and injects faults into selected peers so operators can check whether the
+network keeps making progress under controlled failure.
+
+Use Izanami for pre-production resilience checks, regression reproduction,
+and consensus tuning. Do not point it at a production network: the tool is
+designed to own the peers it starts, including peer restarts, storage
+wipes, artificial packet loss, and local CPU or disk pressure.
+
+## Prerequisites
+
+Run Izanami from the
+[`i23-features` branch of the Iroha repository](https://github.com/hyperledger-iroha/iroha/tree/i23-features),
+not from this documentation repository:
+
+```bash
+git clone --branch i23-features https://github.com/hyperledger-iroha/iroha.git
+cd iroha
+cargo build -p izanami
+```
+
+The binary must be explicitly allowed to create and manipulate networked
+peers. Pass `--allow-net` for every non-TUI run, or enable `allow_net` in
+the TUI.
+
+```bash
+cargo run -p izanami -- --allow-net --peers 4 --faulty 1 --duration 120s
+```
+
+For an interactive run configuration:
+
+```bash
+cargo run -p izanami -- --tui --allow-net
+```
+
+Izanami persists TUI and CLI settings under the user config directory, so
+review the displayed settings before reusing a previous profile.
+
+## Baseline Run
+
+Start with one reproducible baseline before adding severe faults:
+
+```bash
+cargo run -p izanami -- \
+ --allow-net \
+ --peers 4 \
+ --faulty 1 \
+ --duration 5m \
+ --target-blocks 100 \
+ --progress-interval 15s \
+ --progress-timeout 120s \
+ --latency-p95-threshold 2s \
+ --tps 15 \
+ --max-inflight 32 \
+ --submitters 1 \
+ --seed 42
+```
+
+This run succeeds only if the cluster reaches the requested block target,
+keeps making progress within the timeout, and stays under the optional p95
+block interval threshold.
+
+Record the command, seed, Iroha commit, peer count, faulty-peer count,
+workload profile, target TPS, and latency threshold with the logs. Without
+these values, another operator cannot replay the same failure pattern.
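+
+A small wrapper that captures this context next to the logs is usually
+enough; the file layout here is only a suggestion:
+
+```bash
+# Record the run context alongside the Izanami output.
+run_dir="izanami-runs/$(date +%Y%m%d-%H%M%S)"
+mkdir -p "$run_dir"
+
+{
+  echo "commit: $(git rev-parse HEAD)"
+  echo "seed: 42"
+  echo "peers: 4  faulty: 1  profile: stable  tps: 15"
+} > "$run_dir/run-metadata.txt"
+
+cargo run -p izanami -- --allow-net --peers 4 --faulty 1 \
+  --workload-profile stable --tps 15 --seed 42 \
+  2>&1 | tee "$run_dir/izanami.log"
+```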
+
+## Workload Profiles
+
+Izanami has two workload profiles:
+
+| Profile | Use it for | Notes |
+| -------- | -------------------------------------------------- | -------------------------------------- |
+| `stable` | Long soak runs and reproducible performance checks | Favors execution-safe recipes |
+| `chaos` | Failure-path coverage | Includes intentionally invalid recipes |
+
+Use the stable profile first:
+
+```bash
+cargo run -p izanami -- --allow-net --workload-profile stable --seed 42
+```
+
+Switch to the chaos profile when the baseline is already understood:
+
+```bash
+cargo run -p izanami -- --allow-net --workload-profile chaos --seed 42
+```
+
+Contract deployment recipes are disabled in stable runs unless explicitly
+allowed:
+
+```bash
+cargo run -p izanami -- \
+ --allow-net \
+ --workload-profile stable \
+ --allow-contract-deploy-in-stable
+```
+
+Use `--nexus` when the run should use the embedded SORA Nexus defaults from
+the upstream workspace.
+
+## Fault Controls
+
+When `--faulty` is greater than zero, at least one fault scenario must be
+enabled. Fault toggles default to enabled, and boolean flags can be
+disabled with `=false`.
+
+| Fault | CLI flag | What it exercises |
+| ------------------------ | ------------------------------------------ | ------------------------------------------ |
+| Crash and restart | `--fault-enable-crash-restart` | Peer process loss and recovery |
+| Wipe storage and restart | `--fault-enable-wipe-storage` | Recovery from missing local state |
+| Invalid transaction spam | `--fault-enable-spam-invalid-transactions` | Admission and rejection paths |
+| Network latency | `--fault-enable-network-latency` | Slow gossip and delayed consensus messages |
+| Network partition | `--fault-enable-network-partition` | Temporary trusted-peer isolation |
+| P2P packet loss | `--fault-enable-network-packet-loss` | Dropped application-frame traffic |
+| CPU stress | `--fault-enable-cpu-stress` | Local validation and scheduling pressure |
+| Disk saturation | `--fault-enable-disk-saturation` | Local storage pressure |
+
+For a packet-loss-only run:
+
+```bash
+cargo run -p izanami -- \
+ --allow-net \
+ --peers 20 \
+ --faulty 5 \
+ --duration 800s \
+ --fault-window-start 133s \
+ --fault-window-end 266s \
+ --tps 200 \
+ --submitters 20 \
+ --max-inflight 512 \
+ --fault-enable-crash-restart=false \
+ --fault-enable-wipe-storage=false \
+ --fault-enable-spam-invalid-transactions=false \
+ --fault-enable-network-latency=false \
+ --fault-enable-network-partition=false \
+ --fault-enable-network-packet-loss=true \
+ --fault-enable-cpu-stress=false \
+ --fault-enable-disk-saturation=false \
+ --fault-network-packet-loss-percent 75 \
+ --seed 42
+```
+
+Use `--fault-window-start` and `--fault-window-end` to keep a controlled
+steady-state period before and after the injected failure. This makes it
+easier to distinguish startup noise from the effect of the fault.
+
+## Scenario Shapes
+
+The upstream Izanami catalog maps common blockchain communication-failure
+shapes to CLI profiles. You can model them with the same flags:
+
+| Scenario | Typical shape |
+| --------------------- | ------------------------------------------------------------------------------------------------------------------------ |
+| Targeted load | `--faulty 0`, high `--tps`, one submitter, high `--max-inflight` |
+| Transient failure | Enable crash/restart only inside a bounded fault window |
+| Packet loss | Enable packet loss only, usually with the default 75% loss rate |
+| Stopping and recovery | Use a large faulty-peer population with crash/restart |
+| Leader isolation | Use exactly one faulty peer with only network-partition or packet-loss faults; Izanami follows Sumeragi leader telemetry |
+
+Change one variable at a time. If you change peer count, workload
+profile, fault window, and TPS in the same run, the result is difficult to
+interpret.
+
+## What to Watch
+
+During the run, watch the same signals used for performance validation:
+
+- block-height progress across every running peer
+- submitted, accepted, rejected, and timed-out transactions
+- queue depth, queue saturation, and endpoint backpressure
+- view changes, recovery paths, missing blocks, and missing quorum
+ certificates
+- RBC backlog, pending sessions, and dropped or delayed consensus traffic
+- CPU, memory, disk, and network saturation on the host running the peers
+
+For validation-latency analysis, enable main-loop debug logs:
+
+```bash
+RUST_LOG=iroha_core::sumeragi::main_loop=debug \
+ cargo run -p izanami -- --allow-net --seed 42
+```
+
+Each block should emit `block validation timings` with `stateless_ms`,
+`execution_ms`, and `total_ms`. Compare those timings with p95 block
+intervals, view-change counters, and queue pressure before changing
+consensus timers.
+
+## Interpreting Results
+
+Treat a run as healthy when all selected peers continue to commit blocks,
+backlog does not grow without bound, and faults stop causing new recovery
+activity after the configured window ends.
+
+Treat a run as a failure when:
+
+- block progress stalls longer than `--progress-timeout`
+- peer heights diverge and do not reconverge
+- p95 latency exceeds `--latency-p95-threshold`
+- queues grow for the rest of the run after a fault window closes
+- rejected or timed-out transactions are not explained by the selected
+ workload
+- peer restart, storage wipe, or packet-loss recovery requires manual
+ cleanup
+
+After a failure, rerun with the same seed and one fewer fault type. This
+keeps the workload and timing reproducible while narrowing the failure
+surface.
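+
+For example, if a defaults-on baseline failed, one narrowing rerun keeps
+the seed and disables a single fault type:
+
+```bash
+cargo run -p izanami -- \
+  --allow-net --peers 4 --faulty 1 --duration 5m --seed 42 \
+  --fault-enable-crash-restart=false
+```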
+
+## Related Pages
+
+- [Performance and Metrics](./metrics.md)
+- [Running Iroha on Bare Metal](./running-iroha-on-bare-metal.md)
+- [Torii endpoints](../../reference/torii-endpoints.md)
diff --git a/src/guide/advanced/hot-reload.md b/src/guide/advanced/hot-reload.md
new file mode 100644
index 000000000..879be6dc2
--- /dev/null
+++ b/src/guide/advanced/hot-reload.md
@@ -0,0 +1,46 @@
+# Hot Reload Iroha in a Docker Container
+
+Use hot reload only for local debugging. For normal local development, prefer
+rebuilding the image or restarting the generated Docker Compose stack from a
+fresh Kagami bundle.
+
+## Replace the Peer Binary
+
+Build a Linux-compatible daemon binary from the upstream workspace:
+
+```bash
+cargo build --release -p irohad --target x86_64-unknown-linux-musl
+```
+
+Copy it into a running peer container, then restart that container:
+
+```bash
+docker cp target/x86_64-unknown-linux-musl/release/irohad <container_name>:/usr/local/bin/irohad
+docker restart <container_name>
+```
+
+Use `docker ps` to confirm the container name. In the generated stack the peer
+containers are defined by `./localnet/docker-compose.yml`.
+
+## Recommit Genesis in a Disposable Network
+
+A peer commits genesis only when its storage is empty. For a disposable Docker
+network, stop the stack, remove generated state, regenerate or replace the
+signed genesis bundle, and start again:
+
+```bash
+docker compose -f ./localnet/docker-compose.yml down
+cargo run --bin kagami -- localnet --build-line iroha3 --peers 4 --out-dir ./localnet
+cargo run --bin kagami -- docker --peers 4 --config-dir ./localnet --image hyperledger/iroha:dev --out-file ./localnet/docker-compose.yml --force
+docker compose -f ./localnet/docker-compose.yml up
+```
+
+Do not replace genesis on a network whose state must be preserved.
+
+## Use Custom Configuration
+
+Current peer configuration is TOML. Bind mount or copy the generated
+`config.toml`, `genesis.signed.nrt`, and related key files into the container
+paths expected by the image, then restart the peer. Keep the generated files
+together; mixing files from different Kagami runs can produce deserialization or
+consensus failures.
diff --git a/src/guide/advanced/metrics.md b/src/guide/advanced/metrics.md
new file mode 100644
index 000000000..80e46f1ec
--- /dev/null
+++ b/src/guide/advanced/metrics.md
@@ -0,0 +1,276 @@
+# Performance and Metrics
+
+Iroha performance depends on the workload, validator topology, network
+conditions, and consensus settings. A single TPS number is therefore only useful
+when it is tied to a benchmark run with a fixed configuration.
+
+For capacity planning, treat performance as an operating envelope:
+
+- the network accepts the requested transaction rate
+- commit latency stays inside the target budget
+- transaction queues stay bounded
+- consensus does not rely on repeated view changes or recovery paths
+
+Use this page to estimate whether a deployment is in a high, medium, or low
+performance state for a given node count, network latency threshold, and target
+TPS.
+
+## What to Measure
+
+Start with the operator surfaces exposed by Torii:
+
+```bash
+export TORII=http://127.0.0.1:8180
+
+curl -s "$TORII/status" | jq .
+curl -s -H 'Accept: application/json' "$TORII/v1/sumeragi/status" | jq .
+curl -s "$TORII/v1/sumeragi/phases" | jq .
+curl -s "$TORII/v1/sumeragi/rbc" | jq .
+curl -s "$TORII/v1/sumeragi/params" | jq .
+curl -s "$TORII/metrics" > metrics.prom
+```
+
+You can try the same read-only pattern against public Taira:
+
+```bash
+TAIRA=https://taira.sora.org
+
+curl -fsS "$TAIRA/status" \
+ | jq '{blocks, txs_approved, txs_rejected, queue_size, peers}'
+
+curl -fsS "$TAIRA/v1/time/status" \
+ | jq '{healthy: .health.healthy, peers, samples_used, rtt_count: .rtt.count}'
+
+curl -fsS "$TAIRA/metrics" \
+ | grep -E '^(block_height|queue_size|sumeragi_tx_queue_depth|txs|view_changes)' \
+ | head -n 20
+```
+
+Public Taira metrics are useful for learning the signal names. Do not use them
+as production capacity numbers for your own deployment.
+
+The same consensus snapshots are available through the CLI:
+
+```bash
+iroha --config ./localnet/client.toml --output-format text ops sumeragi status
+iroha --config ./localnet/client.toml --output-format text ops sumeragi phases
+iroha --config ./localnet/client.toml --output-format text ops sumeragi rbc status
+iroha --config ./localnet/client.toml ops sumeragi params
+iroha --config ./localnet/client.toml ops sumeragi collectors
+```
+
+Telemetry visibility depends on the configured profile. Use `extended` when you
+need `/metrics`, and use `full` during test runs when you also need the detailed
+Sumeragi operator routes.
+
+```toml
+telemetry_enabled = true
+telemetry_profile = "full"
+```
+
+## Performance Bands
+
+Use these bands for an observed run at target throughput `Y` TPS and latency
+budget `L` milliseconds. Run the workload long enough to include warm-up,
+steady state, and at least one period of expected peak load.
+
+| Band | Conditions | Meaning |
+| --- | --- | --- |
+| High | Accepted throughput is at or above `Y`, p95 commit latency is below `0.8 * L`, queues remain below 10% of capacity, and view-change/recovery counters are flat | The deployment has headroom for the requested workload |
+| Medium | Accepted throughput is close to `Y`, p95 commit latency is below `L`, queues are stable below 50% of capacity, and view changes are rare | The deployment works, but there is limited burst tolerance |
+| Low | Accepted throughput is below `Y`, p95 commit latency exceeds `L`, queues grow during the run, or view-change/backpressure counters rise continuously | The requested workload exceeds at least one bottleneck |
+
+The key rule is queue direction. If submitted TPS is greater than committed TPS
+and the queue keeps growing, the deployment is overloaded even if short samples
+look healthy.
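+
+A crude direction check is to sample the queue gauge twice and compare.
+This sketch assumes the `queue_size` series from the signals table below
+is exposed as an unlabeled gauge by your build:
+
+```bash
+q1=$(curl -s "$TORII/metrics" | awk '$1 == "queue_size" { print $2 }')
+sleep 60
+q2=$(curl -s "$TORII/metrics" | awk '$1 == "queue_size" { print $2 }')
+echo "queue_size: ${q1:-n/a} -> ${q2:-n/a}"
+```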
+
+## Node Count and Quorum
+
+More validators improve fault tolerance but increase coordination, signature,
+and network fanout costs. In the current Sumeragi implementation:
+
+- validator count `n` derives the fault budget `f = floor((n - 1) / 3)`
+- for `n >= 4`, commit quorum is `2f + 1`
+- for `n <= 3`, all validators are required for commit
+- observer peers sync blocks but do not vote, propose, or collect
+
+| Validators | Fault budget | Commit quorum | Capacity note |
+| --- | --- | --- | --- |
+| 1 to 3 | 0 (no practical offline slack) | all validators | Useful for development and small tests; any missing validator can stall commits |
+| 4 | 1 | 3 | Common minimum for one-fault tolerance |
+| 7 | 2 | 5 | More resilient, with more vote and propagation traffic |
+| 10 | 3 | 7 | Higher coordination cost; network and collector tuning matter more |
+
+When evaluating "X nodes", separate voting validators from observers. Adding
+observers usually costs less than adding validators, but observers still consume
+block gossip, block sync, disk, and network bandwidth.
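+
+The quorum arithmetic above is easy to script when planning other
+validator counts. This sketch only restates the rules from this section:
+
+```bash
+n=7                        # planned validator count
+f=$(( (n - 1) / 3 ))       # fault budget
+if (( n >= 4 )); then
+  quorum=$(( 2 * f + 1 ))  # commit quorum for n >= 4
+else
+  quorum=$n                # all validators required for n <= 3
+fi
+echo "validators=$n fault_budget=$f commit_quorum=$quorum"
+```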
+
+## Factors That Influence Performance
+
+### Workload Shape
+
+The same TPS can be cheap or expensive depending on what each transaction does.
+Record:
+
+- number of instructions per transaction
+- signature count and signing algorithms
+- transaction byte size and decompressed payload size
+- read/write ratio
+- metadata size and asset operations
+- smart contract, trigger, and IVM execution cost
+- query load running against the same peers
+
+Small transfer transactions are not a proxy for contract-heavy or metadata-heavy
+workloads.
+
+### Consensus Timing
+
+Sumeragi timing is controlled by the effective Sumeragi parameters:
+
+- `block_time_ms`
+- `commit_time_ms`
+- `min_finality_ms`
+- `pacing_factor_bps`
+- NPoS phase timeouts when NPoS mode is enabled
+
+Inspect them with:
+
+```bash
+iroha --config ./localnet/client.toml ops sumeragi params
+curl -s "$TORII/v1/sumeragi/params" | jq .
+```
+
+Lower timing targets can improve latency only while the network, storage, and
+execution layers can keep up. Once view changes, missing-payload fetches, or
+backpressure appear, lowering timers usually makes performance worse.
+
+### Collector Fanout
+
+Collector settings affect how quickly commit votes converge:
+
+- `sumeragi.collectors.k` controls how many collectors assemble votes per height
+- `sumeragi.collectors.redundant_send_r` controls additional vote fanout after a
+ local timeout
+- `sumeragi.collectors.parallel_topology_fanout` adds topology fanout alongside
+ collectors
+
+Increasing fanout can reduce tail latency in larger or less reliable networks,
+but it also increases traffic. Compare the collector plan with latency and
+backpressure metrics before changing these values:
+
+```bash
+iroha --config ./localnet/client.toml ops sumeragi collectors
+```
+
+### Network Conditions
+
+Consensus performance is sensitive to:
+
+- RTT between validators
+- jitter and packet loss
+- bandwidth for block payloads and RBC chunks
+- asymmetric links between regions
+- NAT, firewall, or relay behavior that delays peer connectivity
+
+As a planning rule, set the latency budget high enough to cover several
+validator round trips plus execution and disk commit time. If p95 network RTT is
+already close to the desired p95 commit latency, the target is not realistic.
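+
+A quick RTT baseline between validator hosts can rule out an unrealistic
+budget early. The host names below are placeholders:
+
+```bash
+for host in validator1.example validator2.example validator3.example; do
+  echo "$host: $(ping -c 20 -q "$host" | tail -n 1)"
+done
+```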
+
+### Queues and Admission Limits
+
+Admission and queue settings define how much burst pressure a peer can absorb:
+
+- `queue.capacity`
+- `queue.capacity_per_user`
+- `queue.transaction_time_to_live_ms`
+- genesis transaction limits such as max signatures, instructions, bytes, and
+ decompressed bytes
+- p2p queue caps and consensus ingress limits
+
+High queue capacity can hide overload for a while, but it does not increase
+sustainable throughput. A stable queue is healthy; a growing queue is a backlog.
+
+### Hardware and Storage
+
+Measure every validator, not only the leader:
+
+- CPU saturation during validation, signature verification, and execution
+- memory pressure from queues, snapshots, and active RBC sessions
+- disk write latency for block storage and snapshots
+- network transmit/receive saturation
+- optional hardware acceleration settings when used by the workload
+
+The slowest voting validator can determine the network's tail latency.
+
+## Prometheus Signals
+
+Metric names can vary by build profile and feature set. Inspect `/metrics` on
+your node first, then build dashboards around the available series.
+
+Common signals include:
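+To see which metric families your build exposes, list them first. This
+assumes the `$TORII` variable exported earlier on this page:
+
+```bash
+curl -s "$TORII/metrics" | awk '/^# HELP/ { print $3 }' | sort -u
+```
+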
+
+| Signal | Prometheus examples | What to watch |
+| --- | --- | --- |
+| Accepted throughput | `sum(rate(txs{type="accepted"}[5m]))` | Should meet or exceed target TPS in steady state |
+| Rejections | `sum(rate(txs{type="rejected"}[5m]))` | Should be explainable by the test plan |
+| Commit latency | `histogram_quantile(0.95, sum(rate(commit_time_ms_bucket[5m])) by (le))` | Compare p95/p99 with the latency budget |
+| Queue depth | `queue_size`, `sumeragi_tx_queue_depth` | Should stay bounded during peak load |
+| Queue saturation | `sumeragi_tx_queue_saturated` | Sustained non-zero values mean overload |
+| View changes | `view_changes`, `sumeragi_view_change_suggest_total`, `sumeragi_view_change_install_total` | Rising values indicate timing, topology, payload, or network trouble |
+| Dropped messages | `dropped_messages`, `sumeragi_consensus_message_handling_total` | Drops during load usually explain latency spikes |
+| RBC pressure | `sumeragi_rbc_store_pressure`, `sumeragi_rbc_backpressure_deferrals_total` | Non-zero pressure points to payload recovery or storage bottlenecks |
+| Commit quorum | `sumeragi_commit_signatures_counted`, `sumeragi_commit_signatures_required` | Counted signatures should reach the required quorum quickly |
+
+When a metric exists only in `/v1/sumeragi/status`, capture the JSON snapshot in
+the same run artifacts as the Prometheus scrape.
+
+## Estimation Workflow
+
+1. Define the scenario:
+ - validator count and observer count
+ - consensus mode
+ - target TPS
+ - p95 and p99 commit-latency budgets
+ - transaction mix
+ - expected network RTT, jitter, and bandwidth
+2. Record the effective configuration:
+
+ ```bash
+ iroha --config ./localnet/client.toml --output-format json ops sumeragi params \
+ > artifacts/sumeragi-params.json
+ curl -s "$TORII/v1/sumeragi/collectors" \
+ > artifacts/sumeragi-collectors.json
+ ```
+
+3. Run the workload at the target TPS.
+4. Capture status and metrics at the start, middle, and end of the run.
+5. Classify the run with the performance-band table.
+6. If the band is Medium or Low, change one factor at a time and repeat.
+
+## Benchmark Report Template
+
+Publish performance numbers only with enough context to reproduce them:
+
+- Iroha commit, release, and feature flags
+- validator and observer counts
+- consensus mode and Sumeragi parameters
+- collector `k`, redundant send `r`, and topology fanout
+- telemetry profile
+- hardware, storage, and OS details
+- network RTT, jitter, loss, and bandwidth assumptions
+- transaction mix and payload sizes
+- offered TPS and run duration
+- accepted/rejected TPS
+- p50/p95/p99 commit latency
+- queue depth and saturation
+- view changes, dropped messages, RBC pressure, and missing-payload counters
+- CPU, memory, disk, and network utilization per validator
+
+Without these details, a TPS number should be treated as anecdotal.
+
+## Related Pages
+
+- [Chaos Testing with Izanami](./chaos-testing.md)
+- [Torii endpoints](../../reference/torii-endpoints.md)
+- [Operate Iroha 3 via CLI](../../get-started/operate-iroha-2-via-cli.md)
+- [Peer configuration reference](../../reference/peer-config/params.md)
diff --git a/src/guide/advanced/running-iroha-on-bare-metal.md b/src/guide/advanced/running-iroha-on-bare-metal.md
new file mode 100644
index 000000000..5410d9ab2
--- /dev/null
+++ b/src/guide/advanced/running-iroha-on-bare-metal.md
@@ -0,0 +1,83 @@
+# Running Iroha on Bare Metal
+
+Use this workflow when you want to run peers directly on hosts instead of
+through Docker Compose. The current source tree provides Kagami generators that
+write matching genesis, peer configs, client config, and start/stop scripts.
+
+## 1. Build the Binaries
+
+From the upstream Iroha workspace:
+
+```bash
+cargo build --release -p irohad -p iroha_cli -p iroha_kagami
+```
+
+This produces:
+
+- `target/release/irohad` for the peer daemon
+- `target/release/iroha` for the CLI
+- `target/release/kagami` for key, genesis, and localnet generation
+
+The same workspace also builds `iroha2`/`iroha2d` and `iroha3`/`iroha3d`
+aliases when scripts need to make the selected track explicit.
+
+## 2. Generate a Local Network
+
+Generate a four-peer Iroha 3 localnet:
+
+```bash
+target/release/kagami localnet --build-line iroha3 --peers 4 --out-dir ./localnet
+```
+
+For an Iroha 2-style localnet, set the build line explicitly:
+
+```bash
+target/release/kagami localnet --build-line iroha2 --peers 4 --out-dir ./localnet-iroha2
+```
+
+The output directory contains the generated `genesis.json`,
+`genesis.signed.nrt`, peer `config.toml` files, `client.toml`, helper scripts,
+and a generated `README.md` with exact commands for that bundle.
+
+## 3. Start Peers
+
+For a generated disposable localnet, use the generated script:
+
+```bash
+./localnet/start.sh
+```
+
+If you need to wire each peer into a process manager such as systemd, use the
+launch command recorded in `./localnet/README.md` for each peer. Keep each
+peer's `config.toml`, private key, storage directory, and ports separate.
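+
+A minimal systemd sketch for one peer might look like the following. The
+unit name, user, and paths are illustrative, and `ExecStart` should be the
+exact launch command recorded in the bundle README:
+
+```bash
+sudo tee /etc/systemd/system/iroha-peer0.service >/dev/null <<'EOF'
+[Unit]
+Description=Iroha peer 0
+After=network-online.target
+
+[Service]
+User=iroha
+WorkingDirectory=/opt/iroha/peer0
+ExecStart=/opt/iroha/bin/irohad --config /opt/iroha/peer0/config.toml
+Restart=on-failure
+
+[Install]
+WantedBy=multi-user.target
+EOF
+sudo systemctl daemon-reload
+sudo systemctl enable --now iroha-peer0
+```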
+
+## 4. Operate the Network
+
+Use the generated client config:
+
+```bash
+target/release/iroha --config ./localnet/client.toml ledger domain list all
+target/release/iroha --config ./localnet/client.toml --output-format text ops sumeragi status
+```
+
+Stop the generated localnet with:
+
+```bash
+./localnet/stop.sh
+```
+
+## 5. Production Notes
+
+- Generate fresh private keys for production and store them outside the
+ repository.
+- Make every peer agree on the same signed genesis transaction, topology,
+ trusted peers, and validator PoPs.
+- Bind listener addresses to host-local interfaces only when the peer should
+ not be reachable from other machines.
+- Use a reverse proxy or firewall for Torii exposure, basic auth, TLS, and rate
+ limiting.
+- Treat changes to genesis or consensus topology as coordinated migrations, not
+ single-peer file edits.
+
+For containerized local development, use the [Launch Iroha 3](../../get-started/launch-iroha-2.md)
+Docker Compose workflow.
diff --git a/src/guide/best-practices/application-development.md b/src/guide/best-practices/application-development.md
new file mode 100644
index 000000000..f807e5c4b
--- /dev/null
+++ b/src/guide/best-practices/application-development.md
@@ -0,0 +1,79 @@
+# Application Development
+
+Iroha applications should make transaction behavior explicit, keep signing
+state contained, and use queries and events in ways that are easy to
+observe in production.
+
+## Client Setup
+
+- Store client configuration outside application source code. Load the
+ chain ID, Torii URL, signing account, and transaction settings from
+ environment-specific config.
+- Keep `client.toml` files separate for localnet, Taira, Minamoto, and
+ private networks. A copied testnet signer should never become a mainnet
+ signer.
+- Set transaction lifetimes and status timeouts deliberately. A very short
+  lifetime can let transactions expire under normal network jitter, while a
+  very long one can make duplicate submissions harder to reason about.
+- Use `nonce = true` only when repeated transactions should have distinct
+ hashes. For idempotent business operations, store and reuse an
+ application request ID so retries are traceable.
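+
+One way to keep environments separate, per the first two rules, is to pick
+the config file from an environment name at runtime. The directory layout
+here is illustrative:
+
+```bash
+IROHA_ENV="${IROHA_ENV:-localnet}"   # localnet | taira | minamoto | private
+iroha --config "./config/${IROHA_ENV}/client.toml" ledger domain list all
+```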
+
+See [Client Configuration](/guide/configure/client-configuration.md) for
+the current TOML fields.
+
+## Transactions
+
+- Build transactions from typed SDK instructions where possible instead of
+ raw JSON or string-assembled payloads.
+- Preflight important writes with read-only queries: account existence,
+ asset balances, permission state, fee asset availability, and target
+ object state.
+- Record the transaction hash, authority account, instruction summary, and
+ expected state change before submitting.
+- Treat `Rejected`, `Expired`, and timeout outcomes differently. A timeout
+ means the client did not observe a final status; it does not prove that
+ the network ignored the transaction.
+- After a successful write, verify the resulting state with a query or
+ event checkpoint that matches the business operation.
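+
+A minimal shell shape for that preflight-submit-verify loop, assuming the
+generated localnet client config and the read commands shown in the
+operating guides:
+
+```bash
+CFG=./localnet/client.toml
+iroha --config "$CFG" ledger domain list all   # preflight: confirm target state
+# submit the write here and record its transaction hash
+iroha --config "$CFG" ledger domain list all   # verify: re-read the changed state
+```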
+
+For transaction mechanics, see [Transactions](/blockchain/transactions.md).
+
+## Queries and Events
+
+- Use queries for current state and event streams for change notifications.
+ Avoid replacing event handling with repeated broad queries.
+- Paginate broad iterable queries such as account, asset, and block
+ listings.
+- Prefer narrow filters for subscriptions and triggers. Broad filters are
+ useful for diagnostics but can add unnecessary execution and client-side
+ processing.
+- Keep read-only smoke checks separate from signed transaction tests so
+ endpoint availability is easier to diagnose.
+
+See [Queries](/blockchain/queries.md), [Events](/blockchain/events.md), and
+[Filters](/blockchain/filters.md).
+
+## Agent-Assisted Development
+
+- Let agents inspect docs, SDK code, and read-only network state before
+ asking them to write transaction code.
+- Keep live-network tests opt-in behind an environment flag such as
+ `TAIRA_LIVE=1`.
+- Do not paste private keys, account recovery material, API tokens, or
+ forwarded auth headers into prompts.
+- Require a transaction plan before any agent submits a live testnet
+ transaction. The plan should name the network, authority, instructions,
+ fee asset, preflight reads, expected result, and retry behavior.
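+
+The opt-in flag mentioned above can be a plain guard at the top of any
+live test script:
+
+```bash
+if [ "${TAIRA_LIVE:-0}" != "1" ]; then
+  echo "skipping live Taira tests (set TAIRA_LIVE=1 to enable)" >&2
+  exit 0
+fi
+```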
+
+For the Taira MCP workflow, see
+[Build on SORA 3: Taira and Minamoto](/get-started/sora-nexus-dataspaces.md#taira-mcp-for-agents).
+
+## SDK Hygiene
+
+- Pin SDK and binary versions together using the
+ [Compatibility Matrix](/reference/compatibility-matrix.md).
+- Keep generated client code, snippets, and examples synchronized with the
+ upstream workspace rather than copying older Iroha 2 examples forward.
+- Add unit tests for transaction-building code and integration tests for
+ the smallest read and write paths your application depends on.
diff --git a/src/guide/best-practices/data-modeling.md b/src/guide/best-practices/data-modeling.md
new file mode 100644
index 000000000..8ebe8459b
--- /dev/null
+++ b/src/guide/best-practices/data-modeling.md
@@ -0,0 +1,75 @@
+# Data Modeling
+
+Ledger data should be modeled around ownership, transfer behavior,
+permission boundaries, and query patterns. Choose the smallest on-chain
+representation that can support auditability and deterministic execution.
+
+## Domains and Accounts
+
+- Use domains to represent administrative and policy boundaries. Keep
+ domain names stable because they appear in account and asset identifiers.
+- Avoid overloading a single account with unrelated responsibilities. Use
+ separate accounts for users, services, triggers, operators, and fee
+ sponsors.
+- Use canonical account and domain identifiers in config and tests. Iroha
+ names are case-sensitive after canonical parsing.
+- Keep test and production identities visibly distinct in names, domains,
+ and configuration file paths.
+
+See [Domains](/blockchain/domains.md), [Accounts](/blockchain/accounts.md),
+and [Naming](/reference/naming.md).
+
+## Assets and NFTs
+
+- Use numeric assets for fungible balances and transferable quantities.
+- Use NFTs or domain-specific objects for uniquely owned records.
+- Avoid encoding value-bearing state only in metadata. Assets and NFTs
+ provide lifecycle events, transfer semantics, and permission checks that
+ metadata does not.
+- Define precision, supply policy, issuer responsibility, and burn/mint
+ authority before exposing an asset to applications.
+
+See [Assets](/blockchain/assets.md), [NFTs](/blockchain/nfts.md), and
+[RWAs](/blockchain/rwas.md).
+
+## Metadata
+
+- Use metadata for compact attributes of ledger objects, such as labels,
+ integration IDs, policy flags, hashes, URIs, or content-addressed
+ references.
+- Keep metadata keys stable and documented. Changing key names after
+ clients depend on them creates a migration problem.
+- Do not store large documents, logs, private user data, or high-churn
+ application state directly in metadata.
+- When metadata points to off-chain data, store a verifiable reference such
+ as a content hash, URI, SoraFS path, manifest reference, or compact
+ commitment.
+
+See
+[Metadata and Ledger Storage Choices](/guide/configure/metadata-and-store-assets.md)
+and [Metadata](/blockchain/metadata.md).
+
+## Permissions by Model
+
+- Design roles around business operations, not around implementation
+ conveniences. A role named after a job or service is easier to audit than
+ a role named after a broad technical capability.
+- Scope permission tokens to the smallest object that satisfies the
+ workflow.
+- Treat permissions for minting, burning, peer management, executor
+ changes, trigger management, and metadata mutation as high-impact
+ permissions.
+- Add explicit revocation and rotation procedures for temporary
+ permissions.
+
+See [Permissions](/blockchain/permissions.md) and
+[Permission Tokens](/reference/permissions.md).
+
+## Query Shape
+
+- Choose identifiers and metadata keys that support the queries your
+ application will need most often.
+- Paginate broad result sets and avoid user interfaces that require
+ unrestricted ledger-wide scans for normal actions.
+- Keep off-chain indexes reconstructible from ledger data and events
+ whenever they are used for critical application behavior.
diff --git a/src/guide/best-practices/index.md b/src/guide/best-practices/index.md
new file mode 100644
index 000000000..1e167b6c0
--- /dev/null
+++ b/src/guide/best-practices/index.md
@@ -0,0 +1,43 @@
+# Best Practices
+
+This section collects production-oriented guidance for Iroha applications
+and networks. It is organized by the decision you need to make, not by the
+feature that happens to implement it.
+
+Use it as a checklist before a shared testnet rehearsal, a production
+launch, or a major client release.
+
+## Categories
+
+| Category | Focus |
+| ------------------------------------------------------- | ------------------------------------------------------------------------------------------------------ |
+| [Application Development](./application-development.md) | Client configuration, transaction submission, retries, events, queries, and agent-assisted development |
+| [Data Modeling](./data-modeling.md) | Domains, accounts, assets, NFTs, metadata, off-chain data, and naming conventions |
+| [Network Deployment](./network-deployment.md) | Genesis, topology, peer keys, Torii exposure, consensus settings, and environment separation |
+| [Operations](./operations.md) | Observability, runbooks, backups, change management, capacity checks, and incident handling |
+| [Security and Access](./security-and-access.md) | Secret handling, permissions, technical accounts, network access, and audit trails |
+| [Release Readiness](./release-readiness.md) | Localnet, Taira, Minamoto, compatibility checks, live-network safeguards, and rollback planning |
+
+## Cross-Cutting Rules
+
+- Keep local development, shared testnet, and production configuration
+ separate.
+- Treat genesis, peer topology, executor policy, and key material as
+ controlled deployment artifacts.
+- Model durable ledger state intentionally. Do not use metadata as a
+ dumping ground for large, private, or high-churn data.
+- Submit transactions through idempotent workflows that can handle
+ rejection, expiry, retries, and delayed status.
+- Prefer narrow permissions, dedicated technical accounts, and explicit
+ operational runbooks over broad administrator access.
+- Prove behavior on a disposable local network first, then rehearse on
+ Taira or another shared testnet before any mainnet operation.
+
+## Related References
+
+- [Configuration and Management](/guide/configure/overview.md)
+- [Security](/guide/security/)
+- [Performance and Metrics](/guide/advanced/metrics.md)
+- [Compatibility Matrix](/reference/compatibility-matrix.md)
+- [Torii Endpoints](/reference/torii-endpoints.md)
+- [Permission Tokens](/reference/permissions.md)
diff --git a/src/guide/best-practices/network-deployment.md b/src/guide/best-practices/network-deployment.md
new file mode 100644
index 000000000..0e59ec427
--- /dev/null
+++ b/src/guide/best-practices/network-deployment.md
@@ -0,0 +1,74 @@
+# Network Deployment
+
+Treat an Iroha network as a coordinated system. Validators must agree on
+genesis, topology, trusted peers, and consensus-relevant configuration
+before the network can start and keep finalizing blocks.
+
+## Environment Separation
+
+- Maintain separate config bundles for local development, shared testnet,
+ staging, and production.
+- Generate fresh keys for every non-disposable environment. Do not reuse
+ localnet or Taira key material in production.
+- Keep peer config, client config, signed genesis, scripts, and deployment
+ notes together as a versioned release artifact.
+- Store private keys outside repositories and deployment templates.
+
+See
+[Keys for Network Deployment](/guide/configure/keys-for-network-deployment.md).
+
+## Genesis and Topology
+
+- Make every validator use the same signed genesis transaction, trusted
+ peer set, topology, and validator Proofs-of-Possession when the profile
+ requires them.
+- Use at least four validators for a minimum Byzantine-fault-tolerant
+ deployment.
+- Separate validators from observers in capacity planning. Observers do not
+ vote, propose, or collect, but they still consume storage, block sync,
+ and network bandwidth.
+- Treat genesis, executor, and topology changes as coordinated migrations
+ rather than single-peer edits.
+
+See [Genesis](/reference/genesis.md),
+[Peer Management](/guide/configure/peer-management.md), and
+[Performance and Metrics](/guide/advanced/metrics.md#node-count-and-quorum).
+
+## Torii and Network Access
+
+- Put Torii behind a reverse proxy or firewall when it is exposed outside
+ the host or private network.
+- Terminate TLS and apply basic authentication, rate limiting, and
+ request-size controls at the edge when the deployment requires them.
+- Publish only the endpoints needed by the environment. Operator and
+ telemetry routes should be more restricted than public read-only routes.
+- Bind listener addresses to host-local interfaces when peers should not
+ accept remote traffic directly.
+
+See [Torii Endpoints](/reference/torii-endpoints.md) and
+[Virtual Private Networks](/guide/security/vpn.md).
+
+## Consensus and Capacity
+
+- Measure the deployment before tuning consensus timers. Lower timeouts can
+ reduce latency only while network, storage, and execution layers keep up.
+- Watch queue direction, not just short samples of throughput. A queue that
+ grows during steady load means the network is overloaded.
+- Record effective Sumeragi parameters, telemetry profile, validator count,
+ network RTT, workload shape, and hardware details for each benchmark.
+- Increase collector fanout only after comparing latency, traffic, and
+ backpressure signals.
+
+See [Performance and Metrics](/guide/advanced/metrics.md).
+
+## Bare-Metal and Process Management
+
+- Keep each peer's `config.toml`, private key, storage directory, and ports
+ separate.
+- Use process managers such as systemd with explicit restart, logging, and
+ resource policies.
+- Preserve generated README and start commands from Kagami localnet bundles
+ when translating a test topology to managed hosts.
+
+See
+[Running Iroha on Bare Metal](/guide/advanced/running-iroha-on-bare-metal.md).
diff --git a/src/guide/best-practices/operations.md b/src/guide/best-practices/operations.md
new file mode 100644
index 000000000..6ac02469d
--- /dev/null
+++ b/src/guide/best-practices/operations.md
@@ -0,0 +1,67 @@
+# Operations
+
+Operational readiness means that the network can be observed, changed,
+backed up, and recovered without relying on improvised access to validator
+hosts.
+
+## Observability
+
+- Enable telemetry profiles intentionally. Use `extended` when `/metrics`
+ is needed and `full` during test runs that need detailed Sumeragi
+ operator routes.
+- Dashboard accepted throughput, rejected throughput, commit latency, queue
+ depth, queue saturation, view changes, dropped consensus messages, and
+ storage pressure.
+- Keep status snapshots, metrics scrapes, logs, and deployment
+ configuration in the same incident or benchmark artifact set.
+- Alert on sustained queue growth, unexpected rejection spikes, stalled
+ block height, view-change churn, and peer health changes.
+
+See [Performance and Metrics](/guide/advanced/metrics.md).
+
+## Runbooks
+
+- Write runbooks for peer restart, Torii degradation, key compromise,
+ permission mistakes, fee sponsor depletion, stuck queues, and network
+ partition symptoms.
+- Include exact read-only checks before write operations, especially for
+ peer registration, permission grants, and parameter changes.
+- Keep emergency contacts and escalation rules outside the docs repo if
+ they include private operational data.
+- Review runbooks after every incident, rehearsal, or major upgrade.
+
+See [Operational Security](/guide/security/operational-security.md).
+
+## Backups and Recovery
+
+- Back up peer storage according to the recovery point objective required
+  by the deployment. Validate restores on non-production hosts.
+- Keep signed genesis, release metadata, peer config, and key custody
+ records recoverable even if a validator host is unavailable.
+- Document whether a recovery procedure rebuilds from genesis, restores
+ from a snapshot, or replaces a failed peer with a new identity.
+- Never test restore procedures for the first time during a production
+ incident.
+
+## Change Management
+
+- Treat on-chain configuration changes as transactions that require review,
+ preflight reads, authorization, and post-change verification.
+- Roll out peer binary upgrades with a compatibility plan and a rollback
+ decision point.
+- Avoid changing peer topology, consensus timing, and application workload
+ in the same maintenance window unless the migration plan requires it.
+- Record the transaction hashes and block heights for operational changes.
+
+See [Hot Reload](/guide/advanced/hot-reload.md) and
+[Compatibility Matrix](/reference/compatibility-matrix.md).
+
+## Capacity Reviews
+
+- Re-run load checks when validator count, hardware, network placement,
+ workload mix, or consensus parameters change.
+- Measure warm-up, steady state, and expected peak load rather than relying
+ on a short best-case throughput sample.
+- Compare accepted throughput with committed throughput and queue depth. If
+ submitted TPS exceeds committed TPS and queues grow, the network is past
+ its sustainable envelope.
diff --git a/src/guide/best-practices/release-readiness.md b/src/guide/best-practices/release-readiness.md
new file mode 100644
index 000000000..eb7e7fa89
--- /dev/null
+++ b/src/guide/best-practices/release-readiness.md
@@ -0,0 +1,67 @@
+# Release Readiness
+
+Before promoting an Iroha application or network change, prove the behavior
+in the smallest environment that can expose the relevant risk, then move
+through shared testnet and production gates deliberately.
+
+## Localnet Gate
+
+- Launch a disposable local network with the same Iroha track and the
+ closest practical validator count.
+- Run unit tests for transaction builders, query parsing, rejection
+ handling, and config loading.
+- Exercise the smallest successful read and write paths through the same
+ SDK or CLI shape the application will use later.
+- Capture expected transaction hashes, statuses, events, and state reads in
+ test artifacts.
+
+See [Launch Iroha 3](/get-started/launch-iroha-2.md) and
+[SDK Tutorials](/guide/tutorials/).
+
+## Shared Testnet Gate
+
+- Use Taira or another shared testnet for endpoint behavior, fees, account
+ funding, latency, and operational rehearsals.
+- Keep live testnet writes opt-in so ordinary test runs do not depend on
+ network availability or spend testnet funds.
+- Verify signer funding, fee asset metadata, authority permissions, and
+ expected state before submitting each live test transaction.
+- Wait for a terminal status, then verify the resulting state with a
+ read-only query.
+
+See
+[Build on SORA 3: Taira and Minamoto](/get-started/sora-nexus-dataspaces.md).
+
+## Mainnet or Production Gate
+
+- Use separate production signers, funding, domains, and config paths. Do
+ not promote testnet keys or faucet assumptions.
+- Confirm SDK, CLI, peer, and network compatibility with the
+ [Compatibility Matrix](/reference/compatibility-matrix.md).
+- Review permissions, fee sponsorship, rate limits, monitoring, backup
+ status, and rollback criteria before the release window.
+- Require a written transaction or migration plan for high-impact writes.
+
+## Rollback and Recovery
+
+- Define which changes can be rolled back by code deploy, which require an
+ on-chain transaction, and which cannot be undone directly.
+- For on-chain data changes, prepare compensating transactions or migration
+ scripts before the first production write.
+- For network changes, keep the previous binary, config bundle, signed
+ genesis, and operational runbook available during the release.
+- Set a decision point for aborting the rollout based on objective signals
+ such as rejection rate, queue growth, latency, or peer health.
+
+## Final Checklist
+
+- Configuration is environment-specific and does not contain test-only
+ secrets.
+- Transaction retry behavior is idempotent or explicitly bounded.
+- The application can distinguish rejection, expiry, timeout, and endpoint
+ availability failures.
+- Monitoring covers throughput, latency, queue depth, rejections, view
+ changes, and relevant business events.
+- Operators have runbooks for expected failure modes.
+- Security review covered key custody, permissions, network exposure, and
+ automation authority.
diff --git a/src/guide/best-practices/security-and-access.md b/src/guide/best-practices/security-and-access.md
new file mode 100644
index 000000000..abc63e561
--- /dev/null
+++ b/src/guide/best-practices/security-and-access.md
@@ -0,0 +1,74 @@
+# Security and Access
+
+Security practice in Iroha should be based on narrow authority, controlled
+key custody, explicit network exposure, and auditable changes.
+
+## Key Custody
+
+- Generate production keys with production-grade entropy and store private
+ keys outside repositories, issue trackers, prompts, chat logs, and CI
+ output.
+- Use separate key material for clients, peers, genesis signing,
+ validators, fee sponsors, and technical accounts.
+- Rotate keys according to a written process and rehearse recovery before a
+ live incident.
+- Use hardware-backed or operating-system-backed storage for high-value
+ signing keys when the deployment risk justifies it.
+
+See
+[Generating Cryptographic Keys](/guide/security/generating-cryptographic-keys.md)
+and
+[Storing Cryptographic Keys](/guide/security/storing-cryptographic-keys.md).
+
+## Permissions
+
+- Grant the smallest permission token or role that supports the workflow.
+- Prefer dedicated technical accounts for services, triggers, agents, and
+ automation. Avoid running long-lived automation through a personal
+ operator account.
+- Review permissions for peer management, metadata mutation, minting,
+ burning, trigger registration, executor changes, and SORA/Nexus
+ governance before production launch.
+- Revoke temporary permissions after the maintenance window or migration
+ that required them.
+
+See [Permissions](/blockchain/permissions.md) and
+[Permission Tokens](/reference/permissions.md).
+
+## Network Exposure
+
+- Restrict peer-to-peer, Torii, telemetry, and operator routes according to
+ the environment. Public read access does not imply public write or
+ operator access.
+- Use VPNs, firewalls, reverse proxies, TLS termination, and rate limits
+ where appropriate for the deployment.
+- Keep basic-auth credentials, proxy tokens, and forwarded headers out of
+ committed config.
+- Test that unauthorized clients cannot reach restricted routes.
+
+See [Virtual Private Networks](/guide/security/vpn.md) and
+[Torii Endpoints](/reference/torii-endpoints.md).
+
+## Fraud and Abuse Monitoring
+
+- Monitor ledger events and operational signals for unexpected asset
+ movement, permission grants, trigger changes, peer changes, and repeated
+ rejected transactions.
+- Preserve evidence with transaction hashes, block heights, event records,
+ logs, and status snapshots.
+- Route alerts to the security, operations, and business owners responsible
+ for the affected assets or workflows.
+
+See [Fraud Monitoring](/guide/security/fraud-monitoring.md).
+
+## Agent and Automation Guardrails
+
+- Start automation with read-only permissions and add write authority only
+ after the workflow is reviewed.
+- Require explicit human approval for live-network mutations unless the
+ automation is a deliberately deployed production service.
+- Do not expose private keys to agent prompts. Use local code that loads
+ secrets from environment variables, keychains, hardware signers, or
+ ignored config files.
+- Log automation decisions in a way that supports audits without leaking
+ secret material.
diff --git a/src/guide/configure/client-configuration.md b/src/guide/configure/client-configuration.md
new file mode 100644
index 000000000..a2df3dae6
--- /dev/null
+++ b/src/guide/configure/client-configuration.md
@@ -0,0 +1,99 @@
+# Client Configuration
+
+Iroha CLI and SDK clients use TOML configuration. The repository ships the
+current default at `defaults/client.toml`; generated local networks also write a
+matching `client.toml` into their output directory.
+
+::: details Client configuration template
+
+<<< @/snippets/client.template.toml
+
+:::
+
+## Core Fields
+
+At minimum, a client configuration identifies the chain, Torii endpoint, and
+signing account:
+
+```toml
+chain = "00000000-0000-0000-0000-000000000000"
+torii_url = "http://127.0.0.1:8080"
+
+[account]
+domain = "wonderland.universal"
+public_key = "ed0120..."
+private_key = "802620..."
+```
+
+- `chain` selects the chain to which submitted transactions belong.
+- `torii_url` points at the peer Torii HTTP API.
+- `[account].domain` is used by CLI shortcuts and address-selector encoding;
+ the canonical `AccountId` itself is domainless.
+- `[account].public_key` and `[account].private_key` sign transactions.
+
+The account must already exist on-chain. For the default local network this is
+handled by the bundled genesis manifest.
+
+::: info Case sensitivity
+
+Iroha names are case-sensitive after canonical parsing. For example,
+`wonderland.universal`, `Wonderland.universal`, and
+`looking_glass.universal` are distinct domain literals.
+
+:::
+
+## Basic Authentication
+
+The optional `[basic_auth]` section adds an HTTP `Authorization` header to
+client requests. Iroha peers do not interpret these credentials directly; use
+them when Torii is behind a reverse proxy such as Nginx.
+
+```toml
+[basic_auth]
+web_login = "mad_hatter"
+password = "ilovetea"
+```
+
+## Transaction Settings
+
+Transaction behavior is configured with the `[transaction]` section:
+
+```toml
+[transaction]
+time_to_live_ms = 100000
+status_timeout_ms = 15000
+nonce = false
+```
+
+- `time_to_live_ms` is the transaction lifetime in milliseconds.
+- `status_timeout_ms` controls how long the client waits for transaction
+ status.
+- `nonce = true` asks the client to include a nonce so repeated transactions
+ produce different hashes.
+
+## Connect Queue Settings
+
+Current Iroha clients can also use the optional `[connect]` section for local
+queue state:
+
+```toml
+[connect]
+queue_root = "./queue"
+```
+
+Use this when a workflow needs durable client-side queue storage.
+
+## Generating Configurations
+
+For disposable local networks, prefer Kagami because it writes configs, genesis,
+scripts, and a README that match the selected Iroha 2 or Iroha 3 profile:
+
+```bash
+cargo run --bin kagami -- localnet --build-line iroha3 --peers 4 --out-dir ./localnet
+```
+
+Use the generated `./localnet/client.toml` with the CLI:
+
+```bash
+cargo run --bin iroha -- --config ./localnet/client.toml ledger domain list all
+```
diff --git a/src/guide/configure/genesis.md b/src/guide/configure/genesis.md
new file mode 100644
index 000000000..5daf83c1e
--- /dev/null
+++ b/src/guide/configure/genesis.md
@@ -0,0 +1,61 @@
+# Genesis
+
+Genesis defines the initial chain state. In the current Iroha 2 and Iroha 3
+codebase, the editable source is a JSON manifest and the node consumes a signed
+Norito transaction file.
+
+::: details Default genesis manifest
+
+<<< @/snippets/genesis.json
+
+:::
+
+## Files
+
+The upstream repository ships a default manifest at `defaults/genesis.json`.
+Kagami-generated networks write their own manifest and signed transaction into
+the output directory:
+
+```bash
+cargo run --bin kagami -- localnet --build-line iroha3 --peers 4 --out-dir ./localnet
+```
+
+The generated `README.md` in that directory records the exact files and launch
+commands for the selected profile.
+
+## Peer Configuration
+
+Peers point at the signed genesis transaction in the `[genesis]` section of
+`config.toml`:
+
+```toml
+[genesis]
+file = "./genesis.signed.nrt"
+public_key = "ed0120..."
+```
+
+All peers in the network must agree on the signed genesis transaction and the
+genesis public key.
+
+## Signing Genesis
+
+If you edit a manifest manually, validate and sign it before starting peers:
+
+```bash
+cargo run --bin kagami -- genesis validate ./genesis.json
+cargo run --bin kagami -- genesis sign ./genesis.json \
+ --private-key "$GENESIS_PRIVATE_KEY_HEX" \
+ --algorithm ed25519 \
+ --out-file ./genesis.signed.nrt
+```
+
+For NPoS or Nexus profiles, include the topology and BLS Proofs-of-Possession
+required by the generated profile. Kagami `localnet`, `wizard`, and profile
+generation commands handle those details automatically.
+
+## Recommitting Genesis
+
+A peer only commits genesis when its storage is empty. To test a new genesis in
+a disposable localnet, stop the peers, remove their generated state directory,
+and start from the new signed genesis. Do not replace genesis on a running
+network unless every validator is coordinating the same migration.
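+
+For a generated bundle, that reset is typically the stop script, a state
+wipe, a fresh signing pass, and the start script. The state path below is
+illustrative; check the bundle README for the real one:
+
+```bash
+./localnet/stop.sh
+rm -rf ./localnet/storage
+cargo run --bin kagami -- genesis sign ./localnet/genesis.json \
+  --private-key "$GENESIS_PRIVATE_KEY_HEX" \
+  --algorithm ed25519 \
+  --out-file ./localnet/genesis.signed.nrt
+./localnet/start.sh
+```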
diff --git a/src/guide/configure/keys-for-network-deployment.md b/src/guide/configure/keys-for-network-deployment.md
new file mode 100644
index 000000000..5f0edf4fc
--- /dev/null
+++ b/src/guide/configure/keys-for-network-deployment.md
@@ -0,0 +1,62 @@
+# Keys for Network Deployment
+
+Every network needs distinct key material for clients, peers, genesis signing,
+and, for NPoS or Nexus profiles, BLS validator identities.
+
+## Where Keys Are Used
+
+- Client signing keys are stored in `client.toml` under `[account]`.
+- Peer identity keys are stored in each peer `config.toml` as `public_key` and
+ `private_key`.
+- Peer discovery uses each peer's public key in `trusted_peers`.
+- BLS validator Proofs-of-Possession are stored in `trusted_peers_pop` for NPoS
+ profiles.
+- Genesis signing uses the `[genesis].public_key` in peer config and the
+ matching private key when signing the manifest.
+
+For local or test deployments, let Kagami generate all of these files together:
+
+```bash
+cargo run --bin kagami -- localnet --build-line iroha3 --peers 4 --out-dir ./localnet
+```
+
+For an existing network or profile, use the guided flow:
+
+```bash
+cargo run --bin kagami -- wizard --profile nexus
+```
+
+## Generate Individual Key Pairs
+
+Use `kagami keys` for standalone key material:
+
+```bash
+cargo run --bin kagami -- keys --algorithm ed25519 --json
+```
+
+For BLS validator material, include a Proof-of-Possession:
+
+```bash
+cargo run --bin kagami -- keys --algorithm bls_normal --pop --json
+```
+
+Use `--seed` only for reproducible development fixtures. For production
+deployment, generate fresh keys and store private keys outside the repository.
+
+## Peer Consistency
+
+All validators must agree on the same genesis transaction, topology, trusted
+peer public keys, and validator PoPs. A single missing or mismatched peer key can
+prevent the network from starting or reaching consensus.
+
+For a minimum Byzantine-fault-tolerant deployment, use at least four peers. Each
+peer must have its own private key, but every peer configuration needs the same
+trusted peer set.
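+
+Before first start, it can help to confirm that every host holds the same
+signed genesis bundle. The host names and path below are placeholders:
+
+```bash
+for host in peer0 peer1 peer2 peer3; do
+  ssh "$host" sha256sum /opt/iroha/genesis.signed.nrt
+done
+```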
+
+## Client Accounts
+
+The client account in `client.toml` must already exist on-chain. It can be
+registered by the genesis manifest or by a later transaction. Avoid using the
+genesis signing identity as a long-lived application account; genesis privileges
+only apply during the genesis round, and production clients should use their own
+accounts and roles.
diff --git a/src/guide/configure/metadata-and-store-assets.md b/src/guide/configure/metadata-and-store-assets.md
new file mode 100644
index 000000000..f44e33dbd
--- /dev/null
+++ b/src/guide/configure/metadata-and-store-assets.md
@@ -0,0 +1,60 @@
+# Metadata and Ledger Storage Choices
+
+Older Iroha documentation described a separate `Store` asset type for
+arbitrary key-value data. The current data model does not use that asset
+type. Use the following options instead.
+
+## Metadata
+
+Use [metadata](/blockchain/metadata.md) for small JSON fields that belong
+to a ledger object:
+
+- display names and labels
+- integration IDs
+- small policy flags
+- hashes, URIs, CIDs, or SoraFS paths that point to larger payloads
+
+Metadata is part of world state and is returned with the object that owns
+it. Keep keys stable, values compact, and permissions explicit. Do not
+store large documents, logs, or high-churn application state directly in
+metadata.
+
+## Numeric Assets and NFTs
+
+Use [assets](/blockchain/assets.md) and [NFTs](/blockchain/nfts.md) when
+the state is value-bearing:
+
+- numeric assets for fungible balances
+- NFTs for uniquely owned records
+- [RWAs](/blockchain/rwas.md) and other domain-specific objects when the
+ active data model exposes them
+
+Assets and NFTs have their own IDs, lifecycle events, transfer behavior,
+and permission checks. They are better than metadata when ownership,
+scarcity, or transfer history matters.
+
+## Off-Chain Data
+
+Use off-chain storage for large or mutable payloads. Store only a stable
+reference on-chain, such as:
+
+- a content hash
+- a URI
+- a SoraFS path or manifest reference
+- a compact commitment used by an application proof
+
+This keeps the WSV small while still allowing applications to verify that
+the off-chain payload matches the on-chain reference.
+
+## Choosing a Location
+
+Use this rule of thumb:
+
+- If it is a compact attribute of a ledger object, use metadata.
+- If it is value-bearing or transferable, model it as an asset, NFT, or
+ domain-specific object.
+- If it is large, high-churn, or application-private, store it outside the
+ WSV and put a verifiable reference on-chain.
+
+For metadata permissions, see
+[Permission Tokens](/reference/permissions.md).
diff --git a/src/guide/configure/modes.md b/src/guide/configure/modes.md
new file mode 100644
index 000000000..41bc9def8
--- /dev/null
+++ b/src/guide/configure/modes.md
@@ -0,0 +1,90 @@
+# Public and Private Blockchains
+
+Iroha can run in a variety of configurations. As the administrator of your
+own network, you decide which executor and permission policy determine
+whether a transaction is accepted.
+
+The common profiles are _private_ permissioned networks and more open
+_public_ networks. Both are configured through genesis state and executor
+policy, not through separate node binaries.
+
+Below we outline the major differences in these two use cases.
+
+## Permissions
+
+In a _public_ blockchain, most accounts have the same set of permissions.
+In a _private_ blockchain, most accounts are assumed not to be able to do
+anything outside the authority granted to them unless explicitly granted
+the relevant permission.
+
+::: info
+
+Refer to the
+[dedicated section on permissions](/blockchain/permissions.md) for
+more details.
+
+:::
+
+## Peers
+
+In a _public_ blockchain, peer admission is part of chain policy. For a
+_private_ blockchain, deployments usually pin the trusted peer set in
+configuration and genesis.
+
+::: info
+
+Refer to [peer management](peer-management.md) for more details.
+
+:::
+
+## Registering accounts
+
+Depending on how you decide to set up your
+[genesis block (`genesis.json`)](genesis.md), the process for registering
+an account can differ significantly. To understand why, let's talk about
+permissions first.
+
+The selected executor defines which permission checks apply. You can grant
+the default [permission tokens](/blockchain/permissions.md) in genesis to
+shape a private, administrator-managed network or a more open network.
+Once those permissions are active, the process of registering accounts is
+different.
+
+When it comes to registering accounts, public and private blockchains have
+the following differences:
+
+- In a _public_ blockchain, anyone should be able to register an
+ account[^1]. So, in theory, all that you need is a suitable client, a way
+  to generate a private key for a supported algorithm, and a permission
+  policy that accepts the registration.
+
+- In a _private_ blockchain, you can have _any_ process for setting up an
+ account: it could be that the registering instruction has to be submitted
+ by a specific account, or by a smart contract that asks for other
+ details. It could be that in a private blockchain registering new
+ accounts is only possible on specific dates, or limited by a non-mintable
+ (finite) token.
+
+- In a _typical_ private blockchain, i.e. a blockchain without any unique
+ processes for registering accounts, you need an account to register
+ another account.
+
+The default permission validators cover the typical private blockchain
+use case.
+
+::: info
+
+Public and private modes are policy profiles rather than separate node
+binaries. Review the executor and genesis permissions you ship before
+running an open network.
+
+:::
+
+Refer to the section on
+[instructions](/blockchain/instructions.md#un-register) for more
+details about `Register` instructions.
+
+[^1]:
+ Current account IDs are canonical and derive from the account
+ controller. The docs still use "register an account" when describing
+ the `Register` instruction.
diff --git a/src/guide/configure/overview.md b/src/guide/configure/overview.md
new file mode 100644
index 000000000..1376600fa
--- /dev/null
+++ b/src/guide/configure/overview.md
@@ -0,0 +1,20 @@
+# Configuration and Management
+
+Iroha configuration has two layers:
+
+- **local peer and client configuration**, stored in TOML files or environment
+ variables and read at process startup
+- **on-chain configuration**, changed by transactions through
+ [`SetParameter`](/blockchain/instructions.md#setparameter)
+
+Use local configuration for node identity, addresses, logging, storage, and
+client signing keys. Use on-chain configuration for values that must be agreed
+by the network and replayed deterministically.
+
+The main configuration entry points are:
+
+- [Genesis](/guide/configure/genesis.md)
+- [Client configuration](/guide/configure/client-configuration.md)
+- [Keys for network deployment](/guide/configure/keys-for-network-deployment.md)
+- [Running on bare metal](/guide/advanced/running-iroha-on-bare-metal.md)
+- [Peer configuration reference](/reference/peer-config/index.md)
diff --git a/src/guide/configure/peer-management.md b/src/guide/configure/peer-management.md
new file mode 100644
index 000000000..ce00ffe76
--- /dev/null
+++ b/src/guide/configure/peer-management.md
@@ -0,0 +1,82 @@
+# Peer Management
+
+If you followed any of the language-specific guides, you now have a
+well-functioning network that people will want to join.
+
+## Public Blockchain
+
+In an open network, peer admission is still a chain policy decision. A node
+can run the correct software and connect to Torii, but it only participates
+in consensus after the network admits its peer identity.
+
+## Private Blockchain
+
+In a bank setting, allowing everyone to join at their leisure is a security
+risk. For safety, private Iroha deployments usually pin the peer topology in
+configuration and genesis instead of relying on open discovery.
+
+### Registering peers
+
+To add a peer to the network, it must be manually registered. Let's discuss
+the steps that should be taken in order to complete this process.
+
+#### 1. Grant the user permissions
+
+The user that registers the peer must have the appropriate
+`PermissionToken`. This could be granted as part of a `role`, or as part of
+a one-time allowance.
+
+How do you decide whether to grant a role? Granting a role makes sense if
+a user is to serve as an administrator of sorts, whose responsibility is
+to maintain the peers in the network long-term. A one-time permission
+grant is useful when the party registering the peer isn't responsible for
+registering peers in general, but the network administrator doesn't need
+(or want) to spend time setting up the new peer themselves.
+
+::: info
+
+The default executor uses the `CanManagePeers` permission token for
+registering and unregistering peers.
+
+:::
+
+We discuss permissions and roles with more detail in a
+[separate chapter](/blockchain/permissions.md).
+
+#### 2. Set up a peer
+
+Once the registering user has the required permissions, the new peer
+itself must be set up.
+
+It's a good idea to request information about the configuration of the
+existing peers in the network. Torii exposes node parameter and capability
+endpoints for this, but for now they must be queried manually. Until the
+[bootstrapping procedure](https://github.com/hyperledger-iroha/iroha/issues/1184 '#1184')
+is implemented, you'll have to check by hand that values such as timeouts
+and batch sizes match.
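+
+For example, you can probe a candidate peer's Torii before registration. A
+minimal sketch (the address is illustrative, and route names may differ
+between versions; see the Torii endpoints reference):
+
+```bash
+# Query a running peer's status route to confirm it is reachable and to
+# inspect its reported parameters before submitting the registration.
+curl -s http://127.0.0.1:8080/status
+```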
+
+To simplify the process, you can ask the network administrator for a
+redacted version of `config.toml`, which excludes privileged information,
+such as peer private keys.
+
+#### 3. Submit the instruction
+
+_After_ your peer is running, you should submit the _register peer_
+instruction. The peer will go through the handshake process and start
+chatting with the network.
+
+::: tip
+
+Submitting a peer registration instruction **does not** (and cannot)
+instantiate a _new peer process_.
+
+:::
+
+### Unregistering peers
+
+What about unregistering peers? For security reasons this process is
+one-sided: the network reaches consensus that it wants to remove a peer,
+but the peer itself isn't told why nobody is talking to it anymore.
+
+In most circumstances you unregister a peer because it is faulty or
+malicious (Byzantine). Simply "ghosting" such a peer makes life harder for
+a malicious actor on the network.
diff --git a/src/guide/diagrams-src/ffi.tex b/src/guide/diagrams-src/ffi.tex
new file mode 100644
index 000000000..c15eee592
--- /dev/null
+++ b/src/guide/diagrams-src/ffi.tex
@@ -0,0 +1,73 @@
+\documentclass[tikz,border={0 1}]{standalone}
+% \usepackage{dejavu}
+\usepackage[scaled=0.9]{DejaVuSansMono}
+% \usepackage{stackrel}
+
+\usetikzlibrary{patterns,decorations.markings,backgrounds}
+\usetikzlibrary{decorations.pathreplacing}
+
+\begin{document}
+\begin{tikzpicture}[thick]
+
+
+\coordinate (A_l) at (0, 0); \coordinate (A_r) at (8, 0);
+\coordinate (B_l) at (0.5, 2); \coordinate (B_r) at (8, 2);
+\coordinate (C_l) at (1, 4); \coordinate (C_r) at (8, 4);
+\coordinate (D_l) at (0.8, 6); \coordinate (D_r) at (8, 6);
+\coordinate (E_l) at (0.5, 8); \coordinate (E_r) at (8, 8);
+\coordinate (F_l) at (0, 10); \coordinate (F_r) at (8, 10);
+
+
+\node[shift={(0.65,0.3)}, font=\sffamily\bfseries] at (F_l) {START};
+\node[shift={(0.65,0.3)}, font=\sffamily\bfseries] at (E_l) {START};
+\node[shift={(0.65,0.3)}, font=\sffamily\bfseries] at (D_l) {START};
+
+\node[shift={(0.45,-0.3)}, font=\sffamily\bfseries] at (A_l) {END};
+\node[shift={(0.45,0.3)}, font=\sffamily\bfseries] at (B_l) {END};
+\node[shift={(0.45,0.3)}, font=\sffamily\bfseries] at (C_l) {END};
+
+\node[shift={(-1.7,0.3)}, font=\sffamily] at (F_r) {\texttt{rust::Domain::new}};
+\node[shift={(-1.2,0.3)}, font=\sffamily] at (E_r) {\texttt{Domain\_\_new}};
+\node[shift={(-1.2,0.3)}, font=\sffamily] at (D_r) {\texttt{Domain\_\_new}};
+
+\draw (A_l) -- (A_r);
+\draw (B_l) -- (B_r);
+\draw (C_l) -- (C_r);
+\draw (D_l) -- (D_r);
+\draw (E_l) -- (E_r);
+\draw (F_l) -- (F_r);
+
+% Top arrow
+\draw [latex-](2.,8.25) -- (2.,9.75);
+% 2nd top arrow
+\draw [latex-](2.75,6.25) -- (2.75,7.75);
+\draw [decorate,decoration={brace,amplitude=10pt},xshift=-4pt,yshift=0pt]
+(2.5,6.25) -- (2.5,7.75) node [black,midway,xshift=-1.5cm, font=\sffamily]
+{CONTEXT 1};
+% 3rd top arrow
+\draw [latex-](1.6,2.25) -- (1.6,3.75);
+% 4th top arrow
+\draw [latex-](1.5,0.25) -- (1.5,1.75);
+% 4th brace
+\draw [decorate,decoration={brace,amplitude=10pt},xshift=-4pt,yshift=0pt]
+(1.25,0.25) -- (1.25,1.75) node [black,midway,xshift=-1.5cm, font=\sffamily]
+{CONTEXT 2};
+
+\definecolor{grey}{rgb}{0.7,0.7,0.7}
+\path[
+ pattern=north east lines,
+ pattern color=grey
+] (2,4) rectangle (7.8,6);
+
+\node[text width=2cm, font=\sffamily] at (3.4,9.0) {\texttt{Into FFI \\ As Repr C}};
+
+\node[font=\sffamily] at (4.4,6.95) {Try From Repr C};
+
+\node[text width=3cm, text centered, font=\sffamily\bfseries] at (4.8,5.0) {\texttt{METHOD \\ CALL}};
+
+\node[text width=2cm, font=\sffamily] at (3.0,3.0) {\texttt{Into FFI \\ Output::write}};
+
+\node[font=\sffamily] at (3.4,1) {\texttt{Try From Repr C}};
+
+\end{tikzpicture}
+\end{document}
diff --git a/src/guide/index.md b/src/guide/index.md
new file mode 100644
index 000000000..924559920
--- /dev/null
+++ b/src/guide/index.md
@@ -0,0 +1,34 @@
+# Guide
+
+Use this section when you are building, operating, or integrating with
+Iroha. Start with the SDK tutorials for a first client, then move to the
+best practices and operator references before deploying against a shared
+network.
+
+## Sections
+
+| Section | Use it for |
+| ------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------- |
+| [SDK Tutorials](/guide/tutorials/) | Language-specific client setup and sample applications |
+| [Best Practices](/guide/best-practices/) | Production-oriented guidance grouped by development, data modeling, deployment, operations, security, and release readiness |
+| [Configuration and Management](/guide/configure/overview.md) | Local peer configuration, genesis, client configuration, keys, and peer management |
+| [Security](/guide/security/) | Key handling, operational security, VPNs, fraud monitoring, and permission hygiene |
+| [Advanced Operations](/guide/advanced/metrics.md) | Metrics, performance checks, chaos testing, hot reload, and bare-metal operation |
+
+## Recommended Path
+
+1. [Install Iroha 3](/get-started/install-iroha-2.md) and
+ [launch a local network](/get-started/launch-iroha-2.md).
+2. Pick an [SDK tutorial](/guide/tutorials/) and submit a small
+ transaction.
+3. Review
+ [Application Development](/guide/best-practices/application-development.md)
+ and [Data Modeling](/guide/best-practices/data-modeling.md) before
+ shaping an application API.
+4. Use [Network Deployment](/guide/best-practices/network-deployment.md),
+ [Operations](/guide/best-practices/operations.md), and
+ [Security and Access](/guide/best-practices/security-and-access.md)
+ before running a shared or production network.
+5. Follow [Release Readiness](/guide/best-practices/release-readiness.md)
+ when promoting from local development to Taira, Minamoto, or another
+ live deployment.
diff --git a/src/guide/reports/csd-rtgs.md b/src/guide/reports/csd-rtgs.md
new file mode 100644
index 000000000..ddc0e8602
--- /dev/null
+++ b/src/guide/reports/csd-rtgs.md
@@ -0,0 +1,3 @@
+# CSD/RTGS Linkages Proof of Concept
+
+This document describes the execution of the CSD/RTGS linkages proof of concept (PoC) on Iroha. The project was carried out in collaboration with the Asian Development Bank and Fujitsu. Other participants used technologies such as R3 Corda, Hyperledger Cactus (Cacti), Hyperledger Fabric, and other popular blockchain solutions.
diff --git a/src/guide/security/fraud-monitoring.md b/src/guide/security/fraud-monitoring.md
new file mode 100644
index 000000000..e523a5afd
--- /dev/null
+++ b/src/guide/security/fraud-monitoring.md
@@ -0,0 +1,134 @@
+# Fraud Monitoring
+
+Fraud monitoring for an Iroha deployment is an operational control built around
+ledger events, queries, permissions, and application context. Iroha records what
+was submitted, accepted, rejected, and committed. Your monitoring system decides
+which patterns are suspicious for your business process and routes those cases
+to reviewers or automated response controls.
+
+Treat fraud monitoring as a separate service rather than logic embedded in a
+validator. The service should subscribe to ledger activity, enrich it with
+off-chain risk context, persist evidence, and submit response transactions only
+through accounts that have explicit permissions.
+
+## Monitoring Model
+
+A useful monitoring pipeline has four stages:
+
+1. **Collect** ledger and operator signals from Torii event streams, queries,
+ and metrics.
+2. **Enrich** events with off-chain context such as customer status,
+ counterparty lists, application session identifiers, expected limits, and
+ case IDs.
+3. **Detect** suspicious behavior with deterministic rules, reviewer queues, or
+ risk scoring.
+4. **Respond** by alerting operators, pausing application-side workflows,
+ revoking unnecessary permissions, or submitting compensating transactions
+ when your governance process allows it.
+
+Keep policy decisions outside consensus unless every validator must replay the
+same decision. Runtime validation should enforce permissions and transaction
+validity. Fraud monitoring should explain risk, preserve evidence, and help
+operators act quickly.
+
+## Signals to Collect
+
+Start with narrow subscriptions and add broader streams only for investigation:
+
+| Signal | Source | Use |
+| --- | --- | --- |
+| Transaction status | Pipeline events | Detect repeated rejections, failed authorization attempts, and unusual submission patterns |
+| Account lifecycle and metadata | Data events and account queries | Detect new accounts, alias changes, identity updates, and unexpected metadata edits |
+| Asset balances and transfers | Asset data events and asset queries | Detect high-value movement, rapid fan-out, balance drains, and unusual counterparties |
+| Roles and permissions | Role and permission queries, role data events | Detect privilege escalation, emergency grants, and stale high-risk access |
+| Trigger and contract changes | Trigger, contract, and executor events | Detect new automation, changed execution paths, and suspicious upgrade activity |
+| Configuration and peer changes | Configuration and peer events | Detect governance changes that affect validation, networking, or operator visibility |
+| Operator health | `/metrics` and Sumeragi status routes | Separate suspicious user behavior from node overload, queue pressure, or network faults |
+
+Use [event filters](/blockchain/filters.md) to avoid processing the entire event
+stream when a rule only needs accounts, assets, roles, or configuration changes.
+For periodic reconciliation, combine the stream with paginated
+[queries](/blockchain/queries.md) so the monitor can recover after downtime.
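+
+As a starting point, the operator-health signal can be collected with plain
+HTTP tooling. A minimal sketch, assuming the telemetry profile is enabled
+and that Torii listens on an illustrative local address:
+
+```bash
+# Scrape the metrics route and show the first few series; in production,
+# feed the full output into Prometheus or another collector instead.
+curl -s http://127.0.0.1:8080/metrics | head
+```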
+
+## Detection Rules
+
+Common rule families include:
+
+| Rule family | Example condition | Typical response |
+| --- | --- | --- |
+| Velocity | An account transfers more than the expected amount or count within a short window | Alert reviewers and pause application-side withdrawals for that account |
+| Fan-out | Funds move from one account to many newly seen accounts | Require manual approval before allowing additional transfers |
+| Balance drain | A large share of an account balance leaves shortly after a key, alias, or metadata change | Escalate as possible account takeover |
+| Privilege escalation | A high-risk permission or role is granted outside a change window | Alert operators and review the grant transaction |
+| Rejection burst | One signer or client produces repeated rejected transactions | Check for credential abuse, integration errors, or probing |
+| Automation change | A trigger, contract, or executor-related object changes unexpectedly | Pause dependent workflows until the change is reviewed |
+| Governance-sensitive change | Peer, configuration, or runtime state changes occur without an approved ticket | Compare against the governance record and incident process |
+
+Rules should be explicit about the evidence they require, the time window they
+evaluate, the action they take, and the person or system that can close the
+case. Thresholds that depend on customer risk, asset type, or jurisdiction
+belong in your monitoring service configuration, not in ad hoc scripts.
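+
+A rejection-burst rule, for instance, can be prototyped offline against a
+captured event log. A minimal sketch, assuming a hypothetical
+`events.jsonl` file whose records carry `status` and `authority` fields
+(adapt both names to the payloads your collector actually persists):
+
+```bash
+# Count rejected transactions per authority and list the noisiest signers.
+jq -r 'select(.status == "Rejected") | .authority' events.jsonl \
+  | sort | uniq -c | sort -rn | head
+```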
+
+## Response Controls
+
+Design response actions before enabling alerts. A high-severity fraud case
+should have a documented path from detection to containment:
+
+- notify the security, operations, and business owners responsible for the
+ affected domain or asset definition
+- preserve the event cursor, block hash, transaction hash, authority, payload,
+ and query snapshot used by the detection rule
+- pause application-side actions that are outside the ledger, such as checkout,
+ withdrawal, signing, bridge, or settlement workflows
+- revoke roles or permissions that are no longer justified by the incident
+ response plan
+- submit follow-up ledger transactions only when the active governance policy
+ and permission model allow them
+- rotate keys when the evidence suggests signer compromise
+
+Avoid giving the monitoring service broad write access. Use a dedicated
+technical account with the smallest set of permissions required for the response
+actions it is allowed to perform. Human approval should remain part of any
+workflow that can move assets, change permissions, or alter validator-facing
+configuration.
+
+## Evidence and Retention
+
+Store monitoring evidence in an append-only system that is separate from the
+validator data directory. Each alert should include:
+
+- event stream name and cursor
+- block height or block hash when available
+- transaction hash and authority
+- affected account, domain, asset, role, trigger, or configuration ID
+- raw event payload or a canonical hash of it
+- query snapshots used to enrich the alert
+- rule name, version, threshold, score, and reviewer decision
+
+Do not store sensitive investigation notes as public ledger metadata unless the
+network's data governance policy explicitly allows it. If you need to link an
+off-chain case to on-chain state, prefer a case identifier, signed attestation,
+or hash commitment that does not expose private details.
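+
+A canonical hash of a raw payload is easy to produce with standard tooling.
+A minimal sketch (the file name is illustrative):
+
+```bash
+# Canonicalize the captured event payload (sorted keys, compact output)
+# and commit to it with a BLAKE2 digest; share only the digest outside
+# the evidence system.
+jq -cS . event.json | b2sum | cut -d' ' -f1
+```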
+
+## Implementation Checklist
+
+- Enable the telemetry profile needed for `/metrics` and operator routes.
+- Subscribe to Torii event streams with narrow filters for the objects you
+ monitor.
+- Persist event cursors so the monitor can resume without gaps.
+- Reconcile streams with paginated queries on a regular schedule.
+- Keep risk thresholds and allow lists in version-controlled configuration.
+- Test alert rules against historical blocks before enabling automated actions.
+- Use dedicated technical accounts for response actions.
+- Review role and permission grants on a recurring schedule.
+- Include fraud-monitoring alerts in the incident response process.
+
+## Related Pages
+
+- [Events](/blockchain/events.md)
+- [Filters](/blockchain/filters.md)
+- [Queries](/blockchain/queries.md)
+- [Permissions](/blockchain/permissions.md)
+- [Performance and Metrics](/guide/advanced/metrics.md)
+- [Torii endpoints](/reference/torii-endpoints.md)
+- [Operational Security](/guide/security/operational-security.md)
diff --git a/src/guide/security/generating-cryptographic-keys.md b/src/guide/security/generating-cryptographic-keys.md
new file mode 100644
index 000000000..4e9bee692
--- /dev/null
+++ b/src/guide/security/generating-cryptographic-keys.md
@@ -0,0 +1,72 @@
+# Generating Cryptographic Keys
+
+Use `kagami keys` to generate client, peer, and validator key material for the
+current Iroha 2 and Iroha 3 codebase.
+
+## Basic Usage
+
+From the Iroha source checkout:
+
+```bash
+cargo run --bin kagami -- keys --algorithm ed25519
+```
+
+JSON output is usually easiest to copy into TOML or automation:
+
+```bash
+cargo run --bin kagami -- keys --algorithm ed25519 --json
+```
+
+The command prints a public key and an exposed private key. Treat the private
+key as secret material; do not commit generated production keys.
+
+## Algorithms
+
+Common algorithms are:
+
+- `ed25519` for client accounts, streaming identities, and most development
+ networks.
+- `secp256k1` when you need a secp256k1 account identity.
+- `bls_normal` for validator consensus keys when the build enables BLS support.
+
+Check the exact algorithms supported by your build with:
+
+```bash
+cargo run --bin kagami -- keys --help
+```
+
+## Deterministic Development Keys
+
+For reproducible fixtures, pass a seed:
+
+```bash
+cargo run --bin kagami -- keys --algorithm ed25519 --seed "dev-alice" --json
+```
+
+Seeds are private-key material. Use them only for local development and tests.
+
+## BLS Proofs-of-Possession
+
+NPoS and Nexus validator profiles require BLS validator keys and PoPs:
+
+```bash
+cargo run --bin kagami -- keys --algorithm bls_normal --pop --json
+```
+
+The JSON includes `pop_hex` when `--pop` is used. Use that value with the
+generated topology or `trusted_peers_pop` entries required by the profile.
+
+## Output Formats
+
+Use the default output for terminal inspection, `--json` for automation, and
+`--compact` when another script needs plain line-oriented values:
+
+```bash
+cargo run --bin kagami -- keys --algorithm ed25519 --compact
+```
+
+To generate the full Kagami command-line help:
+
+```bash
+cargo run -p iroha_kagami -- advanced markdown-help > crates/iroha_kagami/CommandLineHelp.md
+```
diff --git a/src/guide/security/index.md b/src/guide/security/index.md
new file mode 100644
index 000000000..3b8895d1c
--- /dev/null
+++ b/src/guide/security/index.md
@@ -0,0 +1,39 @@
+# Security
+
+When utilizing Iroha—or any other blockchain ledger, for that matter—security is paramount for financial organizations, as it forms the foundation of trust in an industry where sensitive financial data and transactions are routine. A successful security breach by a malicious party can have devastating consequences. Therefore, establishing preemptive security measures is essential to protect the integrity and confidentiality of your sensitive data.
+
+### Navigation
+
+In this section you can learn about various aspects of securing your Iroha network. To learn more, choose one of the following topics:
+
+- [Security Principles](./security-principles.md):
+
+ The core security principles that individuals and organizations can adopt to protect their data and decrease the chance of a breach and/or leak.
+
+- [Virtual Private Networks](./vpn.md):
+
+ How to use a VPN to restrict peer-to-peer, Torii, and operator access in private or consortium deployments.
+
+- [Operational Security](./operational-security.md):
+
+ Best practices for securing the day-to-day operations of your network, including access controls, monitoring, incident responses, the use of browsers, etc.
+
+- [Fraud Monitoring](./fraud-monitoring.md):
+
+ How to use ledger events, queries, permissions, and operational signals to detect suspicious activity and preserve response evidence.
+
+- [Password Security](./password-security.md):
+
+ A deep-dive into password entropy, creating strong passwords and avoiding password vulnerabilities.
+
+- [Public Key Cryptography](./public-key-cryptography.md):
+
+ An introduction into public key cryptography, encryption, signatures, and their role in establishing secure communication within the blockchain.
+
+ - [Generating Cryptographic Keys](./generating-cryptographic-keys.md):
+
+ Instructions on how to generate cryptographic keys and use `kagami` (a supporting tool shipped alongside Iroha).
+
+ - [Storing Cryptographic Keys](./storing-cryptographic-keys.md):
+
+ Best practices for securing your cryptographic keys with a number of different approaches that can also be combined.
diff --git a/src/guide/security/operational-security.md b/src/guide/security/operational-security.md
new file mode 100644
index 000000000..185016114
--- /dev/null
+++ b/src/guide/security/operational-security.md
@@ -0,0 +1,95 @@
+# Operational Security
+
+Operational Security (OPSEC) is a systematic approach to security and risk management, which is essentially a collection of strategies and advice adopted for specific use-cases with the aim of preventing unauthorized access and data leakage.
+
+OPSEC is the standard practice for most companies to guarantee the availability and stability of their assets. This includes considering such factors as physical security (e.g., making sure that unattended post-it notes do not contain sensitive data), secure communication protocols (e.g., not sending sensitive data over unencrypted SMS), threat analysis (e.g., determining potential malicious parties, learning about the latest attack methods), personnel training (e.g., without employees following OPSEC measures, they _will_, sooner or later, prove to be ineffective), and risk mitigation (e.g., encrypting your hard drives and USB devices).
+
+Since Iroha is likely to be deployed as a financial ledger, OPSEC measures and practices must be taken seriously. This topic describes strategies and approaches that individuals and organizations using Iroha in their operations should consider as part of their extensive security protocol.
+
+Following and adopting the guidelines in this topic is a necessary step towards achieving total security, however, it is not sufficient on its own. To further improve your security, learn more throughout the rest of the [Security](./index.md) section and specifically the following topics:
+
+- [Security Principles](./security-principles.md)
+- [Password Security](./password-security.md)
+
+## Recommended OPSEC Measures
+
+- Stay vigilant. The [most likely](https://arxiv.org/pdf/2209.08356.pdf) way in which one can lose their assets in a blockchain is by giving away their sensitive details.
+
+- Encrypt your disks. Encrypted boot devices protect your data even if an attacker has gained access to the hardware. Doing the same for your portable devices is twice as important.
+
+- Use trusted software. Software that ships as reproducible binary builds, or that you build from source, is the most trustworthy. Proprietary or open-source software that hasn't been audited is a potential risk that must be taken seriously.
+
+- Never leave portable devices with sensitive data unattended. A split second is enough to steal your device.
+
+- Verify the signatures on binary packages. This is not too different from the public key cryptography used inside Iroha.
+
+- To prevent unauthorized access, always secure your laptop or personal computer when leaving it unattended. Use strong passwords, lock the screen, and follow best practices for securing your devices.
+
+- Establish a secure [air-gapped](https://en.wikipedia.org/wiki/Air_gap_(networking)) location for your keys. First, encrypt the keys, then store them on an _offline-only_ device, ideally with electromagnetic shielding installed. [Hardware keys](./storing-cryptographic-keys.md#using-a-hardware-key) are specifically designed for this purpose.
+
+- Always keep your software updated to the latest version across all devices, including computers and phones. Regular updates patch vulnerabilities and minimise the risks associated with outdated software, sometimes even before such vulnerabilities are publicly disclosed.
+
+- Develop a routine for periodically updating passwords and cryptographic keys. This proactive approach significantly contributes to enhancing overall security posture, since it is much harder to hit a moving target.
+
+## Using Browsers
+
+If an application connected to Iroha features a web UI, your browser can either aid the security or pose a potential threat. It is essential to exercise caution, especially when it comes to the plugins you choose to install.
+
+Consider the following measures to enhance your browsing security:
+
+- Avoid using browsers that are known for having bad security models and for leaking their users' data.
+
+  You can look up privacy violations and security issues for any browser. For example, [this article on browser privacy](https://www.unixsheikh.com/articles/choose-your-browser-carefully.html) discusses a variety of browsers and how secure they are. Note that proprietary browsers (such as Chrome, Safari, Opera, Vivaldi, Edge, and others) are generally much harder to audit because their code is hidden from the public, which means you cannot be sure how secure they are.
+
+- Give preference to browsers with solid history of valuing and protecting their users' privacy and security:
+  - [Librewolf](https://librewolf.net/), [Icecat](https://www.gnu.org/software/gnuzilla/), [Firedragon](https://github.com/dr460nf1r3/firedragon-browser), etc. — well-established forks of Mozilla Firefox with added security features.
+  - [Ungoogled chromium](https://github.com/ungoogled-software/ungoogled-chromium) — a highly audited open-source version of Google Chromium that is enhanced with additional security measures and has all of the Google-related web services removed.
+ - [Brave](https://brave.com/) — a highly audited open-source version of [Google Chromium](https://www.chromium.org/Home/) that is enhanced with additional security measures; has a built-in VPN and ad blocker functionality.
+ - [Falkon](https://www.falkon.org/) — an open-source Qt-based web browser (built on `QtWebEngine`, a wrapper for [Google Chromium](https://www.chromium.org/Home/)) with known track record of being secure; has a number of extensions available for download from its [KDE store page](https://store.falkon.org/browse/).
+ - [Qutebrowser](https://qutebrowser.org/) — an open-source Qt-based web browser (built on `QtWebEngine`, a wrapper for [Google Chromium](https://www.chromium.org/Home/)) with known track record of being secure; has a unique keyboard-focused approach with minimalist GUI; considered to be a browser of choice for many security specialists.
+
+- Avoid enabling `JavaScript` unless necessary.
+
+- Use the browser's built-in confinement mechanism for plugins to restrict the access rights that the installed plugins have.
+
+- Clear cookies before and after important operations. Be mindful not to enable the **Keep Me Signed In** or **Remember me** feature. Keep in mind that some websites have this feature enabled by default.
+
+- Use an ad blocker. These not only block ads but also disable site tracking features. Depending on the browser you use, an ad blocker may not be a built-in feature.
+
+- Be mindful of lookalike characters (e.g., `0`, `θ`, `O`, `О`, `ዐ` and `߀` are six different characters). Paying attention to details like this may save you from a phishing attack.
+
+- Avoid web UI email clients in favour of desktop clients. Before using one, set up your desktop email client to sign and verify GPG key signatures.
+
+- Avoid using web-based messaging services. For instance, Discord (built with the infamous `electron` framework) is susceptible to many of the same attacks as a Google Chromium window with the web version of Discord open.
+
+- Update your browser to the latest version whenever possible. Updates often include critical security patches that address vulnerabilities.
+
+- Be cautious of what browser extensions you install. Only use well-known and trusted extensions from reputable sources. Rogue extensions can compromise your data and privacy.
+
+- Create separate browser profiles for various tasks. Use one profile for everyday browsing and another for activities involving high security and sensitive data. This way, extensions installed on the profile for everyday browsing cannot access the sensitive data from the secure one.
+
+- Use a portable version of your browser copied to a USB flash drive. This method ensures that even if a security bug grants one of the installed plugins with access to data between the profiles, your security-related profile remains on a separate and removable device.
+
+- Periodically clear your browser's cache and cookies to remove potentially sensitive data that may accidentally be stored on your device.
+
+## Recovery Plan
+
+In the event of an emergency, such as losing a key or facing a security breach, a well-structured recovery plan prepared in advance is an essential lifeline. A clear set of steps to follow helps mitigate potential damage and promptly reinstate security.
+
+Organizations should consider the following key aspects when developing their recovery plan:
+
+- Outline step-by-step procedures to be followed in case of key loss or other security incidents. Ensure that these steps are easily accessible and understandable by the users and/or employees.
+
+- Establish a communication channel that can be used to promptly report security breaches and potential threats, such as leaked or lost cryptographic keys and passwords.
+
+- If you utilize hardware keys (e.g., [YubiKey](https://www.yubico.com/products/) or [SoloKeys Solo](https://solokeys.com/collections/all)) as a security measure, consider adopting a redundancy strategy. Keep two keys: one for daily use and another stored in a secure location. This precaution ensures access even if the primary key is compromised or lost.
+
+- When security breaches or leaks are reported, react promptly by replacing or disabling affected keys and passwords. This proactive response minimizes the potential risks and damage.
+
+- Periodically review and update your recovery plan. This ensures that the plan remains relevant and effective as your security landscape evolves.
+
+::: warning
+
+Remember that a recovery plan is not just another document. Rather, it's a lifeline that helps navigate unexpected challenges. By anticipating potential scenarios and establishing a clear roadmap for action, you fortify your operational security and enhance your readiness to respond effectively to any security incident.
+
+:::
diff --git a/src/guide/security/password-security.md b/src/guide/security/password-security.md
new file mode 100644
index 000000000..c380920a5
--- /dev/null
+++ b/src/guide/security/password-security.md
@@ -0,0 +1,79 @@
+# Password Security
+
+In the realm of blockchain security, protecting passwords is paramount. To ensure your data and everything it represents remain impervious to unauthorized access, let's delve into the nuances of password security.
+
+## Password Strength
+
+Likely enough, you may have previously encountered recommendations on how to come up with a _strong_ password. These may entail such advice as minimum password length, addition of special characters, etc. Such recommendations aim to increase the strength of your password that hinges on entropy, i.e. randomness of the password.
+
+So, what defines a _strong password_? A strong password is a password with _high entropy_.
+
+To calculate the entropy of a password, we may follow the **Entropy formula**:
+
+::: tip Entropy formula
+
+$L$ — Password length; number of symbols in the password.\
+$S$ — Character set; size of the pool of unique possible symbols.\
+$S^L$ — Number of possible combinations.
+
+$$Entropy=\log_2(S^L)$$
+
+The resulting number is the amount of entropy bits in a password. The higher the number, the harder the password is to crack.
+
+Knowing the entropy value, the number of attempts required to brute-force a password with said entropy can be derived using the following formula:
+
+$$S^L=2^{Entropy}$$
+
+There is no universal answer as to how high the entropy of a password should be. For financial organizations, it is advised to keep password entropy in the range of `64` to `127` bits (`128` bits or more is generally considered overkill). However, keep in mind that GPUs are constantly evolving, and the time required to crack a password keeps decreasing.
+
+:::
+
+Following the entropy formula, let us compare the following two examples:
+
+ 1. A 16-character password with a character set of only the lowercase letters of the modern English alphabet (26 characters) yields approximately 43 sextillion ($43\times10^{21}$) possible combinations.
+
+    $$Entropy=\log_2(26^{16})=\log_2(43,608,742,899,428,874,059,776)=75.20703...$$
+
+ 2. A 16-character password with the character set expanded to 96 symbols, including uppercase letters and special characters, inflates the number of possible combinations to a staggering 52 nonillion ($52\times10^{30}$), improving entropy significantly.
+
+    $$Entropy=\log_2(96^{16})=\log_2(52,040,292,466,647,269,602,037,015,248,896)=105.35940...$$
+
+As can be seen, merely expanding the character set from 26 to 96 symbols increases the number of possible combinations that a malicious party would need to brute-force by a factor of about $1.1933\times10^{9}$.
+
+Additionally, increasing the length of the password grows the number of possible combinations even further, thereby enhancing the entropy—the strength—of the password.
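+
+You can reproduce these entropy figures with standard command-line tools. A minimal sketch using `bc`, where `l()` is the natural logarithm, so `l(x)/l(2)` computes a base-2 logarithm:
+
+```bash
+# Entropy in bits for a 16-character password over a 26-symbol set
+echo 'l(26^16)/l(2)' | bc -l   # prints roughly 75.207
+# ...and over a 96-symbol set
+echo 'l(96^16)/l(2)' | bc -l   # prints roughly 105.359
+```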
+
+However, instead of wrestling with complexities, we advise using a password manager program—like [KeePassXC](https://keepassxc.org/) (for more details, see _[Adding a Password Manager Program](./storing-cryptographic-keys.md#adding-a-password-manager-program)_ and _[Configuring KeePassXC](./storing-cryptographic-keys.md#configuring-keepassxc)_)—to generate and securely store your passwords.
+
+::: tip
+
+Certain websites limit the maximum possible entropy of passwords, i.e., either limit the maximum password length or the set of accepted characters, or both.
+
+Keep this in mind when using such websites and aim to periodically update your passwords.
+
+:::
+
+## Password Vulnerabilities
+
+Passwords can fall victim to brute-force attacks, typically executed using powerful GPUs in conjunction with dictionaries or exhaustive iteration through all possibilities. To thwart such attempts, craft a unique password devoid of personal information like birthdays, addresses, phone numbers, or social security numbers. Avoid providing attackers with easily guessable clues.
+
+So, how hard it is to crack a modern password? It really depends on who you ask.
+
+With a setup like [Kevin Mitnick](https://en.wikipedia.org/wiki/Kevin_Mitnick)'s [cluster setup](https://twitter.com/kevinmitnick/status/1649421434899275778?s=20) housing 24 NVIDIA® GeForce RTX 4090s and 6 NVIDIA® GeForce RTX 2080s, all of them running [Hashtopolis](https://github.com/hashtopolis) software, he used to crack passwords that were supposed to take a year in a mere half a month.
+
+However, let's now compare it to a single RTX 4090, capable of processing roughly 300 GH/s with [`NTLM`](https://www.tarlogic.com/cybersecurity-glossary/ntlm-hash) and 200 kH/s with [`bcrypt`](https://en.wikipedia.org/wiki/Bcrypt), as outlined in [this tweet](https://twitter.com/Chick3nman512/status/1580712040179826688).
+
+As an extension of our previous entropy calculations, let's now examine the following projected cracking times:
+
+ 1. There are approximately $31,540,000$ seconds in a year. Assuming the worst-case scenario with `NTLM`, at a speed of $300\times10^{9}$ H/s, it would take a single RTX 4090 approximately $4,608.83$ years to crack a 16-character password drawn from the 26 lowercase letters of the modern English alphabet.
+
+ 2. If instead of `NTLM` we use `bcrypt`, reducing the speed to $200\times10^{3}$ H/s, while also expanding the character set to 96 symbols, including uppercase letters and special characters, the time to crack soars to about $8,249,887,835,549,662,270.456$ years, far surpassing the age of the universe.
+
+So, simply picking a higher-entropy password raises the cracking time to unfathomable numbers. The process can be sped up with multiple GPUs, but even that pales in comparison with the [XKCD approach](https://xkcd.com/538/) of simply coercing the key holder.
+
+It is important to note that an extensive character set isn't always necessary to reach high entropy. It can be obtained by using multi-word passwords, or lengthy sentences in particular. The classic [XKCD comic](https://xkcd.com/936/) illustrates this concept eloquently.
+
+::: warning
+
+Avoid writing your password down anywhere. Store your password recovery phrase securely. If the phrase is too long, you may write it down, ensuring that you can read it out and type it out later. Store the physical copy of the phrase in a secure location and/or container.
+
+:::
diff --git a/src/guide/security/public-key-cryptography.md b/src/guide/security/public-key-cryptography.md
new file mode 100644
index 000000000..b7f2148b7
--- /dev/null
+++ b/src/guide/security/public-key-cryptography.md
@@ -0,0 +1,39 @@
+# Public Key Cryptography
+
+Public key cryptography provides the means for secure communication and data protection, enabling activities such as secure online transactions, encrypted email communications, etc.
+
+Public key cryptography employs a pair of cryptographic keys—a _public_ key and a _private_ key—to create a highly secure method of transmitting information over online networks.
+
+It's easy to derive a public key from a private key, but the reverse is computationally infeasible. This one-way property is what keeps the scheme secure: you can freely share your public key without risking your private key, which remains secret.
+
+## Encryption and Signatures
+
+Public key cryptography allows individuals to send encrypted messages and data that can only be deciphered by the intended recipient possessing their corresponding private key. In other words, the public key functions as a lock, and the private key serves as an actual unique key that unlocks the encrypted data.
+
+This encryption process not only ensures the privacy and confidentiality of sensitive information but also establishes the authenticity of the sender. By signing data with the sender's private key, a digital _signature_ is created. This signature serves as a digital stamp of approval, verifying the sender's identity and the integrity of the transferred data. Anyone with your _public_ key can verify that the person who initiated the transaction used your _private_ key.
+
+## Keys on the Client Side
+
+Every transaction must be signed by an account authority. The private key or
+controller material for that authority must stay secret, so client software is
+responsible for secure storage and signing.
+
+::: warning
+
+All clients are different, but plain-text client configuration is only suitable
+for development and controlled test networks. Production integrations should
+use a secret manager, hardware-backed key storage, or another audited signing
+boundary.
+
+**This is currently a reference implementation that will _not_ be a part of the production release.**
+
+:::
+
+Registering a new account entails generating controller material, such as an
+Ed25519 key pair, and submitting the public part to the network. Later
+transactions from that account must be signed by the matching private key or by
+the configured account controller policy.
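+
+For example, a fresh Ed25519 key pair for a new account can be produced
+with `kagami`, as described in
+[Generating Cryptographic Keys](./generating-cryptographic-keys.md):
+
+```bash
+# Generate an Ed25519 key pair; submit the public key with the account
+# registration and keep the private key secret.
+cargo run --bin kagami -- keys --algorithm ed25519 --json
+```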
+
+For public key cryptography to work effectively, avoid reusing key pairs when you need to specify a new key. While nothing stops you from doing so, public keys are _public_: if an attacker sees the same public key used in two places, they know that the corresponding private keys are also identical.
+
+Even though _private_ keys operate on slightly different principles than passwords, the advice—*to make them as random as possible, never store them unencrypted and never share them with anyone under any circumstances*—applies.
diff --git a/src/guide/security/security-principles.md b/src/guide/security/security-principles.md
new file mode 100644
index 000000000..e2fca7d73
--- /dev/null
+++ b/src/guide/security/security-principles.md
@@ -0,0 +1,99 @@
+# Security Principles
+
+Organisations and individual users need to work together to ensure secure interactions with Iroha installations. This topic explains the basic principles behind this cooperation.
+
+## General Security Principles
+
+1. Use a [Virtual Private Network](./vpn.md) (VPN):
+
+ - Whenever accessing sensitive data or resources, especially over public networks, use a VPN to establish a secure connection that safeguards your information.
+
+2. Use a firewall for network protection:
+
+ - Strengthen home and/or office networks by setting up a firewall that helps to counter unauthorized access and protect the connected devices from viruses and malware.
+
+3. Secure physical and digital information:
+
+ - Safeguard physical documents containing sensitive information in a secure location, and ensure digital documents are encrypted and stored in password-protected folders.
+
+4. Keep Regular Data Backups:
+
+ - Always have copies of your important information saved somewhere safe. This way, if you lose your data or something goes wrong, you can quickly get everything back on track. Keep these backups in a different secure place from where you usually keep your data.
+
+## Security Principles for Individual Users
+
+1. Adopt robust authentication rules:
+
+ - Utilise strong and unique passwords for all accounts.
+
+ - Never reuse passwords.
+
+   - Set up 2FA whenever possible. 2FA improves overall security by requiring not only a password but also an additional factor, such as an OTP, fingerprint, or third-party app-based authentication (e.g., Google Authenticator).
+
+   - Avoid using SMS authentication as the second factor. There is no guarantee that malicious software is not monitoring all of your SMS messages. For example, Android applications cannot be limited to only accessing the messages intended specifically for them.
+
+2. Exercise caution in digital communication:
+
+   - Set up an email client to sign and verify signatures of all received emails. While it is possible to impersonate a sender's address and even pose as a bank, it is not possible to fake a signature.
+
+   - Disable both HTML messages and the loading of external resources from unknown or unverified addresses.
+
+   - Learn about common phishing techniques to recognise and avoid suspicious emails, links, and requests for personal information.
+
+3. Safeguard personal information:
+
+ - When communicating with unfamiliar individuals, especially on the phone or online, be careful about sharing private information.
+
+ - Consider independently researching the individuals or organizations you are communicating with to confirm the legitimacy of their identity.
+
+ - Be mindful of the personal information you share on social media platforms, as malicious parties can exploit this information.
+
+## Security Principles for Organisations
+
+1. Establish clear security policies and procedures:
+
+ - Develop well-defined security policies and protocols for all employees dealing with sensitive data. Thoroughly train employees to adhere to these guidelines, mitigating the risk of negligent actions.
+
+ - Ensure that security policies are accessible to all employees and are regularly reviewed and updated to reflect changing security landscapes.
+
+ - Provide the security policies with examples and scenarios to make them more relatable and actionable for employees.
+
+2. Cultivate employee awareness:
+
+ - Educate employees about data and operational security measures. Heightened awareness and comprehensive training are pivotal in fortifying organizational security.
+
+ - Encourage employees to report any suspicious activities or security concerns promptly.
+
+3. Protect physical infrastructure:
+
+ - Restrict physical entry to servers and infrastructure. Set up access controls that only allow authorised personnel to enter restricted areas.
+
+ - Ensure that access control measures are regularly reviewed and updated to align with evolving security needs.
+
+ - Consider implementing biometric access controls for sensitive areas to enhance physical security.
+
+4. Deploy security monitoring:
+
+ - Enforce a comprehensive security monitoring system that scrutinizes activities and identifies potential security breaches.
+
+ - Implement automated alerts to promptly notify security personnel of any unusual or unauthorized activities.
+
+ - Consider using machine learning algorithms to enhance the system's ability to detect anomalies and potential threats.
+
+ - Employ staff or designate personnel to oversee database security, identify, track and address software vulnerabilities, and conduct regular checks on critical machines for the presence of unauthorized software not included in the approved list.
+
+5. Conduct recurring security audits:
+
+ - Perform routine security audits to evaluate vulnerabilities and confirm that established security measures align with the commonly-accepted standards and regulations.
+
+ - Consider hiring external security experts for periodic assessments to gain an impartial evaluation of your organization's security condition.
+
+6. Implement an access control system:
+
+ - Set up a role-based access control system to ensure that employees only have access to the resources and information necessary for their roles.
+
+7. Embrace Continuous Improvement:
+
+ - Recognize that security is a continuous process. Maintain ongoing assessment of security measures and proactively enhance them to address emerging threats and challenges.
+
+ - Consider establishing a feedback loop that encourages employees to contribute security improvement suggestions, fostering the culture of continuous enhancement.
diff --git a/src/guide/security/storing-cryptographic-keys.md b/src/guide/security/storing-cryptographic-keys.md
new file mode 100644
index 000000000..c3b2b312c
--- /dev/null
+++ b/src/guide/security/storing-cryptographic-keys.md
@@ -0,0 +1,137 @@
+# Storing Cryptographic Keys
+
+Your sensitive data only remains private if you adopt OPSEC practices to protect the cryptographic keys. Social engineering threats, where someone posing as a figure with authority tries to manipulate you into giving them your private cryptographic key, are real. Always be cautious and avoid sharing your private key, treating it as you would your apartment keys—reserved for trusted individuals only.
+
+For more information on OPSEC and its best practices, see [Operational Security](./operational-security.md).
+
+## Storing Cryptographic Keys Digitally
+
+When it comes to protecting cryptographic keys digitally, two main approaches—[SSH](https://www.ssh.com/) and [GPG](https://www.gnupg.org/)—are commonly used. These methods provide layers of security that prevent unauthorized access to your cryptographic keys.
+
+Many Iroha architectural decisions have been influenced by the principles of the **Secure Shell** (`SSH`) protocol, which is why this section primarily focuses on the `SSH` approach, offering instructions on how to effectively implement the protocol for storing your cryptographic keys within the Iroha ecosystem.
+
+### Using SSH and SSH Agent
+
+**Secure Shell Protocol** (`SSH`) is a cryptographic network protocol that serves as a virtual gateway, enabling secure access to remote machines via potentially not-so-secure networks by using SSH keys—access credentials. It provides an efficient way to remotely interact with systems without the necessity of physical presence. In this context, `SSH` offers two primary authentication mechanisms: the conventional password-based approach and the more secure public-private key pair method.
+
+For more information on `SSH`, see [the related SSH Academy topic](https://www.ssh.com/academy/ssh).
+
+To streamline the login process and bypass the need for repetitive input, it is possible to pair the `SSH` keys with the **SSH Agent** (`ssh-agent`)—the assistant program that remembers your `SSH` keys and/or password for the duration of a session. This setup permits the `SSH` gateway to effortlessly access the keys whenever it connects to other machines.
+
+The workflow here is as follows: you have your public key stored on a remote system and keep your private key secure. Whenever you want to access a remote system, the `ssh-agent` steps in to communicate your _public_ key to the accessed system. The remote system then sends back a [challenge](https://en.wikipedia.org/wiki/Challenge%E2%80%93response_authentication) that only your _private_ key can properly respond to. Your `ssh-agent` handles this challenge by using your _private_ key and sends the correct response back to the remote system. If the response matches what the system expected, you're granted access.
+
+The beauty of the `ssh-agent` is that it holds onto your private key during your session, so there is no need to keep entering your password or private key passphrase every time you connect to a remote system.
+
+For more information on the `ssh-agent`, see [the related SSH Academy topic](https://www.ssh.com/academy/ssh/agent).
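+
+In practice, the workflow looks like the following sketch (the key path is illustrative):
+
+```bash
+# Start an agent for this shell session, load a private key into it, and
+# list the identities the agent currently holds.
+eval "$(ssh-agent -s)"
+ssh-add ~/.ssh/id_ed25519
+ssh-add -l
+```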
+
+::: info Note
+
+For a detailed overview of the `SSH` protocol and the `ssh-agent` tool, see the following [SSH Academy](https://www.ssh.com/academy) topics:
+
+ - [What is SSH (Secure Shell)?](https://www.ssh.com/academy/ssh)
+ - [ssh-agent: How to configure ssh-agent, agent forwarding, & agent protocol](https://www.ssh.com/academy/ssh/agent)
+
+:::
+
+### Adding a Password Manager Program
+
+It is recommended to enhance the security of your `SSH` keys by protecting them with a password, which acts as an additional obstacle in the way of malicious parties aiming to obtain your sensitive information.
+
+A variety of password managers can be used to store user passwords and `SSH` keys temporarily. For the sake of clarity, [KeePass](https://keepass.info/) is used as an example password manager, specifically, the [KeePassXC](https://keepassxc.org/) port running on Linux-based operating systems.
+
+For instructions on how to set up KeePassXC see the [Configuring KeePassXC](#configuring-keepassxc) section below.
+
+KeePassXC offers enhanced security, flexibility, and control. It not only stores passwords but also the `SSH` keys. When used for key storage, this password manager provides the `ssh-agent` with the stored keys, which are then promptly removed from its memory once the KeePassXC window is closed.
+
+::: tip
+
+Theoretically, any of the KeePass ports [listed on the official website](https://keepass.info/download.html) can be utilized for key storage purposes.
+We recommend any of the following: [KeePassX](https://www.keepassx.org/) or [KeePassXC](https://keepassxc.org/).
+
+:::
+
+#### Configuring KeePassXC
+
+To configure KeePassXC, perform the following steps:
+
+1. Launch KeePassXC, then go to **Tools** > **Settings**, or select the **Gear** button from the top UI panel.
+
+2. In the **Application Settings** tab that appears, select **SSH Agent** from the left menu, and then select the **Enable SSH Agent integration** checkbox.
+
+ ::: info Show reference screenshot
+
+ 
+
+ :::
+
+3. Create a new KeePassXC Database. For instructions, see [KeePassXC User Guide > Creating Your First Database](https://keepassxc.org/docs/KeePassXC_UserGuide#_creating_your_first_database).
+
+4. For every key that you would like to store in the KeePassXC Database you created, perform the following steps:
+
+ - Add a new entry in the database. For instructions, see [KeePassXC User Guide > Creating Your First Database](https://keepassxc.org/docs/KeePassXC_UserGuide#_creating_your_first_database).
+
+ - When adding a new entry, attach the file containing the key by doing the following: select **Advanced** from the left menu, then select **Add** in the **Attachments** section, choose the required file in the **Select files** window that appears.
+
+ - When adding a new entry, select **SSH Agent** from the left menu, then select the key file you added from the **Attachment** menu in the **Private key** section; then select the following checkboxes:
+
+ - **Add key to agent when database is opened/unlocked**
+
+ - **Remove key from agent when database is closed/locked**
+
+ - **Require user confirmation when this key is used**
+
+ - If necessary, make other changes to the entry.
+
+ - When ready, select **OK** to save the entry.
+
+ ::: details Show reference screenshots
+
+ 
+
+ 
+
+ :::
+
+##### Expected Results
+
+- Cryptographic and `ssh` keys are stored as entries in a KeePassXC Database that can be accessed while the KeePassXC window is open.
+
+- Stored cryptographic and `ssh` keys can be used whenever they are required for authorization.
+
+- Stored cryptographic and `ssh` keys are removed from the `ssh-agent` once the KeePassXC window is closed.
+
+::: info Note
+
+Without enabling the **Require user confirmation when this key is used** option, the `ssh-agent` may not monitor the process that provided it with a key. In the event that the password manager process is terminated by malware or a system service through a `SIGKILL` signal, the key is likely to remain in the `ssh-agent`, as Unix system programs cannot intercept `SIGKILL`.
+
+:::
+
+## Storing Cryptographic Keys Physically
+
+For those who seek the highest level of offline security, the option of storing cryptographic keys physically ensures that the keys remain completely disconnected from digital networks, thus minimizing the risk of unauthorized access. Acknowledging the physical option underscores our commitment to catering to diverse security needs.
+
+### Using a Hardware Key
+
+Our team considers hardware keys to be one of the best safety measures. A hardware key—a compact device that connects via a USB port and has the size of a typical flash drive—only processes security-related events when it is connected to a machine. This allows you to easily disconnect the device in case of a security breach, or simply reconnect it to a different machine whenever it is required.
+
+However, since there are many brands of hardware keys—each with their unique APIs—it is important to research the market to find the key that best suits your needs.
+
+So far, our team has internally tested the [YubiKey 5C](https://www.yubico.com/il/product/yubikey-5c/) hardware key, which performed well in our evaluation and offers versatile API functionality.
+
+However, there is a potential drawback to consider. Implementing [HMAC challenge-response authentication](https://en.wikipedia.org/wiki/Challenge%E2%80%93response_authentication) and storing the corresponding _private_ key for the response could create a vulnerability: it might let attackers make educated guesses about the information stored in the YubiKey 5C's memory, compromising the overall security.
+
+Luckily, this vulnerability can be mitigated with an alternative approach: use the YubiKey 5C to securely unlock a KeePassXC database that stores your cryptographic and `ssh` keys. This method surpasses the security of most passwords, because even if the KeePassXC database is leaked, an attacker would still need physical possession of your hardware key.
+
+::: info
+
+To read more about _the method above_, see the answer by one of the KeePassXC developers—[Janek Bevendorff](https://github.com/phoerious)—to the following StackExchange question:
+
+[Is it reasonable to use KeePassXC with YubiKey?](https://security.stackexchange.com/questions/201345/is-it-reasonable-to-use-keepassxc-with-yubikey/258414#258414)
+
+:::
+
+### Using a Mnemonic Phrase
+
+Alternatively, you can memorize a private key as a series of words, known as a _mnemonic phrase_. This method, used in many wallets, requires remembering around 25 specific words. Most password managers, including the previously discussed KeePassXC, offer mnemonic passphrase generation.
diff --git a/src/guide/security/vpn.md b/src/guide/security/vpn.md
new file mode 100644
index 000000000..cb7a90a81
--- /dev/null
+++ b/src/guide/security/vpn.md
@@ -0,0 +1,120 @@
+# Virtual Private Networks
+
+A virtual private network (VPN) is a network control that limits who can
+reach Iroha services. It is most useful for private and consortium
+deployments where validators, application backends, and operators should
+communicate over private addresses instead of open internet routes.
+
+A VPN does not replace Iroha peer keys, account keys, permissions, firewall
+rules, monitoring, or secure key storage. Treat it as one layer in the
+deployment boundary: the VPN narrows network reachability, while Iroha
+configuration and governance decide which peers and accounts are trusted.
+
+## When to Use a VPN
+
+Use a VPN when:
+
+- validators are operated by different organizations or in different hosting
+ environments
+- Torii should only be reachable by application backends, operators, or trusted
+ clients
+- metrics, logs, SSH, or other administration endpoints must stay on a private
+ operator network
+- a test or staging network should resemble production access controls without
+ exposing public endpoints
+
+A VPN is not required for every deployment. Public networks may intentionally
+expose Torii through a public gateway, load balancer, or reverse proxy. Even in
+that case, keep validator peer-to-peer traffic and administration endpoints on a
+restricted network whenever possible.
+
+::: tip
+
+A browser VPN only protects traffic from that browser. It does not protect
+`irohad`, CLI, SDK, SSH, metrics, or backup traffic unless those processes are
+routed through the same private network.
+
+:::
+
+## Deployment Pattern
+
+For a private validator mesh, give every validator a stable VPN address or
+private DNS name. Configure peers so their advertised peer-to-peer addresses are
+reachable from the other validators over that network:
+
+```toml
+trusted_peers = [
+ "PUBLIC_KEY_1@10.20.0.11:1337",
+ "PUBLIC_KEY_2@10.20.0.12:1337",
+ "PUBLIC_KEY_3@10.20.0.13:1337",
+ "PUBLIC_KEY_4@10.20.0.14:1337",
+]
+
+[network]
+address = "10.20.0.11:1337"
+public_address = "10.20.0.11:1337"
+
+[torii]
+address = "10.20.0.11:8080"
+```
+
+Use the address assigned to the current peer in `network.address` and
+`network.public_address`. Each peer should list the same trusted peer identities,
+but with addresses that are reachable from its own VPN route table.
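+
+For example, the second validator in the mesh above would keep the same
+`trusted_peers` list but advertise its own VPN address (a sketch, assuming the
+peer at `10.20.0.12`):
+
+```toml
+[network]
+address = "10.20.0.12:1337"
+public_address = "10.20.0.12:1337"
+
+[torii]
+address = "10.20.0.12:8080"
+```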
+
+Client and CLI configurations should point at a Torii endpoint reachable through
+the VPN or through a controlled internal gateway:
+
+```toml
+torii_url = "http://10.20.0.11:8080"
+```
+
+If Torii must be available outside the VPN, put it behind a reverse proxy or
+load balancer that provides TLS, authentication, rate limiting, and logging.
+Avoid exposing raw peer-to-peer ports or administration endpoints directly to the
+public internet.
+
+## Firewall Rules
+
+Use host and cloud firewall rules even when a VPN is present:
+
+| Service | Recommended access |
+| --- | --- |
+| Peer-to-peer port | Other validator VPN addresses only |
+| Torii | Application backends, operators, or trusted client VPN ranges |
+| Metrics and health checks | Monitoring systems on the operator network |
+| SSH and administration | Bastion host, privileged operator VPN range, or break-glass process |
+| Backups and storage replication | Backup systems on a private network |
+
+Default-deny rules are easier to audit than broad allow rules. When a new peer
+joins the network, update the VPN membership, firewall allow list, and Iroha
+trusted peer configuration as one coordinated change.
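+
+As an illustration, a default-deny host policy with narrow VPN allowances
+could look like the following `ufw` sketch. The ranges and ports are
+assumptions matching the example mesh above, not required values:
+
+```bash
+# Deny all inbound traffic by default, then allow only what is needed.
+sudo ufw default deny incoming
+sudo ufw default allow outgoing
+
+# Peer-to-peer: other validator VPN addresses only.
+sudo ufw allow from 10.20.0.0/24 to any port 1337 proto tcp
+
+# Torii: an assumed application backend range on the VPN.
+sudo ufw allow from 10.20.1.0/24 to any port 8080 proto tcp
+
+sudo ufw enable
+```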
+
+## Operational Checklist
+
+- Choose an audited and actively maintained VPN implementation, such as
+ WireGuard, IPsec, or an organization-approved managed private network.
+- Use unique VPN credentials for each host and operator. Do not share VPN keys
+ between validators.
+- Keep VPN credentials separate from Iroha private keys and genesis signing
+ material.
+- Monitor VPN latency, packet loss, reconnects, and route changes. Consensus is
+ sensitive to sustained network instability.
+- Test the effective MTU. Packet fragmentation can look like intermittent peer
+ or Torii failures.
+- Document which VPN ranges are allowed to reach peer-to-peer, Torii, metrics,
+ SSH, and backup endpoints.
+- Rotate VPN credentials when a host, operator account, or organization leaves
+ the network.
+- Avoid a single VPN gateway as the only route between validators. Plan
+ redundant gateways or site-to-site routes for production networks.
+- Include VPN failures in incident response drills so operators know when to
+ distinguish a network partition from an Iroha process failure.
+
+## Related Pages
+
+- [Security Principles](/guide/security/security-principles.md)
+- [Operational Security](/guide/security/operational-security.md)
+- [Keys for Network Deployment](/guide/configure/keys-for-network-deployment.md)
+- [Peer Management](/guide/configure/peer-management.md)
+- [Peer Configuration Reference](/reference/peer-config/index.md)
diff --git a/src/guide/tutorials/index.md b/src/guide/tutorials/index.md
new file mode 100644
index 000000000..912bf0dae
--- /dev/null
+++ b/src/guide/tutorials/index.md
@@ -0,0 +1,49 @@
+# SDK Tutorials
+
+These pages summarize the current Iroha 3 client entry points shipped from the
+main workspace. The SDK surface is evolving quickly, so this section focuses on
+the canonical package names, installation paths, and minimal starting points
+from the upstream repository.
+
+## Recommended Order
+
+1. [Install Iroha 3](/get-started/install-iroha-2.md)
+2. [Launch Iroha 3](/get-started/launch-iroha-2.md)
+3. Pick an SDK:
+ - [Rust](/guide/tutorials/rust.md)
+ - [Python](/guide/tutorials/python.md)
+ - [JavaScript / TypeScript](/guide/tutorials/javascript.md)
+ - [Android, Kotlin, and Java](/guide/tutorials/kotlin-java.md)
+ - [Swift and iOS](/guide/tutorials/swift.md)
+4. Review the [sample apps](/guide/tutorials/sample-apps.md) when you want a
+ complete client application reference.
+5. Use [Embed Kaigi](/guide/tutorials/kaigi.md) when you want to add
+ wallet-backed audio/video meetings to your own app.
+6. Use [Musubi packages](/guide/tutorials/musubi.md) when you need reusable
+ Kotodama source libraries with pinned on-chain registry dependencies.
+
+## Sample Apps
+
+We maintain sample applications for JavaScript desktop, Android, and iOS client
+flows. The JavaScript demo is the most complete external reference. Swift/iOS
+examples exist in the upstream workspace under `examples/ios/`, but their
+checked-in project manifests are currently out of sync with the package API and
+dependency layout. The external mobile point demos are useful mostly for layout
+and historical context.
+
+- [Sample apps overview](/guide/tutorials/sample-apps.md)
+- [Embed Kaigi in a JavaScript app](/guide/tutorials/kaigi.md)
+
+## Source of Truth
+
+All SDK pages here are derived from the current upstream workspace:
+
+- `crates/iroha`
+- `python/iroha_python`
+- `javascript/iroha_js`
+- `java/iroha_android`
+- `IrohaSwift`
+- `crates/musubi`
+
+When in doubt, prefer the README and package metadata in those directories over
+older Iroha 2-era examples.
diff --git a/src/guide/tutorials/javascript.md b/src/guide/tutorials/javascript.md
new file mode 100644
index 000000000..c20899bcb
--- /dev/null
+++ b/src/guide/tutorials/javascript.md
@@ -0,0 +1,116 @@
+# JavaScript and TypeScript
+
+The current JavaScript SDK is published as `@iroha/iroha-js`. It is the
+Node.js-first SDK for Torii, Norito builders, signing, pagination, Connect
+previews, offline readiness, and QR stream workflows.
+
+## Install
+
+```bash
+npm install @iroha/iroha-js
+npm run build:native
+```
+
+The native build wraps `cargo build -p iroha_js_host` and records the
+platform-specific checksum used at SDK startup. Run it after installing the
+Rust toolchain from the upstream workspace. The package is ESM-only; from
+CommonJS, use dynamic `import()`.
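+
+For example, a CommonJS module can load the ESM-only package like this:
+
+```js
+// CommonJS cannot use a static `import`; load the ESM package dynamically.
+async function makeTorii() {
+  const { ToriiClient } = await import("@iroha/iroha-js/torii");
+  return new ToriiClient("http://127.0.0.1:8080", { authToken: "dev-token" });
+}
+
+module.exports = { makeTorii };
+```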
+
+## Working from Source
+
+When using the workspace checkout directly:
+
+```bash
+cd javascript/iroha_js
+npm install
+npm run build:native
+npm run build:dist
+```
+
+Set `IROHA_JS_NATIVE_DIR` only for tests that need to point at an alternate
+`native/` directory. Normal applications should use the packaged native build.
+
+## Quickstart
+
+```js
+import { ToriiClient } from "@iroha/iroha-js/torii";
+import { generateKeyPair } from "@iroha/iroha-js/crypto";
+
+const torii = new ToriiClient("http://127.0.0.1:8080", {
+ authToken: "dev-token",
+});
+
+const keys = generateKeyPair();
+console.log(keys.publicKey);
+```
+
+## Try Taira Read-Only
+
+Use built-in `fetch` in Node.js 18+ to probe Taira before adding signing and
+Norito transaction code:
+
+```js
+const root = "https://taira.sora.org";
+
+const status = await fetch(`${root}/status`).then((res) => res.json());
+console.log({
+ blocks: status.blocks,
+ queueSize: status.queue_size,
+ peers: status.peers,
+});
+
+const domains = await fetch(`${root}/v1/domains?limit=5`).then((res) =>
+ res.json(),
+);
+console.log(domains.items.map((domain) => domain.id));
+
+const assets = await fetch(`${root}/v1/assets/definitions?limit=5`).then((res) =>
+ res.json(),
+);
+for (const asset of assets.items) {
+ console.log(asset.id, asset.name, asset.total_quantity);
+}
+```
+
+Save it as `taira-readonly.mjs`, then run it:
+
+```bash
+node taira-readonly.mjs
+```
+
+Move to signed SDK calls only after these read-only checks work. Public Taira
+can temporarily return a saturated queue or gateway error, so keep live-network
+tests opt-in in CI.
+
+Useful subpath imports:
+
+```js
+import { ToriiClient } from "@iroha/iroha-js/torii";
+import { noritoEncodeInstruction } from "@iroha/iroha-js/norito";
+import { generateKeyPair } from "@iroha/iroha-js/crypto";
+```
+
+Offline QR stream helpers are exported from the package root:
+
+```js
+import { OfflineQrStream } from "@iroha/iroha-js";
+```
+
+For browser-only Connect bootstrap, use `@iroha/iroha-js/connect-browser`
+instead of importing the Node-first `ToriiClient` surface.
+
+## Current Coverage
+
+The SDK focuses on:
+
+- Torii HTTP and WebSocket helpers
+- Norito transaction and instruction builders
+- Ed25519 signing and key generation
+- pagination and retry helpers
+- Connect browser bootstrap helpers
+- Offline V2 readiness and QR stream tooling
+
+## Upstream References
+
+- `javascript/iroha_js/README.md`
+- `javascript/iroha_js/package.json`
diff --git a/src/guide/tutorials/kaigi.md b/src/guide/tutorials/kaigi.md
new file mode 100644
index 000000000..1f57826ac
--- /dev/null
+++ b/src/guide/tutorials/kaigi.md
@@ -0,0 +1,687 @@
+# Embed Kaigi in a JavaScript App
+
+Kaigi lets an application create wallet-backed one-to-one audio/video meetings
+whose lifecycle is recorded through Iroha. The browser still handles media with
+WebRTC, while Torii and the Kaigi instructions provide the durable meeting
+record, encrypted signaling metadata, private roster support, and usage events.
+
+This tutorial shows the minimal integration pattern used by the
+[Iroha Demo JavaScript](https://github.com/soramitsu/iroha-demo-javascript)
+app:
+
+- the renderer creates WebRTC offers and answers
+- an application bridge signs and submits Kaigi transactions
+- compact invite links carry only the call ID and invite secret
+- the host watches Torii for encrypted participant answers
+
+The examples use TypeScript and are written so they can run in Electron, a
+browser with a secure backend, or a web app with a wallet extension. Keep
+private keys outside untrusted renderer code in production.
+
+## Prerequisites
+
+You need:
+
+- a Kaigi-capable Torii endpoint
+- an account for the host and an account for the guest
+- access to each account's signing key through a secure app bridge or wallet
+- browser camera/microphone permissions
+- Node.js 20+ if you are using the JavaScript demo or native
+ `@iroha/iroha-js` binding directly
+
+For a complete working reference, clone the demo beside an Iroha source
+checkout:
+
+```bash
+git clone https://github.com/soramitsu/iroha-demo-javascript.git
+cd iroha-demo-javascript
+npm install
+npm run dev
+```
+
+Use the demo with
+[`@iroha/iroha-js`](https://github.com/hyperledger-iroha/iroha/tree/i23-features/javascript/iroha_js)
+from the Iroha `i23-features` branch. If the native binding changes, rebuild it:
+
+```bash
+(cd node_modules/@iroha/iroha-js && npm run build:native)
+```
+
+Before running a live meeting on TAIRA, check the public Torii surface that the
+demo depends on:
+
+```bash
+TAIRA=https://taira.sora.org
+curl -fsS "$TAIRA/health"
+curl -fsS "$TAIRA/v1/kaigi/relays"
+curl -fsS "$TAIRA/v1/kaigi/relays/health"
+```
+
+These commands verify that TAIRA is live and that Kaigi relay telemetry is
+available. They do not submit Kaigi transactions. A real `CreateKaigi` or
+`JoinKaigi` test needs funded TAIRA accounts and signing through the demo's
+bridge or another wallet-backed bridge.
+
+## Architecture
+
+Keep the Kaigi integration split into three layers:
+
+| Layer | Responsibility |
+| --- | --- |
+| UI | account selection, meeting form, invite link display, media controls |
+| WebRTC | `RTCPeerConnection`, local media, offer and answer descriptions |
+| Iroha bridge | signing, `CreateKaigi`, `JoinKaigi`, `EndKaigi`, signal polling |
+
+The app bridge can be an Electron preload API, a wallet extension, or a backend
+endpoint. It should expose a small surface to the UI:
+
+```ts
+type KaigiMeetingPrivacy = "private" | "transparent";
+type KaigiPeerIdentityReveal = "Hidden" | "RevealAfterJoin";
+
+type KaigiSignalKeyPair = {
+ publicKeyBase64Url: string;
+ privateKeyBase64Url: string;
+};
+
+type KaigiDescription = {
+ type: "offer" | "answer";
+ sdp: string;
+};
+
+type KaigiMeeting = {
+ callId: string;
+ meetingCode: string;
+ title?: string;
+ hostAccountId?: string;
+ hostDisplayName?: string;
+ hostParticipantId?: string;
+ hostKaigiPublicKeyBase64Url: string;
+ scheduledStartMs: number;
+ expiresAtMs: number;
+ live: boolean;
+ ended: boolean;
+ privacyMode: KaigiMeetingPrivacy;
+ peerIdentityReveal: KaigiPeerIdentityReveal;
+ rosterRootHex: string;
+ offerDescription: { type: "offer"; sdp: string };
+};
+
+type KaigiSignal = {
+ entrypointHash: string;
+ callId: string;
+ participantId: string;
+ participantName: string;
+ createdAtMs: number;
+ answerDescription: { type: "answer"; sdp: string };
+};
+
+type KaigiBridge = {
+ generateKaigiSignalKeyPair(): KaigiSignalKeyPair;
+
+ createKaigiMeeting(input: {
+ toriiUrl: string;
+ chainId: string;
+ hostAccountId: string;
+ callId: string;
+ title?: string;
+ scheduledStartMs: number;
+ meetingCode: string;
+ inviteSecretBase64Url: string;
+ hostDisplayName: string;
+ hostParticipantId: string;
+ hostKaigiPublicKeyBase64Url: string;
+ offerDescription: { type: "offer"; sdp: string };
+ privacyMode: KaigiMeetingPrivacy;
+ peerIdentityReveal: KaigiPeerIdentityReveal;
+ }): Promise<{ hash: string }>;
+
+ getKaigiCall(input: {
+ toriiUrl: string;
+ callId: string;
+ inviteSecretBase64Url: string;
+  }): Promise<KaigiMeeting>;
+
+ joinKaigiMeeting(input: {
+ toriiUrl: string;
+ chainId: string;
+ participantAccountId: string;
+ callId: string;
+ hostAccountId?: string;
+ hostKaigiPublicKeyBase64Url: string;
+ participantId: string;
+ participantName: string;
+ walletIdentity?: string;
+ roomId: string;
+ privacyMode: KaigiMeetingPrivacy;
+ rosterRootHex: string;
+ answerDescription: { type: "answer"; sdp: string };
+ }): Promise<{ hash: string }>;
+
+ pollKaigiMeetingSignals(input: {
+ toriiUrl: string;
+ accountId: string;
+ callId: string;
+ hostKaigiKeys: KaigiSignalKeyPair;
+ afterTimestampMs?: number;
+  }): Promise<KaigiSignal[]>;
+
+ watchKaigiCallEvents(
+ input: { toriiUrl: string; callId: string },
+    onEvent: (event: { kind: string; callId: string }) => void | Promise<void>,
+  ): Promise<string>;
+
+ endKaigiMeeting(input: {
+ toriiUrl: string;
+ chainId: string;
+ hostAccountId: string;
+ callId: string;
+ endedAtMs?: number;
+ }): Promise<{ hash: string }>;
+};
+```
+
+In the demo app, these bridge methods are implemented with
+`@iroha/iroha-js`, local signing, encrypted Kaigi metadata, and Torii calls.
+
+## Invite Helpers
+
+Use Torii-compatible call IDs in the `domain.dataspace:meeting` form. The demo
+uses `kaigi.universal:<meeting-id>` call IDs for generated meetings.
+
+```ts
+const KAIGI_WINDOW_MS = 24 * 60 * 60 * 1000;
+
+const base64Url = (bytes: Uint8Array): string =>
+ btoa(String.fromCharCode(...bytes))
+ .replace(/\+/g, "-")
+ .replace(/\//g, "_")
+ .replace(/=+$/g, "");
+
+export function createInviteSecret(): string {
+ const bytes = new Uint8Array(24);
+ crypto.getRandomValues(bytes);
+ return base64Url(bytes);
+}
+
+export function createMeetingCode(): string {
+ const bytes = new Uint8Array(8);
+ crypto.getRandomValues(bytes);
+ return base64Url(bytes).toLowerCase();
+}
+
+export function buildKaigiCallId(domain: string, meetingCode: string): string {
+ const qualifiedDomain = domain.includes(".") ? domain : `${domain}.universal`;
+ const safeCode = meetingCode
+ .toLowerCase()
+ .replace(/[^a-z0-9-]+/g, "-")
+ .replace(/^-|-$/g, "");
+ return `${qualifiedDomain}:kaigi-${safeCode || "meeting"}`;
+}
+
+export function buildInviteLink(input: {
+ callId: string;
+ inviteSecretBase64Url: string;
+}): string {
+ const call = encodeURIComponent(input.callId);
+ const secret = encodeURIComponent(input.inviteSecretBase64Url);
+ return `iroha://kaigi/join?call=${call}&secret=${secret}`;
+}
+
+export function parseInviteLink(link: string): {
+ callId: string;
+ inviteSecretBase64Url: string;
+} {
+ const url = new URL(link);
+ const callId = url.searchParams.get("call")?.trim();
+ const inviteSecretBase64Url = url.searchParams.get("secret")?.trim();
+ if (!callId || !inviteSecretBase64Url) {
+ throw new Error("Kaigi invite link is missing call or secret.");
+ }
+ return { callId, inviteSecretBase64Url };
+}
+```
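+
+A quick round trip through these helpers shows how the pieces fit together:
+
+```ts
+const callId = buildKaigiCallId("kaigi", createMeetingCode());
+const inviteSecretBase64Url = createInviteSecret();
+
+const link = buildInviteLink({ callId, inviteSecretBase64Url });
+// e.g. iroha://kaigi/join?call=kaigi.universal%3Akaigi-abc123&secret=...
+
+const parsed = parseInviteLink(link);
+console.assert(parsed.callId === callId);
+console.assert(parsed.inviteSecretBase64Url === inviteSecretBase64Url);
+```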
+
+## WebRTC Helpers
+
+The host creates an offer, stores it through `CreateKaigi`, and keeps the
+window open so it can apply the guest's answer. The guest fetches the encrypted
+offer, creates an answer, and posts that answer with `JoinKaigi`.
+
+```ts
+const rtcConfig: RTCConfiguration = {
+ iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
+};
+
+export async function openLocalMedia(): Promise<MediaStream> {
+ return navigator.mediaDevices.getUserMedia({
+ audio: true,
+ video: {
+ width: { ideal: 1280 },
+ height: { ideal: 720 },
+ frameRate: { ideal: 24, max: 30 },
+ },
+ });
+}
+
+export function createPeer(localStream: MediaStream): RTCPeerConnection {
+ const peer = new RTCPeerConnection(rtcConfig);
+ for (const track of localStream.getTracks()) {
+ peer.addTrack(track, localStream);
+ }
+ return peer;
+}
+
+async function waitForIceGathering(peer: RTCPeerConnection): Promise<void> {
+ if (peer.iceGatheringState === "complete") {
+ return;
+ }
+  await new Promise<void>((resolve) => {
+ const done = () => {
+ if (peer.iceGatheringState === "complete") {
+ peer.removeEventListener("icegatheringstatechange", done);
+ resolve();
+ }
+ };
+ peer.addEventListener("icegatheringstatechange", done);
+ });
+}
+
+export async function createOfferDescription(
+ peer: RTCPeerConnection,
+): Promise<{ type: "offer"; sdp: string }> {
+ const offer = await peer.createOffer();
+ await peer.setLocalDescription(offer);
+ await waitForIceGathering(peer);
+ const local = peer.localDescription;
+ if (!local?.sdp || local.type !== "offer") {
+ throw new Error("WebRTC offer was not created.");
+ }
+ return { type: "offer", sdp: local.sdp };
+}
+
+export async function createAnswerDescription(
+ peer: RTCPeerConnection,
+ offer: { type: "offer"; sdp: string },
+): Promise<{ type: "answer"; sdp: string }> {
+ await peer.setRemoteDescription(offer);
+ const answer = await peer.createAnswer();
+ await peer.setLocalDescription(answer);
+ await waitForIceGathering(peer);
+ const local = peer.localDescription;
+ if (!local?.sdp || local.type !== "answer") {
+ throw new Error("WebRTC answer was not created.");
+ }
+ return { type: "answer", sdp: local.sdp };
+}
+```
+
+Attach the streams to your UI with ordinary video elements:
+
+```ts
+export function attachKaigiMedia(input: {
+ peer: RTCPeerConnection;
+ localStream: MediaStream;
+ localVideo: HTMLVideoElement;
+ remoteVideo: HTMLVideoElement;
+}): void {
+ input.localVideo.srcObject = input.localStream;
+
+ const remoteStream = new MediaStream();
+ input.remoteVideo.srcObject = remoteStream;
+
+ input.peer.addEventListener("track", (event) => {
+ if (event.streams[0]) {
+ input.remoteVideo.srcObject = event.streams[0];
+ return;
+ }
+ remoteStream.addTrack(event.track);
+ });
+}
+```
+
+## Host: Create a Meeting Link
+
+The host flow:
+
+1. open camera and microphone
+2. create a Kaigi signal key pair
+3. create a WebRTC offer
+4. submit `CreateKaigi`
+5. share a compact invite link
+
+```ts
+type AccountContext = {
+ accountId: string;
+ displayName: string;
+};
+
+type KaigiContext = {
+ bridge: KaigiBridge;
+ toriiUrl: string;
+ chainId: string;
+};
+
+export async function hostKaigiMeeting(input: {
+ context: KaigiContext;
+ account: AccountContext;
+ title?: string;
+ privacyMode?: KaigiMeetingPrivacy;
+}): Promise<{
+ callId: string;
+ inviteLink: string;
+ peer: RTCPeerConnection;
+ localStream: MediaStream;
+ hostKaigiKeys: KaigiSignalKeyPair;
+ createdAtMs: number;
+}> {
+ const { bridge, toriiUrl, chainId } = input.context;
+ const privacyMode = input.privacyMode ?? "private";
+ const scheduledStartMs = Date.now();
+ const meetingCode = createMeetingCode();
+ const callId = buildKaigiCallId("kaigi", meetingCode);
+ const inviteSecretBase64Url = createInviteSecret();
+ const hostKaigiKeys = bridge.generateKaigiSignalKeyPair();
+
+ const localStream = await openLocalMedia();
+ const peer = createPeer(localStream);
+ const offerDescription = await createOfferDescription(peer);
+
+ await bridge.createKaigiMeeting({
+ toriiUrl,
+ chainId,
+ hostAccountId: input.account.accountId,
+ callId,
+ title: input.title,
+ scheduledStartMs,
+ meetingCode,
+ inviteSecretBase64Url,
+ hostDisplayName: input.account.displayName,
+ hostParticipantId: "host",
+ hostKaigiPublicKeyBase64Url: hostKaigiKeys.publicKeyBase64Url,
+ offerDescription,
+ privacyMode,
+ peerIdentityReveal: "Hidden",
+ });
+
+ return {
+ callId,
+ inviteLink: buildInviteLink({ callId, inviteSecretBase64Url }),
+ peer,
+ localStream,
+ hostKaigiKeys,
+ createdAtMs: scheduledStartMs,
+ };
+}
+```
+
+Show `inviteLink` in your UI. The user can copy it, open it in another wallet,
+or convert it to an app route such as:
+
+```ts
+export function inviteRoute(inviteLink: string): string {
+ const invite = parseInviteLink(inviteLink);
+ return `/kaigi?call=${encodeURIComponent(invite.callId)}&secret=${encodeURIComponent(
+ invite.inviteSecretBase64Url,
+ )}`;
+}
+```
+
+## Guest: Join a Meeting
+
+The guest flow:
+
+1. parse the invite
+2. fetch the encrypted call offer from Torii
+3. create a WebRTC answer
+4. submit `JoinKaigi` with encrypted answer metadata
+
+```ts
+export async function joinKaigiMeetingFromInvite(input: {
+ context: KaigiContext;
+ account: AccountContext;
+ inviteLink: string;
+}): Promise<{
+ callId: string;
+ peer: RTCPeerConnection;
+ localStream: MediaStream;
+}> {
+ const { bridge, toriiUrl, chainId } = input.context;
+ const { callId, inviteSecretBase64Url } = parseInviteLink(input.inviteLink);
+
+ const meeting = await bridge.getKaigiCall({
+ toriiUrl,
+ callId,
+ inviteSecretBase64Url,
+ });
+
+ if (meeting.ended) {
+ throw new Error("This Kaigi meeting has already ended.");
+ }
+ if (Date.now() > meeting.expiresAtMs) {
+ throw new Error("This Kaigi invite has expired.");
+ }
+
+ const localStream = await openLocalMedia();
+ const peer = createPeer(localStream);
+ const answerDescription = await createAnswerDescription(
+ peer,
+ meeting.offerDescription,
+ );
+
+ await bridge.joinKaigiMeeting({
+ toriiUrl,
+ chainId,
+ participantAccountId: input.account.accountId,
+ callId: meeting.callId,
+ hostAccountId: meeting.hostAccountId,
+ hostKaigiPublicKeyBase64Url: meeting.hostKaigiPublicKeyBase64Url,
+ participantId: "guest",
+ participantName: input.account.displayName,
+ roomId: meeting.callId,
+ privacyMode: meeting.privacyMode,
+ rosterRootHex: meeting.rosterRootHex,
+ answerDescription,
+ });
+
+ return { callId: meeting.callId, peer, localStream };
+}
+```
+
+If the meeting is transparent, you can include a wallet display string in the
+join request. For private meetings, keep `walletIdentity` unset unless the user
+explicitly chooses to reveal it.
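+
+A small helper keeps that decision in one place. This is a sketch:
+`userOptedToReveal` is an assumed UI flag, not part of the bridge surface:
+
+```ts
+export function chooseWalletIdentity(input: {
+  privacyMode: KaigiMeetingPrivacy;
+  walletDisplayString: string;
+  userOptedToReveal: boolean;
+}): string | undefined {
+  // Transparent meetings may carry a wallet display string; private
+  // meetings reveal it only on an explicit user choice.
+  if (input.privacyMode === "transparent" || input.userOptedToReveal) {
+    return input.walletDisplayString;
+  }
+  return undefined;
+}
+```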
+
+## Host: Apply the Guest Answer
+
+After creating a live meeting, the host should watch Kaigi events and poll for
+encrypted answer signals. Apply the first valid answer to the host's peer
+connection.
+
+```ts
+export async function watchForKaigiAnswer(input: {
+ context: KaigiContext;
+ hostAccountId: string;
+ callId: string;
+ hostKaigiKeys: KaigiSignalKeyPair;
+ createdAtMs: number;
+ peer: RTCPeerConnection;
+ onParticipant?: (signal: KaigiSignal) => void;
+}): Promise<string | null> {
+ const { bridge, toriiUrl } = input.context;
+  const seenSignals = new Set<string>();
+ let lastSignalAtMs = input.createdAtMs;
+
+  const checkSignals = async (): Promise<boolean> => {
+ const signals = await bridge.pollKaigiMeetingSignals({
+ toriiUrl,
+ accountId: input.hostAccountId,
+ callId: input.callId,
+ hostKaigiKeys: input.hostKaigiKeys,
+ afterTimestampMs: lastSignalAtMs,
+ });
+
+ const next = signals.find(
+ (signal) => !seenSignals.has(signal.entrypointHash),
+ );
+ if (!next) {
+ return false;
+ }
+
+ seenSignals.add(next.entrypointHash);
+ lastSignalAtMs = Math.max(lastSignalAtMs, next.createdAtMs);
+ await input.peer.setRemoteDescription(next.answerDescription);
+ input.onParticipant?.(next);
+ return true;
+ };
+
+ if (await checkSignals()) {
+ return null;
+ }
+
+ return bridge.watchKaigiCallEvents(
+ { toriiUrl, callId: input.callId },
+ async (event) => {
+ if (event.kind !== "ended") {
+ await checkSignals();
+ }
+ },
+ );
+}
+```
+
+Store the returned subscription ID so your UI can stop the watcher when the
+host hangs up or navigates away.
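+
+A minimal shape for that bookkeeping, assuming your bridge also exposes a stop
+method (`stopWatchingKaigiCallEvents` is a hypothetical name; the bridge
+surface above does not define one):
+
+```ts
+// Hypothetical unsubscribe surface; use whatever your app bridge exposes.
+type StoppableKaigiBridge = {
+  stopWatchingKaigiCallEvents(subscriptionId: string): Promise<void>;
+};
+
+let kaigiWatchId: string | null = null;
+
+export function rememberKaigiWatch(subscriptionId: string | null): void {
+  kaigiWatchId = subscriptionId;
+}
+
+export async function stopKaigiWatch(bridge: StoppableKaigiBridge): Promise<void> {
+  if (kaigiWatchId !== null) {
+    await bridge.stopWatchingKaigiCallEvents(kaigiWatchId);
+    kaigiWatchId = null;
+  }
+}
+```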
+
+## End the Meeting
+
+End the call from the same host account that created it:
+
+```ts
+export async function endKaigi(input: {
+ context: KaigiContext;
+ hostAccountId: string;
+ callId: string;
+ peer?: RTCPeerConnection;
+ localStream?: MediaStream;
+}): Promise<void> {
+ input.peer?.close();
+ input.localStream?.getTracks().forEach((track) => track.stop());
+
+ await input.context.bridge.endKaigiMeeting({
+ toriiUrl: input.context.toriiUrl,
+ chainId: input.context.chainId,
+ hostAccountId: input.hostAccountId,
+ callId: input.callId,
+ endedAtMs: Date.now(),
+ });
+}
+```
+
+## Private Mode Funding
+
+Private Kaigi create, join, and end operations can require shielded XOR for the
+private entrypoint fee. Your app should catch that error and offer a
+self-shield action before retrying.
+
+```ts
+type PrivateKaigiFundingBridge = KaigiBridge & {
+ getPrivateKaigiConfidentialXorState(input: {
+ toriiUrl: string;
+ accountId: string;
+ }): Promise<{
+ shieldedBalance: string | null;
+ transparentBalance: string;
+ canSelfShield: boolean;
+ message?: string;
+ }>;
+
+ selfShieldPrivateKaigiXor(input: {
+ toriiUrl: string;
+ chainId: string;
+ accountId: string;
+ amount: string;
+ }): Promise<{ hash: string }>;
+};
+
+export async function selfShieldForPrivateKaigi(input: {
+  context: Omit<KaigiContext, "bridge"> & {
+ bridge: PrivateKaigiFundingBridge;
+ };
+ accountId: string;
+ amount: string;
+}): Promise<void> {
+ const { bridge, toriiUrl, chainId } = input.context;
+ const state = await bridge.getPrivateKaigiConfidentialXorState({
+ toriiUrl,
+ accountId: input.accountId,
+ });
+
+ if (!state.canSelfShield) {
+ throw new Error(
+ state.message || "This account cannot self-shield XOR for private Kaigi.",
+ );
+ }
+
+ await bridge.selfShieldPrivateKaigiXor({
+ toriiUrl,
+ chainId,
+ accountId: input.accountId,
+ amount: input.amount,
+ });
+}
+```
+
+In the demo, the UI prompts the user to self-shield and then retries the
+original create or join action.
+
+## Manual Fallback
+
+Automatic signaling depends on a live wallet, Kaigi-capable Torii routes, and
+proof generation in private mode. Keep a manual fallback for development and
+restricted environments:
+
+- if `CreateKaigi` fails, show a legacy invite containing the offer
+- if `JoinKaigi` fails, show a raw answer packet
+- let the host paste the answer packet and call `setRemoteDescription`
+
+Manual fallback is useful for debugging WebRTC, but it does not provide the
+same private on-chain signaling guarantees as the live Kaigi flow.
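+
+For the host side of that fallback, applying a pasted answer packet is a
+single `setRemoteDescription` call. This sketch assumes the packet is the
+JSON-serialized answer description:
+
+```ts
+export async function applyPastedAnswer(
+  peer: RTCPeerConnection,
+  packet: string,
+): Promise<void> {
+  const answer = JSON.parse(packet) as { type: string; sdp?: string };
+  if (answer.type !== "answer" || !answer.sdp) {
+    throw new Error("Pasted packet is not a WebRTC answer.");
+  }
+  await peer.setRemoteDescription({ type: "answer", sdp: answer.sdp });
+}
+```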
+
+## Test Checklist
+
+For unit tests, mock the bridge and assert that your UI passes the expected
+Kaigi payloads:
+
+- host creates local media and submits `createKaigiMeeting`
+- host displays an `iroha://kaigi/join?call=...&secret=...` invite
+- guest parses the invite, calls `getKaigiCall`, and submits
+ `joinKaigiMeeting`
+- host polls or watches for answer signals and applies the answer
+- private mode prompts for self-shielding when shielded XOR is missing
+- manual fallback appears when live signaling is unavailable
+
+For a full reference test suite, see the demo app's Kaigi view and preload
+bridge tests:
+
+```bash
+npm test -- tests/kaigiView.spec.ts tests/preloadKaigiBridge.spec.ts
+npm run e2e:ui
+```
+
+The UI smoke test verifies that the `/kaigi` route renders. A real media test
+still needs two funded wallets plus two windows or devices because transaction
+signing, camera, microphone, and WebRTC permissions vary by runtime.
+
+If you are testing against TAIRA and a call-specific route returns `404`, first
+confirm that the host wallet successfully submitted `CreateKaigi`. Relay health
+endpoints can be available before any particular call exists.
+
+## Next Steps
+
+- Add usage recording with `RecordKaigiUsage` when your app has reliable
+ session duration accounting.
+- Register and monitor relays through `/v1/kaigi/relays` when using relay
+ manifests.
+- Surface `KaigiRosterSummary`, `KaigiUsageSummary`, and
+ `KaigiRelayHealthUpdated` events in your operator dashboard.
diff --git a/src/guide/tutorials/kotlin-java.md b/src/guide/tutorials/kotlin-java.md
new file mode 100644
index 000000000..a1c9fc5f5
--- /dev/null
+++ b/src/guide/tutorials/kotlin-java.md
@@ -0,0 +1,102 @@
+# Android, Kotlin, and Java
+
+The current JVM-facing mobile SDK in the workspace is `IrohaAndroid`. It
+ships Android and JVM artifacts for Kotlin and Java applications.
+
+## Gradle Setup
+
+Point Gradle at the Maven repository that hosts the published artifacts and add
+the dependencies you need:
+
+```kotlin
+repositories {
+ google()
+ mavenCentral()
+ maven { url = uri("../../artifacts/android/maven") }
+}
+
+dependencies {
+ implementation("org.hyperledger.iroha:iroha-android:")
+}
+```
+
+The checked-in Android and JVM publication scripts currently use the
+`iroha-android` artifact ID. There is no separate `iroha-android-jvm` artifact
+ID in the source build.
+
+## Local Sample Build
+
+```bash
+./gradlew -p java/iroha_android :samples-android:assembleDebug \
+ -PirohaAndroidUsePublished=true \
+ -PirohaAndroidRepoDir=$PWD/../artifacts/android/maven
+```
+
+## Quickstart
+
+```java
+import org.hyperledger.iroha.android.address.AccountAddress;
+
+byte[] key = new byte[32];
+AccountAddress address = AccountAddress.fromAccount(key, "ed25519");
+System.out.println(address.canonicalHex());
+System.out.println(address.toI105(753));
+
+AccountAddress.DisplayFormats formats = address.displayFormats();
+System.out.println(formats.i105);
+System.out.println(formats.i105Warning);
+```
+
+## Try Taira Read-Only
+
+For a plain JVM smoke test, use Java's built-in HTTP client before adding SDK
+transaction signing:
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+
+public class TairaProbe {
+ public static void main(String[] args) throws Exception {
+ var client = HttpClient.newHttpClient();
+ var request = HttpRequest.newBuilder()
+ .uri(URI.create("https://taira.sora.org/status"))
+ .GET()
+ .build();
+
+ var response = client.send(request, HttpResponse.BodyHandlers.ofString());
+ System.out.println(response.statusCode());
+ System.out.println(response.body());
+ }
+}
+```
+
+Save it as `TairaProbe.java`, then run it with JDK 11 or newer:
+
+```bash
+javac TairaProbe.java
+java TairaProbe
+```
+
+Extend the same pattern to read `https://taira.sora.org/v1/domains?limit=5` or
+`https://taira.sora.org/v1/assets/definitions?limit=5`. Use the Android SDK
+for key handling and signed transactions after the read-only route is reachable.
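+
+For example, the domains route follows the same request shape:
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+
+public class TairaDomainsProbe {
+    public static void main(String[] args) throws Exception {
+        var client = HttpClient.newHttpClient();
+        var request = HttpRequest.newBuilder()
+            .uri(URI.create("https://taira.sora.org/v1/domains?limit=5"))
+            .GET()
+            .build();
+
+        var response = client.send(request, HttpResponse.BodyHandlers.ofString());
+        System.out.println(response.body());
+    }
+}
+```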
+
+## Current Coverage
+
+The Android/JVM SDK currently focuses on:
+
+- key management and secure-storage backends
+- Norito encoding via the shared Java implementation
+- Torii HTTP, streaming, and Norito RPC clients
+- offline note, QR, and subscription helpers
+- account address and multisig utilities
+- generated instruction helpers for NFT and RWA flows
+
+## Upstream References
+
+- `java/iroha_android/README.md`
+- `java/iroha_android/build.gradle.kts`
+- `java/iroha_android/samples-android`
diff --git a/src/guide/tutorials/musubi.md b/src/guide/tutorials/musubi.md
new file mode 100644
index 000000000..5cf5c8921
--- /dev/null
+++ b/src/guide/tutorials/musubi.md
@@ -0,0 +1,212 @@
+# Musubi Kotodama Packages
+
+Musubi is the package manager for Kotodama source packages. It gives
+developers a Cargo-like workflow for sharing composable Kotodama functions
+while keeping package identity tied to SORA and Iroha namespaces instead of
+a global first-come name table.
+
+Use Musubi when you need to:
+
+- publish reusable Kotodama source libraries
+- pin exact transitive source dependencies in `Musubi.lock`
+- reconstruct dependency source from verified SoraFS archive commitments
+- connect a package namespace to dapp contract aliases in the same
+ namespace
+- inspect, publish, yank, or alias packages through the on-chain registry
+
+## Package Names
+
+Canonical package ids use:
+
+```text
+namespace/package
+```
+
+Exact release references use:
+
+```text
+namespace/package@version
+```
+
+There is no leading `@` before a namespace. The `@` separator is reserved
+for the version suffix.
+
+The namespace segment matches the suffix used by Kotodama dapp contract
+aliases:
+
+| Package id | Related contract alias shape |
+| ------------------------- | ---------------------------- |
+| `universal/math` | `router::universal` |
+| `dex.universal/swap-core` | `router::dex.universal` |
+
+Namespaces have either `<name>` or `<sub>.<name>` form. When a
+package has a dapp link, Musubi checks that every linked contract alias
+uses the same namespace suffix as the package.
+
+## Manifest
+
+A package starts with `Musubi.toml`:
+
+```toml
+[package]
+namespace = "dex.universal"
+name = "swap-core"
+version = "0.1.0"
+
+[dependencies.math]
+package = "std.universal/math"
+version = "^1.0.0"
+
+[exports]
+functions = ["quote"]
+
+[dapp]
+namespace = "dex.universal"
+contracts = ["router::dex.universal"]
+```
+
+Dependencies may use exact versions, caret requirements, tilde
+requirements, wildcards such as `1.*`, or comparator lists such as
+`>=1.0.0,<2.0.0`.
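+
+For example, each requirement form is written the same way in the manifest.
+The package names below are illustrative, not real registry entries:
+
+```toml
+[dependencies.math]
+package = "std.universal/math"
+version = "^1.0.0"          # caret
+
+[dependencies.fmt]
+package = "std.universal/fmt"
+version = "~0.3"            # tilde
+
+[dependencies.codec]
+package = "std.universal/codec"
+version = "1.*"             # wildcard
+
+[dependencies.swap]
+package = "dex.universal/swap-core"
+version = ">=0.1.0,<0.2.0"  # comparator list
+```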
+
+`Musubi.lock` records the selected transitive graph from the on-chain
+registry. Each locked node stores its canonical package ref, selected
+requirement, SoraFS manifest digest, source archive hash, byte count, file
+count, exported functions, deterministic source archive plan, and
+dependency aliases. Short aliases are resolved before they enter the
+lockfile.
+
+## Local Workflow
+
+From the upstream Iroha workspace root, run Musubi through Cargo:
+
+```bash
+cargo run -p musubi -- init --namespace dex.universal --name swap-core --dapp
+cargo run -p musubi -- add std.universal/math --version '^1.0.0' --alias math
+cargo run -p musubi -- install --config client.toml
+cargo run -p musubi -- build src/lib.ko --manifest-out target/lib.contract.json
+cargo run -p musubi -- pack \
+ --car-out source.car \
+ --sorafs-manifest-out manifest.norito \
+ --source-plan-out source-plan.norito
+```
+
+Use `install --offline` to write an unresolved lockfile for exact-version
+dependencies without querying a node. Use `install --locked` in CI to
+reject a stale lockfile.
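+
+For example:
+
+```bash
+# Write an unresolved lockfile without querying a node (exact versions only).
+cargo run -p musubi -- install --offline
+
+# In CI: fail instead of rewriting a stale Musubi.lock.
+cargo run -p musubi -- install --config client.toml --locked
+```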
+
+`build` links cached dependency sources by rewriting calls such as
+`math::add()` to deterministic internal Kotodama function names. It rejects
+calls to functions that the dependency did not export. Musubi v1 libraries
+are function-only: dependency sources that contain state declarations,
+triggers, kotoba blocks, constants, or other non-function contract items
+are rejected.
+
+## Fetching Source Archives
+
+Musubi can fetch missing dependency sources while resolving or later
+through the cache subcommands:
+
+```bash
+cargo run -p musubi -- install --config client.toml --fetch \
+ --provider-payload math.payload
+
+cargo run -p musubi -- cache import math --source-root ../math
+cargo run -p musubi -- cache fetch math --provider-payload math.payload
+```
+
+Live gateway fetches use one or more SoraFS gateway provider specs:
+
+```bash
+cargo run -p musubi -- install --config client.toml --fetch \
+ --gateway-provider 'name=hot-a,provider-id=1111111111111111111111111111111111111111111111111111111111111111,base-url=https://gw.example,stream-token=BASE64,package=math'
+```
+
+Provider payload files and gateway providers are mutually exclusive for one
+fetch operation. If more than one locked package is missing, scope every
+gateway provider with `package=<alias>`,
+`package=<namespace/package>`, `package=<namespace/package@version>`, or
+`manifest=<64-hex SoraFS manifest digest>`.
+
+Gateway `base-url` and `privacy-url` values must use `https://` by default.
+Local test gateways can use `http://localhost`, `http://127.0.0.1`, or
+`http://[::1]` only with `--gateway-allow-insecure-localhost`. Stream
+tokens are runtime credentials and are not written into `Musubi.lock`.
+
+## Publishing
+
+`pack` computes the deterministic BLAKE3-256 source archive hash plus the
+source byte and file counts. When `--car-out`, `--sorafs-manifest-out`, or
+`--source-plan-out` is supplied, it also builds the deterministic SoraFS
+CAR payload, SoraFS manifest, and Musubi source archive plan from the same
+source file set.
+
+Use a dry run before publishing:
+
+```bash
+cargo run -p musubi -- publish --config client.toml --dry-run
+```
+
+Without `--dry-run`, `publish` writes default artifacts under
+`.musubi/dist/<namespace>/<package>/<version>/`, optionally uploads the
+manifest and payload through Torii's SoraFS storage-pin endpoint with
+`--upload`, registers the generated SoraFS pin, and submits
+`PublishMusubiRelease` through the configured Iroha client.
+
+Published releases must include:
+
+- a non-empty canonical source archive
+- a deterministic source archive plan
+- at least one exported Kotodama function
+- dependency records that do not select yanked releases
+- a dapp link, when present, whose contract aliases match the package
+ namespace
+
+## Registry Queries and Lifecycle
+
+Search and inspect the registry with:
+
+```bash
+cargo run -p musubi -- search swap --config client.toml
+cargo run -p musubi -- versions dex.universal/swap-core --config client.toml
+cargo run -p musubi -- alias resolve swap --config client.toml
+```
+
+Yanking hides a release from new resolution, but keeps existing lockfiles
+reproducible:
+
+```bash
+cargo run -p musubi -- yank dex.universal/swap-core@0.1.0 \
+ --reason "bad archive" \
+ --config client.toml \
+ --dry-run
+```
+
+Musubi avoids global name squatting by making `namespace/package` the
+canonical package name. Publishing into a namespace must be authorized by
+the same ownership or delegated permission model used for that Kotodama
+dapp namespace. Curated global short aliases are separate from package
+ownership: `SetMusubiShortAlias` requires the `CanSetMusubiShortAlias`
+permission, and the target package must already have at least one active
+release.
+
+## Iroha Surfaces
+
+Musubi uses first-class Iroha instructions and queries:
+
+| Surface | Purpose |
+| ---------------------------- | -------------------------------------------------- |
+| `PublishMusubiRelease` | Publish an immutable package release. |
+| `YankMusubiRelease` | Mark an existing release as yanked. |
+| `SetMusubiShortAlias` | Bind a curated global short alias to a package id. |
+| `AssertMusubiReleaseExists` | Require a concrete package version to exist. |
+| `FindMusubiReleaseByRef` | Fetch a release by exact package reference. |
+| `FindMusubiPackageVersions` | List versions for a package id. |
+| `FindMusubiPackageReleases` | List release summaries for a package id. |
+| `SearchMusubiPackages` | Search package summaries by namespace and text. |
+| `FindMusubiShortAliasByName` | Resolve a curated short alias. |
+
+Torii exposes the Musubi HTTP route family under `/v1/musubi/*`.
+Agent-facing MCP tools are exposed as `iroha.musubi.*` aliases. See
+[Torii endpoints](/reference/torii-endpoints.md) and
+[query reference](/reference/queries.md) for the broader API map.
diff --git a/src/guide/tutorials/python.md b/src/guide/tutorials/python.md
new file mode 100644
index 000000000..9fe2571ec
--- /dev/null
+++ b/src/guide/tutorials/python.md
@@ -0,0 +1,1389 @@
+# Python
+
+The Python SDK in the upstream workspace is `iroha-python`. It targets the
+current Torii and Norito surfaces. Treat it as a fast-moving preview SDK and
+pin the package version or source revision used by your integration.
+
+The read-only examples below were checked against public Taira at
+`https://taira.sora.org`. Mutating examples are transaction templates: they
+require a real Taira authority, private key, gas metadata, and any operator
+tokens required by the target route before they can be submitted.
+
+Use the examples in this order:
+
+| Stage | Run against public Taira? | What you need |
+| --- | --- | --- |
+| Read-only client calls | Yes | Python package plus network access |
+| Local signing and instruction builders | No network call until `submit()` | Native extension and your key material |
+| Mutating transactions and service calls | Only with your own funded account | Authority account, private key, chain ID, fee metadata, fee asset balance, and route tokens |
+| Connect frame codecs, crypto, and GPU helpers | Local only | Native extension; GPU helpers also need a CUDA-capable backend |
+
+## Install
+
+The package metadata name is `iroha-python`. Do not assume an unpinned PyPI
+install matches the live Taira network. Install a wheel or source checkout that
+was built from the same upstream revision your integration targets:
+
+```bash
+python -m pip install /path/to/iroha_python-*.whl
+```
+
+If your project consumes the upstream workspace directly, install the Python
+dependencies and build the native extension before running examples that use
+`Instruction`, `TransactionDraft`, signing, crypto, SoraFS native helpers, GPU
+helpers, or Connect frame codecs. Use the build command from the upstream
+`python/iroha_python/README.md`, then verify that the native exports load:
+
+```bash
+cd python/iroha_python
+python - <<'PY'
+from iroha_python import Instruction, generate_ed25519_keypair
+
+print(Instruction)
+print(generate_ed25519_keypair().public_key.hex())
+PY
+```
+
+If `create_torii_client` imports but `Instruction` or
+`generate_ed25519_keypair` fails, the pure Python package is available but the
+native extension is not.
+
+## Quickstart
+
+Start with public, read-only Taira endpoints:
+
+```python
+from iroha_python import (
+ create_torii_client,
+)
+
+client = create_torii_client("https://taira.sora.org")
+
+# Public reads do not need an authority or private key.
+status = client.request_json("GET", "/status", expected_status=(200,))
+accounts = client.list_accounts_typed(limit=5)
+
+print(status["build"]["version"])
+for account in accounts.items:
+ print(account.id)
+```
+
+## Shared Setup
+
+Use this setup for the mutating templates. Replace every placeholder with a
+Taira authority, private key, token, and asset/account IDs from your deployment
+before submitting.
+
+`authority` is the account that signs the transaction. `private_key` must match
+that account, `CHAIN_ID` must match the target network, and `TX_METADATA` must
+include the fee fields expected by the network. The placeholders below are
+intentionally invalid so they are not submitted by accident.
+
+```python
+from iroha_python import (
+ Ed25519KeyPair,
+ Instruction,
+ TransactionConfig,
+ TransactionDraft,
+ create_torii_client,
+)
+
+TORII_URL = "https://taira.sora.org"
+CHAIN_ID = "taira"
+AUTH_TOKEN = None
+
+# Replace these placeholders with the real signing keys for your accounts.
+alice_pair = Ed25519KeyPair.from_private_key(bytes.fromhex("<alice private key hex>"))
+bob_pair = Ed25519KeyPair.from_private_key(bytes.fromhex("<bob private key hex>"))
+
+# The authority string must identify the same account as the private key.
+alice = "<alice account id>"
+bob = "<bob account id>"
+
+ROSE_DEFINITION = "rose#wonderland"
+ROSE_ASSET = "<rose asset id owned by alice>"
+BADGE_NFT = "badge$wonderland"
+
+TX_METADATA = {
+ # Public Taira fee asset. Use the configured XOR asset on your network.
+ "gas_asset_id": "6TEAJqbb8oEPmLncoNiMRbLEK6tw",
+}
+
+client = create_torii_client(TORII_URL, auth_token=AUTH_TOKEN)
+
+
+def submit(*instructions):
+ # This is the network boundary: build, sign, submit, and wait for status.
+ return client.build_and_submit_transaction(
+ chain_id=CHAIN_ID,
+ authority=alice,
+ private_key=alice_pair.private_key,
+ instructions=list(instructions),
+ metadata=TX_METADATA,
+ wait=True,
+ )
+```
+
+`Instruction.*` calls only construct instruction payloads. `submit()` is the
+point where the SDK signs the transaction, sends it to Torii, and waits for a
+status.
+
+## Fees and Gas
+
+Write transactions need fee metadata and a funded fee asset balance. On Taira,
+the fee asset is funded by the public faucet and the transaction metadata must
+include `gas_asset_id`. On Minamoto, fees are paid with real XOR and the asset
+ID comes from that network's configuration.
+
+Fee metadata belongs on the transaction, not on individual instructions. The
+`submit()` helper above attaches `TX_METADATA` to every transaction it builds:
+
+```python
+TX_METADATA = {
+ # Taira expects the fee asset definition in transaction metadata.
+ "gas_asset_id": "6TEAJqbb8oEPmLncoNiMRbLEK6tw",
+}
+
+envelope, status = client.build_and_submit_transaction(
+ chain_id=CHAIN_ID,
+ authority=alice,
+ private_key=alice_pair.private_key,
+ # Fee metadata is attached to the transaction, not the instruction.
+ instructions=[Instruction.register_domain("wonderland")],
+ metadata=TX_METADATA,
+ wait=True,
+)
+```
+
+Before sending writes, make sure the authority account owns enough of the fee
+asset. The exact faucet and asset ID are network-specific; this is the Taira
+shape:
+
+```python
+FEE_ASSET_DEFINITION = "6TEAJqbb8oEPmLncoNiMRbLEK6tw"
+# The faucet returns the concrete account asset ID to check here.
+FEE_ASSET_ID = ""
+TX_METADATA = {"gas_asset_id": FEE_ASSET_DEFINITION}
+
+# Fail before submitting if the signer cannot pay gas.
+fee_assets = client.list_account_assets_typed(
+ alice,
+ limit=10,
+ asset_id=FEE_ASSET_ID,
+)
+if not fee_assets.items:
+ raise RuntimeError("fund the authority account with the Taira fee asset first")
+```
+
+The faucet returns the concrete `asset_id` to use for the balance check. The
+`gas_asset_id` metadata field uses the fee asset definition ID.
+
+Keep application metadata separate from fee metadata by merging the mappings
+when you build a transaction:
+
+```python
+APP_METADATA = {"source": "python-docs"}
+# Merge app metadata with required fee metadata before building the draft.
+metadata = {**TX_METADATA, **APP_METADATA}
+
+draft = TransactionDraft(
+ TransactionConfig(
+ chain_id=CHAIN_ID,
+ authority=alice,
+ metadata=metadata,
+ )
+)
+```
+
+If you omit fee metadata, use the wrong fee asset, or sign with an unfunded
+account, a real network should reject the transaction even if the instruction
+payload is otherwise valid.
+
+## Taira-Checked Read-Only Calls
+
+These calls returned successfully against public Taira:
+
+```python
+client = create_torii_client("https://taira.sora.org")
+
+# Use raw requests for endpoints that do not need a typed wrapper.
+status = client.request_json("GET", "/status", expected_status=(200,))
+parameters = client.request_json("GET", "/v1/parameters", expected_status=(200,))
+
+# Typed helpers parse pagination and records into dataclasses.
+accounts = client.list_accounts_typed(limit=1)
+domains = client.list_domains_typed(limit=1)
+definitions = client.query_asset_definitions_typed(limit=1)
+
+# These calls inspect live node subsystems without mutating state.
+time_now = client.get_time_now_typed()
+time_status = client.get_time_status_typed()
+sumeragi = client.get_sumeragi_status_typed()
+connect = client.get_connect_status_typed()
+
+print(status["build"]["version"])
+print(parameters["sumeragi"]["block_time_ms"])
+print(accounts.total, domains.total, definitions.total)
+print(time_now.now_ms, len(time_status.samples), sumeragi.leader_index)
+print(connect.enabled, connect.sessions_active)
+```
+
+Routes such as `/v1/status`, public peer inventory, Sumeragi RBC sampling, node
+admin snapshots, and Connect app registry administration were not publicly
+available on Taira during the check. Use `request_json("GET", "/status")` for
+the public node status payload on Taira.
+
+## Instruction Builders
+
+The SDK exposes typed builders for the most common instruction families and a
+JSON escape hatch for variants that are not first-class Python methods yet.
+The following snippets are mutating transaction templates; they were not
+submitted to public Taira, which would require a funded signing account.
+
+Prefer typed helpers when they exist: they normalize Python values and fail
+early on invalid shapes. Use `Instruction.from_json` only when you need an
+instruction variant that does not have a Python helper yet, as sketched after
+the table below.
+
+| Instruction family | Python surface |
+| --- | --- |
+| Register | `register_domain`, `register_account`, `register_asset_definition_numeric`, `register_rwa`, `register_time_trigger`, `register_precommit_trigger` |
+| Unregister | `unregister_trigger`; use `Instruction.from_json` for other variants |
+| Mint/Burn | `mint_asset_numeric`, `burn_asset_numeric`, `mint_trigger_repetitions`, `burn_trigger_repetitions` |
+| Transfer | `transfer_asset_numeric`, `transfer_domain`, `transfer_asset_definition`, `transfer_nft`, `transfer_rwa`, `force_transfer_rwa` |
+| Metadata and controls | `set_account_key_value`, `remove_account_key_value`, `set_rwa_controls`, `set_rwa_key_value`, `remove_rwa_key_value` |
+| RWA lifecycle | `merge_rwas`, `redeem_rwa`, `freeze_rwa`, `unfreeze_rwa`, `hold_rwa`, `release_rwa` |
+| ExecuteTrigger | `execute_trigger` |
+| Repo/settlement extensions | `repo_initiate`, `repo_unwind`, `repo_margin_call`, `settlement_dvp`, `settlement_pvp` |
+| Grant/Revoke, SetParameter, Log, Custom, Upgrade, and less common register/unregister variants | `Instruction.from_json` or `TransactionBuilder.add_instruction_json` with canonical `InstructionBox` JSON |
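+
+As an escape-hatch sketch, a permission grant could be passed as canonical
+JSON. The exact `InstructionBox` shape and the `from_json` argument type below
+are assumptions for illustration, not a verified payload; check the SDK and
+data model schema before using them:
+
+```python
+# Assumed canonical InstructionBox JSON for a Grant variant (illustrative).
+grant_json = {
+    "Grant": {
+        "Permission": {
+            "object": {"name": "CanSetParameters", "payload": None},
+            "destination": alice,
+        }
+    }
+}
+
+submit(Instruction.from_json(grant_json))
+```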
+
+### Register Domains, Accounts, and Assets
+
+Registration examples assume the signer has permission to create objects in
+the target domain. On a shared network such as Taira, use a domain and account
+namespace assigned to you.
+
+```python
+# Submit related registrations together when they share one authority.
+submit(
+ Instruction.register_domain("wonderland", {"environment": "dev"}),
+ Instruction.register_account(alice, {"display_name": "Alice"}),
+ Instruction.register_account(bob, {"display_name": "Bob"}),
+ Instruction.register_asset_definition_numeric(
+ ROSE_DEFINITION,
+ owner=alice,
+ scale=2,
+ mintable="Infinitely",
+ confidential_policy="TransparentOnly",
+ metadata={"symbol": "ROS"},
+ ),
+)
+```
+
+`mintable` accepts the `Infinitely`, `Once`, `Not`, or `Limited(n)` values
+defined by the data model. Omit `scale` for an unconstrained numeric asset.
+
+### Mint, Burn, and Transfer Assets
+
+These calls use an existing asset ID. Register the asset definition first, then
+build the concrete asset ID for the account that owns the asset.
+
+```python
+# Increase the account's asset balance.
+submit(Instruction.mint_asset_numeric(ROSE_ASSET, "100.00"))
+
+# Move part of the balance to another account.
+submit(Instruction.transfer_asset_numeric(ROSE_ASSET, "25.50", bob))
+
+# Decrease the remaining balance.
+submit(Instruction.burn_asset_numeric(ROSE_ASSET, "10.00"))
+```
+
+### Transfer Ownership
+
+Ownership transfers change who controls the domain, asset definition, or NFT.
+Use the current owner as the transaction authority.
+
+```python
+# The first argument is the current owner; the last is the new owner.
+submit(Instruction.transfer_domain(alice, "wonderland", bob))
+submit(Instruction.transfer_asset_definition(alice, ROSE_DEFINITION, bob))
+submit(Instruction.transfer_nft(alice, BADGE_NFT, bob))
+```
+
+### Set and Remove Metadata
+
+Metadata values must be JSON-serializable. When you use `TransactionDraft`, the
+authority in `TransactionConfig` becomes the default target account.
+
+```python
+# Values are encoded as JSON metadata under the target account.
+submit(
+ Instruction.set_account_key_value(
+ alice,
+ "profile",
+ {"display_name": "Alice", "tier": "operator"},
+ )
+)
+
+# Removing the key deletes the metadata entry from the account.
+submit(Instruction.remove_account_key_value(alice, "profile"))
+```
+
+The high-level draft helper targets the transaction authority by default:
+
+```python
+draft = TransactionDraft(
+ TransactionConfig(chain_id=CHAIN_ID, authority=alice, metadata=TX_METADATA)
+)
+# With a draft, account metadata methods default to the draft authority.
+draft.set_account_key_value("nickname", "Queen Alice")
+draft.remove_account_key_value("nickname")
+```
+
+### Real-World Assets
+
+RWA helpers use JSON-serializable payloads for asset-specific metadata,
+provenance, and controller policy. `register_rwa` does not accept an `id` or
+`owner`: the runtime generates the `RwaId`, and the transaction authority
+becomes the initial owner.
+
+```python
+draft = TransactionDraft(
+ TransactionConfig(chain_id=CHAIN_ID, authority=alice, metadata=TX_METADATA)
+)
+
+# Register the lot in a domain. Store business identifiers in primary_reference
+# or metadata, then query the generated RWA ID after the transaction commits.
+draft.register_rwa(
+    {
+        "domain": "commodities.universal",
+        "quantity": "100",
+        "spec": {"scale": 0},
+        "primary_reference": "warehouse-receipt-001",
+        "status": "active",
+        "metadata": {
+            "commodity": "copper",
+            "warehouse": "DXB-01",
+        },
+        "parents": [],
+        "controls": {
+            "controller_accounts": [alice],
+            "controller_roles": [],
+            "freeze_enabled": True,
+            "hold_enabled": True,
+            "force_transfer_enabled": True,
+            "redeem_enabled": True,
+        },
+    }
+)
+```
+
+After the registration transaction commits, use `FindRwas`, `/v1/rwas`, an RWA
+event, or the explorer route set to discover the generated ID:
+
+```python
+page = client.list_rwas_typed(limit=20, offset=0)
+
+for lot in page.items:
+    print(lot.id)
+```
+
+Subsequent operations use the generated `hash$domain` ID:
+
+```python
+registered_rwa_id = (
+ "0123456789abcdef0123456789abcdef"
+ "0123456789abcdef0123456789abcdef$commodities.universal"
+)
+
+draft = TransactionDraft(
+    TransactionConfig(chain_id=CHAIN_ID, authority=alice, metadata=TX_METADATA)
+)
+
+# Transfer, hold, release, freeze, and redeem model the lot lifecycle.
+draft.transfer_rwa(
+    registered_rwa_id,
+    quantity="10",
+    destination=bob,
+)
+draft.hold_rwa(registered_rwa_id, quantity="5")
+draft.release_rwa(registered_rwa_id, quantity="5")
+draft.freeze_rwa(registered_rwa_id)
+draft.unfreeze_rwa(registered_rwa_id)
+draft.redeem_rwa(registered_rwa_id, quantity="1")
+
+# RWA metadata and controls are separate from account metadata.
+draft.set_rwa_key_value(registered_rwa_id, "auditor", "alice")
+draft.remove_rwa_key_value(registered_rwa_id, "auditor")
+draft.set_rwa_controls(
+    registered_rwa_id,
+    {
+        "controller_accounts": [alice],
+        "controller_roles": [],
+        "freeze_enabled": True,
+        "hold_enabled": True,
+        "force_transfer_enabled": True,
+        "redeem_enabled": True,
+    },
+)
+
+# Merge consumes quantities from parent lots with the same domain and spec. The
+# child lot gets a generated ID.
+draft.merge_rwas(
+    {
+        "parents": [
+            {"rwa": registered_rwa_id, "quantity": "40"},
+            {
+                "rwa": "fedcba9876543210fedcba9876543210"
+                "fedcba9876543210fedcba9876543210$commodities.universal",
+                "quantity": "60",
+            },
+        ],
+        "primary_reference": "warehouse-receipt-003",
+        "status": "merged",
+        "metadata": {"merge_reason": "same custodian and quality grade"},
+    }
+)
+
+# Force transfer requires a configured controller and force_transfer_enabled.
+draft.force_transfer_rwa(
+    registered_rwa_id,
+    quantity="1",
+    destination=bob,
+)
+```
+
+Full transfers can change `owned_by` on the existing lot. Partial transfers and
+merges create generated child lots.
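+
+A hedged sketch of a full transfer, assuming the typed RWA records expose the
+remaining quantity as a `quantity` field:
+
+```python
+# Look up the lot, then transfer its entire remaining quantity so ownership
+# can move on the existing lot instead of spawning a child lot.
+lot = next(
+    item
+    for item in client.list_rwas_typed(limit=100).items
+    if item.id == registered_rwa_id
+)
+draft.transfer_rwa(registered_rwa_id, quantity=lot.quantity, destination=bob)
+```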
+
+### Triggers
+
+Use trigger registration helpers when the executable is another instruction
+sequence:
+
+```python
+# The trigger executable is just another instruction payload.
+reward = Instruction.mint_asset_numeric(ROSE_ASSET, "1")
+
+# Time triggers run on a schedule once registered.
+register_hourly = Instruction.register_time_trigger(
+ "hourly_reward",
+ alice,
+ [reward],
+ start_ms=1_800_000_000_000,
+ period_ms=3_600_000,
+ repeats=24,
+ metadata={"purpose": "docs"},
+)
+submit(register_hourly)
+
+# Precommit triggers run during the transaction pipeline.
+register_precommit = Instruction.register_precommit_trigger(
+ "precommit_reward",
+ alice,
+ [reward],
+ repeats=10,
+ metadata={"purpose": "pipeline test"},
+)
+submit(register_precommit)
+
+# Trigger execution and repetition changes are also transactions.
+submit(Instruction.execute_trigger("hourly_reward", args={"reason": "manual"}))
+submit(Instruction.mint_trigger_repetitions("hourly_reward", 5))
+submit(Instruction.burn_trigger_repetitions("hourly_reward", 1))
+submit(Instruction.unregister_trigger("hourly_reward"))
+```
+
+Torii also exposes REST helpers for trigger inventory:
+
+```python
+# Inventory helpers are reads; they do not unregister or execute triggers.
+registered = client.list_triggers_typed(limit=20)
+for trigger in registered.items:
+    print(trigger.id, trigger.authority)
+
+details = client.get_trigger_typed("precommit_reward")
+```
+
+Trigger inventory calls only read or inspect trigger records. Registration,
+execution, repetition changes, and unregistering are mutating operations.
+
+### Repo and Settlement Instructions
+
+Repo and bilateral-settlement helpers append domain-specific instruction
+variants without hand-crafting Norito payloads:
+
+```python
+from iroha_python import (
+    RepoCashLeg,
+    RepoCollateralLeg,
+    RepoGovernance,
+    SettlementAtomicity,
+    SettlementExecutionOrder,
+    SettlementLeg,
+    SettlementPlan,
+)
+
+config = TransactionConfig(
+    chain_id=CHAIN_ID,
+    authority=alice,
+    # Keep repo and settlement examples bounded by a short TTL.
+    ttl_ms=120_000,
+    metadata=TX_METADATA,
+)
+draft = TransactionDraft(config)
+
+# Each repo leg describes one side of the financing agreement.
+cash = RepoCashLeg(asset_definition_id="usd#wonderland", quantity="1000")
+collateral = RepoCollateralLeg(
+ asset_definition_id="bond#wonderland",
+ quantity="1050",
+ metadata={"isin": "ABC123"},
+)
+governance = RepoGovernance(haircut_bps=1500, margin_frequency_secs=86_400)
+
+# Domain-specific draft methods append the corresponding instructions.
+draft.repo_initiate(
+ agreement_id="daily_repo",
+ initiator=alice,
+ counterparty=bob,
+ cash_leg=cash,
+ collateral_leg=collateral,
+ rate_bps=250,
+ maturity_timestamp_ms=1_704_000_000_000,
+ governance=governance,
+)
+draft.repo_margin_call("daily_repo")
+draft.repo_unwind(
+ agreement_id="daily_repo",
+ initiator=alice,
+ counterparty=bob,
+ cash_leg=cash,
+ collateral_leg=collateral,
+ settlement_timestamp_ms=1_704_086_400_000,
+)
+
+# DVP/PVP settlement plans encode ordering and atomicity for both legs.
+delivery = SettlementLeg(
+ asset_definition_id="bond#wonderland",
+ quantity="10",
+ from_account=alice,
+ to_account=bob,
+ metadata={"isin": "ABC123"},
+)
+payment = SettlementLeg(
+ asset_definition_id="usd#wonderland",
+ quantity="1000",
+ from_account=bob,
+ to_account=alice,
+)
+plan = SettlementPlan(
+    order=SettlementExecutionOrder.PAYMENT_THEN_DELIVERY,
+    atomicity=SettlementAtomicity.ALL_OR_NOTHING,
+)
+
+draft.settlement_dvp(
+ settlement_id="trade_dvp",
+ delivery_leg=delivery,
+ payment_leg=payment,
+ plan=plan,
+ metadata={"desk": "rates"},
+)
+draft.settlement_pvp(
+ settlement_id="trade_pvp",
+ primary_leg=payment,
+ counter_leg=delivery,
+)
+
+envelope = draft.sign_with_keypair(alice_pair)
+client.submit_transaction_envelope_and_wait(envelope)
+```
+
+### JSON Escape Hatch
+
+When a Python helper is not available yet, feed canonical data-model
+`InstructionBox` JSON into `Instruction.from_json` or directly into
+`TransactionBuilder.add_instruction_json`. This is the recommended path for
+`Grant`, `Revoke`, `SetParameter`, `Log`, `Custom`, `Upgrade`, peer/role/NFT
+registration, and non-trigger unregister variants until those helpers are
+typed.
+
+```python
+from iroha_python import Instruction, TransactionBuilder
+
+# Copy this payload from Rust/CLI tooling or from a pinned data-model schema.
+instruction_box_json = """
+{
+  "<InstructionVariant>": {
+    "...": "..."
+  }
+}
+"""
+
+instruction = Instruction.from_json(instruction_box_json)
+submit(instruction)
+
+# Use TransactionBuilder when you need lower-level control than TransactionDraft.
+builder = TransactionBuilder(CHAIN_ID, alice)
+builder.set_metadata(TX_METADATA)
+builder.add_instruction_json(instruction_box_json)
+envelope = builder.sign(alice_pair.private_key)
+client.submit_transaction_envelope_and_wait(envelope)
+```
+
+For generated or opaque instructions, round-trip through JSON before storing
+fixtures:
+
+```python
+# Round trips are useful for validating fixtures generated by another tool.
+payload = Instruction.mint_asset_numeric(ROSE_ASSET, "1").to_json()
+same_instruction = Instruction.from_json(payload)
+print(same_instruction.as_dict())
+```
+
+## Transaction Workflows
+
+Use `TransactionDraft` for applications that build multiple instructions before
+signing. A draft lets you keep transaction-level settings such as `ttl_ms`,
+`nonce`, and metadata in one place, then sign once:
+
+```python
+config = TransactionConfig(
+    chain_id=CHAIN_ID,
+    authority=alice,
+    # TTL and nonce are transaction-level properties shared by all instructions.
+    ttl_ms=120_000,
+    nonce=1,
+    metadata={**TX_METADATA, "source": "python-docs"},
+)
+
+draft = TransactionDraft(config)
+# Draft methods append instructions but do not submit anything yet.
+draft.register_domain("wonderland", metadata={"owner": "docs"})
+draft.register_account(bob, metadata={"role": "user"})
+draft.register_asset_definition_numeric(
+    ROSE_DEFINITION,
+    owner=alice,
+    scale=2,
+    mintable="Infinitely",
+)
+draft.mint_asset_numeric(ROSE_ASSET, "100")
+draft.transfer_asset_numeric(ROSE_ASSET, "25", destination=bob)
+
+# Signing freezes the draft into an envelope ready for Torii.
+envelope = draft.sign_with_keypair(alice_pair)
+receipt = client.submit_transaction_envelope(envelope)
+status = client.wait_for_transaction_status(envelope.hash_hex(), timeout=30)
+print(receipt, status)
+```
+
+Export a deterministic manifest for review, auditing, or wallet handoff:
+
+```python
+import json
+from pathlib import Path
+
+# Manifests are review artifacts; they are not submitted by themselves.
+manifest = draft.to_manifest_dict(include_creation_time=True)
+print(json.dumps(manifest, indent=2))
+
+Path("transaction_manifest.json").write_text(
+    draft.to_manifest_json(indent=2, include_creation_time=True),
+    encoding="utf-8",
+)
+```
+
+Attach a lane privacy proof before signing when the target lane requires it:
+
+```python
+# Attach the proof before signing so it is covered by the transaction hash.
+draft.add_lane_privacy_merkle_proof(
+    commitment_id=7,
+    leaf=bytes.fromhex("aa" * 32),
+    leaf_index=3,
+    audit_path=[bytes.fromhex("bb" * 32), None, bytes.fromhex("cc" * 32)],
+    proof_backend="halo2/ipa",
+    proof_bytes=b"...proof bytes...",
+    verifying_key_bytes=b"...verifying key bytes...",
+)
+envelope = draft.sign_with_keypair(alice_pair)
+```
+
+## Queries
+
+Typed query helpers return dataclasses instead of raw JSON dictionaries. They
+are the easiest way to start because the SDK parses pagination and common
+record fields for you:
+
+```python
+# Typed pages expose `.items` plus pagination metadata such as `.total`.
+accounts = client.list_accounts_typed(limit=25, sort="id")
+for account in accounts.items:
+    print(account.id, account.metadata)
+
+domains = client.list_domains_typed(limit=10)
+definitions = client.query_asset_definitions_typed(limit=10)
+print(domains.total, definitions.total)
+```
+
+Use the generic request helpers when a Torii endpoint does not yet have a typed
+wrapper:
+
+```python
+# Drop to raw JSON when you need an endpoint before a typed helper exists.
+payload = client.request_json("GET", "/v1/parameters", expected_status=(200,))
+metrics = client.get_metrics(as_text=True)
+```
+
+Account inventory helpers require an account identifier accepted by the SDK's
+normalizer. Use canonical I105 account IDs or on-chain aliases; if a block
+explorer or raw endpoint returns an ID that the SDK rejects, resolve it to a
+canonical account ID before calling these helpers:
+
+```python
+# These helpers expect a canonical account ID or an alias the SDK can normalize.
+assets = client.list_account_assets_typed(alice, limit=10)
+transactions = client.query_account_transactions_typed(alice, limit=5)
+permissions = client.list_account_permissions_typed(alice, limit=20)
+
+print(len(assets.items), len(transactions.items), len(permissions.items))
+```
+
+## Events
+
+Streaming helpers decode JSON payloads by default. Pass `with_metadata=True`
+when you need the SSE event name, id, retry hint, and raw payload. Pair streams
+with `EventCursor` to persist the latest event id. These examples wait for live
+events, so run them against a node where the corresponding event stream is
+enabled and active.
+
+```python
+from iroha_python import DataEventFilter, EventCursor
+
+# Narrow the stream to proof events with the expected backend and proof hash.
+proof_filter = DataEventFilter.proof(
+ backend="halo2/ipa",
+ proof_hash_hex="deadbeef" * 8,
+)
+
+# Persist the latest SSE id so a reconnect can resume from the same point.
+cursor = EventCursor()
+for event in client.stream_events(
+    filter=proof_filter,
+    cursor=cursor,
+    resume=True,
+    with_metadata=True,
+):
+    print(event.id, event.event, event.data)
+    break
+
+for event in client.stream_trigger_events(trigger_id="hourly_reward", resume=True):
+    print(event)
+    break
+
+for tx_event in client.stream_pipeline_transactions(status="Queued"):
+    print(tx_event)
+    break
+```
+
+## Keys and Addresses
+
+The SDK exposes local signing helpers for every signature algorithm compiled
+into the native extension. These helpers do not call Taira, but they do require
+the native extension:
+
+```python
+from iroha_python import (
+    ED25519_ALGORITHM,
+    derive_confidential_keyset_from_hex,
+    derive_keypair_from_seed,
+    hash_blake2b_32,
+    verify,
+)
+from iroha_python.address import AccountAddress
+
+# Key derivation and signing are local; no network call is made here.
+ed_pair = derive_keypair_from_seed(b"alice", ED25519_ALGORITHM)
+signature = ed_pair.sign(b"payload")
+assert verify(ED25519_ALGORITHM, ed_pair.public_key, b"payload", signature)
+
+# Account addresses combine a domain and public key into canonical I105 form.
+address = AccountAddress.from_account(domain="wonderland", public_key=ed_pair.public_key)
+print(address.canonical_hex())
+print(address.to_i105(0x02F1))
+
+# Confidential key helpers derive local viewing/spending material.
+confidential = derive_confidential_keyset_from_hex("01" * 32)
+print(confidential.as_hex())
+print(hash_blake2b_32(b"payload").hex())
+```
+
+Use `supported_crypto_algorithms()` to see what your wheel supports. The
+generic helpers use canonical algorithm labels and work for Ed25519,
+secp256k1, ML-DSA, GOST, BLS, and SM2 when those algorithms are compiled in:
+
+```python
+from iroha_python import (
+    CryptoKeyPair,
+    derive_keypair_from_seed,
+    load_keypair,
+    parse_private_key_multihash,
+    parse_public_key_multihash,
+    private_key_multihash,
+    public_key_multihash,
+    sign,
+    supported_crypto_algorithms,
+    verify,
+)
+
+message = b"iroha multi-algorithm signing"
+
+# Iterate the algorithms compiled into the installed native extension.
+for algorithm in supported_crypto_algorithms():
+    keypair = derive_keypair_from_seed(f"docs:{algorithm}".encode(), algorithm)
+    signature = keypair.sign(message)
+
+    # Both the object method and the generic helper verify the same signature.
+    assert keypair.verify(message, signature)
+    assert verify(algorithm, keypair.public_key, message, signature)
+
+    # Loading a private key should reconstruct the same public key.
+    loaded = load_keypair(keypair.private_key, algorithm)
+    assert loaded.public_key == keypair.public_key
+    assert sign(algorithm, loaded.private_key, message) != b""
+
+    # Prefixed multihashes carry the algorithm label with the key bytes.
+    public_multihash = public_key_multihash(
+        algorithm,
+        keypair.public_key,
+        prefixed=True,
+    )
+    private_multihash = private_key_multihash(
+        algorithm,
+        keypair.private_key,
+        prefixed=True,
+    )
+
+    public_algorithm, public_key = parse_public_key_multihash(public_multihash)
+    private_algorithm, private_key = parse_private_key_multihash(private_multihash)
+    restored = CryptoKeyPair.from_private_key_multihash(private_multihash)
+
+    # Round-trip checks catch mismatched algorithm labels or key encodings.
+    assert public_algorithm == algorithm
+    assert public_key == keypair.public_key
+    assert private_algorithm == algorithm
+    assert private_key == keypair.private_key
+    assert restored == keypair
+```
+
+### Chinese SM Cryptography
+
+The Python SDK exposes both generic SM2 helpers and SM2-specific convenience
+helpers. Use the node capability advert to pick the SM2 distinguishing
+identifier expected by the target network:
+
+```python
+from iroha_python import (
+    SM2_ALGORITHM,
+    SM2_DEFAULT_DISTINGUISHED_ID,
+    derive_keypair_from_seed,
+    derive_sm2_keypair_from_seed,
+    sign,
+    sign_sm2,
+    verify,
+    verify_sm2,
+)
+
+capabilities = client.get_node_capabilities_typed()
+sm = capabilities.crypto.sm if capabilities.crypto else None
+# Use the node's default SM2 distinguishing ID when the node advertises one.
+distid = sm.sm2_distid_default if sm else SM2_DEFAULT_DISTINGUISHED_ID
+
+# The SM2-specific helper accepts the distinguishing ID explicitly.
+pair = derive_sm2_keypair_from_seed(bytes.fromhex("11" * 32), distid=distid)
+message = b"iroha-sm2-example"
+signature = pair.sign(message)
+
+assert pair.verify(message, signature)
+assert verify_sm2(pair.public_key, message, signature, distid=distid)
+assert sign_sm2(pair.private_key, message, distid=distid) != b""
+
+# The generic API works when you only need the canonical `sm2` label.
+generic_pair = derive_keypair_from_seed(bytes.fromhex("22" * 32), SM2_ALGORITHM)
+generic_signature = sign(SM2_ALGORITHM, generic_pair.private_key, message)
+assert verify(SM2_ALGORITHM, generic_pair.public_key, message, generic_signature)
+
+print(pair.public_key_sec1_hex)
+print(pair.public_key_multihash)
+```
+
+`crypto.sm.enabled` tells you whether the node accepts SM-family algorithms in
+its current policy. The same advert includes the SM hash policy and acceleration
+status, which is useful when deciding whether to enable SM2-specific flows:
+
+```python
+capabilities = client.get_node_capabilities_typed()
+
+# `enabled` is the submit-time policy flag, not just local SDK support.
+if capabilities.crypto and capabilities.crypto.sm.enabled:
+    sm = capabilities.crypto.sm
+    print(sm.default_hash)
+    print(sm.allowed_signing)
+    print(sm.acceleration.policy)
+else:
+    print("SM crypto is not enabled by this node")
+```
+
+Public Taira exposed the SM capability advert during the check, but SM signing
+was disabled there. Its advertised signing algorithms were `ed25519`,
+`secp256k1`, and `bls_normal`, so do not submit SM2-signed transactions to that
+deployment unless the capability payload changes.
+
+### GOST and Post-Quantum Keys
+
+Use the generic crypto API for GOST R 34.10-2012 parameter sets and ML-DSA
+(`ml-dsa`) post-quantum signatures. The same key-pair object handles signing,
+verification, and multihash export:
+
+```python
+from iroha_python import (
+    GOST_3410_2012_256_PARAMSET_A_ALGORITHM,
+    GOST_3410_2012_256_PARAMSET_B_ALGORITHM,
+    GOST_3410_2012_256_PARAMSET_C_ALGORITHM,
+    GOST_3410_2012_512_PARAMSET_A_ALGORITHM,
+    GOST_3410_2012_512_PARAMSET_B_ALGORITHM,
+    ML_DSA_ALGORITHM,
+    derive_keypair_from_seed,
+    verify,
+)
+from iroha_python.address import AccountAddress
+
+CHAIN_DISCRIMINANT = 0x02F1
+message = b"iroha gost and post-quantum example"
+
+# Crypto helpers use canonical labels; account addresses use compact aliases.
+GOST_ADDRESS_ALIASES = {
+    GOST_3410_2012_256_PARAMSET_A_ALGORITHM: "gost-256-a",
+    GOST_3410_2012_256_PARAMSET_B_ALGORITHM: "gost-256-b",
+    GOST_3410_2012_256_PARAMSET_C_ALGORITHM: "gost-256-c",
+    GOST_3410_2012_512_PARAMSET_A_ALGORITHM: "gost-512-a",
+    GOST_3410_2012_512_PARAMSET_B_ALGORITHM: "gost-512-b",
+}
+
+# Derive and verify one local keypair for every GOST parameter set.
+for crypto_algorithm, address_algorithm in GOST_ADDRESS_ALIASES.items():
+    keypair = derive_keypair_from_seed(
+        f"docs:{crypto_algorithm}".encode(),
+        crypto_algorithm,
+    )
+    signature = keypair.sign(message)
+
+    assert verify(crypto_algorithm, keypair.public_key, message, signature)
+
+    address = AccountAddress.from_account(
+        domain="wonderland",
+        public_key=keypair.public_key,
+        # Account addresses use compact curve aliases for GOST parameter sets.
+        algorithm=address_algorithm,
+    )
+    print(crypto_algorithm)
+    print(address.canonical_hex())
+    print(address.to_i105(CHAIN_DISCRIMINANT))
+    print(keypair.prefixed_public_key_multihash)
+
+# ML-DSA follows the same generic signing and address flow.
+mldsa_keypair = derive_keypair_from_seed(b"docs:ml-dsa", ML_DSA_ALGORITHM)
+mldsa_signature = mldsa_keypair.sign(message)
+assert verify(ML_DSA_ALGORITHM, mldsa_keypair.public_key, message, mldsa_signature)
+post_quantum_address = AccountAddress.from_account(
+ domain="wonderland",
+ public_key=mldsa_keypair.public_key,
+ algorithm="ml-dsa",
+)
+print(post_quantum_address.canonical_hex())
+print(post_quantum_address.to_i105(CHAIN_DISCRIMINANT))
+print(mldsa_keypair.prefixed_public_key_multihash)
+```
+
+Gate GOST and post-quantum flows on the node's advertised signing algorithms.
+Use the raw capability payload for forward-compatible algorithm names:
+
+```python
+capabilities = client.request_json(
+ "GET",
+ "/v1/node/capabilities",
+ expected_status=(200,),
+)
+crypto = capabilities.get("crypto", {})
+sm = crypto.get("sm", {})
+# Nodes advertise the signing algorithms they will accept for transactions.
+allowed = set(sm.get("allowed_signing", []))
+
+GOST_ALGORITHMS = {
+ "gost3410-2012-256-paramset-a",
+ "gost3410-2012-256-paramset-b",
+ "gost3410-2012-256-paramset-c",
+ "gost3410-2012-512-paramset-a",
+ "gost3410-2012-512-paramset-b",
+}
+
+# Local support is not enough; submit only when the node advertises support.
+supports_gost = bool(allowed & GOST_ALGORITHMS)
+supports_post_quantum = "ml-dsa" in allowed
+supports_sm2 = "sm2" in allowed and bool(sm.get("enabled", False))
+
+print(supports_gost, supports_post_quantum, supports_sm2)
+```
+
+If a node does not advertise the algorithm you need, use the key only for local
+or offline workflows. Do not submit transactions signed with that algorithm to
+that node. During the public Taira check, GOST and ML-DSA were available as SDK
+crypto helpers in the upstream Python library but were not advertised by the
+node for transaction signing.
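+
+A short sketch of that gating decision, reusing the `allowed` set from the
+example above; the `ed25519` fallback is an assumption about what the target
+node accepts:
+
+```python
+# Prefer ML-DSA when the node advertises it; otherwise fall back to ed25519.
+submit_algorithm = "ml-dsa" if "ml-dsa" in allowed else "ed25519"
+submit_pair = derive_keypair_from_seed(b"docs:submit", submit_algorithm)
+```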
+
+## Config-Aware Client Creation
+
+Use `resolve_torii_client_config` when your application reads node settings
+from a file but still needs environment- or test-specific overrides:
+
+```python
+import json
+from iroha_python import create_torii_client, resolve_torii_client_config
+
+with open("iroha_config.json", "r", encoding="utf-8") as handle:
+    raw_config = json.load(handle)
+
+# Override only the fields that vary by environment.
+resolved = resolve_torii_client_config(
+    config=raw_config,
+    overrides={"timeout_ms": 2_000, "max_retries": 5},
+)
+
+# Pass the resolved config into the same client constructor used elsewhere.
+client = create_torii_client(
+ raw_config.get("torii", {}).get("address", TORII_URL),
+ resolved_config=resolved,
+)
+```
+
+## Offline V2 Readiness
+
+The current Python SDK exposes Torii's Offline V2 readiness endpoint. It does
+not expose high-level offline allowance registration or renewal helpers.
+
+```python
+readiness = client.get_offline_v2_readiness()
+print(readiness.offline_note_v2)
+print(readiness.offline_one_use_keys)
+print(readiness.offline_fountain_qr_v1)
+```
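+
+Treat the readiness flags as feature gates. A minimal sketch, assuming your
+application implements its own offline-note flow elsewhere:
+
+```python
+# Skip offline flows entirely when the node does not report readiness.
+if not readiness.offline_note_v2:
+    raise RuntimeError("Offline Note V2 is not ready on this node")
+```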
+
+## Subscriptions
+
+Subscription helpers are mutating service calls inherited from the shared Torii
+client used by `iroha_python.ToriiClient`. Use IDs and assets that exist on the
+network you target.
+
+```python
+# The plan defines billing cadence, retry policy, and usage pricing.
+usage_plan = {
+ "provider": alice,
+ "billing": {
+ "cadence": {
+ "kind": "monthly_calendar",
+ "detail": {"anchor_day": 1, "anchor_time_ms": 0},
+ },
+ "bill_for": {"period": "previous_period", "value": None},
+ "retry_backoff_ms": 86_400_000,
+ "max_failures": 3,
+ "grace_ms": 604_800_000,
+ },
+ "pricing": {
+ "kind": "usage",
+ "detail": {
+ "unit_price": "0.024",
+ "unit_key": "compute_ms",
+ "asset_definition": "usd#wonderland",
+ },
+ },
+}
+
+# The provider signs plan creation.
+client.create_subscription_plan(
+    authority=alice,
+    private_key=alice_pair.private_key_hex,
+    plan_id="compute#wonderland",
+    plan=usage_plan,
+)
+
+# The subscriber signs subscription creation.
+client.create_subscription(
+    authority=bob,
+    private_key=bob_pair.private_key_hex,
+    subscription_id="sub-001",
+    plan_id="compute#wonderland",
+)
+
+# Usage is recorded by the provider and then charged on demand.
+client.record_subscription_usage(
+ "sub-001",
+ authority=alice,
+ private_key=alice_pair.private_key_hex,
+ unit_key="compute_ms",
+ delta="3600000",
+)
+client.charge_subscription_now(
+ "sub-001",
+ authority=alice,
+ private_key=alice_pair.private_key_hex,
+)
+```
+
+## Connect
+
+Build and parse Connect URIs, and read the public Connect status exposed by
+Taira:
+
+```python
+from iroha_python.connect import ConnectUri, build_connect_uri, parse_connect_uri
+
+# Connect URIs are what an app hands to a wallet to start a session.
+uri = build_connect_uri(
+    ConnectUri(
+        sid="base64url-session-id",
+        chain_id="taira",
+        node="taira.sora.org",
+    )
+)
+parsed = parse_connect_uri(uri)
+# Status tells you whether the node currently exposes Connect.
+status = client.get_connect_status_typed()
+
+assert parsed.chain_id == "taira"
+print(status.enabled, status.sessions_active)
+```
+
+Frame codecs, session key derivation, and session creation require the native
+extension and an enabled Connect session route:
+
+```python
+from iroha_python import (
+    ConnectControlClose,
+    ConnectControlOpen,
+    ConnectDirection,
+    ConnectFrame,
+    ConnectPermissions,
+    decode_connect_frame,
+    encode_connect_frame,
+    generate_connect_keypair,
+)
+
+# The app keypair is separate from the account key used for transactions.
+connect_pair = generate_connect_keypair()
+info = client.create_connect_session_info(
+ {"role": "app", "sid": connect_pair.public_key.hex()}
+)
+print(info.app_uri, info.wallet_token, info.expires_at)
+
+# Control frames negotiate permissions before encrypted messages are sent.
+frame = ConnectFrame(
+ sid=bytes.fromhex("01" * 32),
+ direction=ConnectDirection.APP_TO_WALLET,
+ sequence=1,
+ control=ConnectControlOpen(
+ app_public_key=connect_pair.public_key,
+ chain_id=CHAIN_ID,
+ permissions=ConnectPermissions(methods=["SIGN_REQUEST_TX"], events=[]),
+ ),
+)
+payload = encode_connect_frame(frame)
+assert decode_connect_frame(payload) == frame
+
+# Closing the control channel is explicit and carries a reason code.
+client.send_connect_control_frame(
+ "base64url-session-id",
+ ConnectControlClose(role="App", code=4100, reason="finished", retryable=False),
+)
+```
+
+Encrypt post-approval messages with a stateful session:
+
+```python
+from iroha_python import (
+    ConnectDirection,
+    ConnectSession,
+    ConnectSessionKeys,
+    ConnectSignRequestRawPayload,
+)
+
+# Derive symmetric session keys from both parties' keys and the session ID.
+keys = ConnectSessionKeys.derive(
+ local_private_key=bytes.fromhex("11" * 32),
+ peer_public_key=bytes.fromhex("22" * 32),
+ sid=bytes.fromhex("33" * 32),
+)
+session = ConnectSession(
+ sid=bytes.fromhex("33" * 32),
+ keys=keys,
+)
+# Encrypt application payloads after the session is approved.
+encrypted = session.encrypt_app_to_wallet(
+    ConnectSignRequestRawPayload(domain_tag="SIGN", payload=b"hash")
+)
+state = session.snapshot_state().to_dict()
+print(encrypted.sequence, state)
+```
+
+## Governance, Runtime, and Admin Surfaces
+
+These read-only calls returned successfully against public Taira:
+
+```python
+client = create_torii_client("https://taira.sora.org")
+
+# Governance reads return either current settings or typed not-found wrappers.
+protected = client.get_protected_namespaces()
+referendum = client.get_governance_referendum_typed("ref-1")
+tally = client.get_governance_tally_typed("ref-1")
+locks = client.get_governance_locks_typed("ref-1")
+unlock_stats = client.get_governance_unlock_stats_typed()
+
+print(protected, referendum.found)
+print(tally.approve, list(locks.locks), unlock_stats.expired_locks_now)
+
+# Runtime reads expose the active ABI and any pending upgrade records.
+abi = client.get_runtime_abi_active_typed()
+abi_hash = client.get_runtime_abi_hash_typed()
+runtime_metrics = client.get_runtime_metrics_typed()
+upgrades = client.list_runtime_upgrades_typed()
+capabilities = client.get_node_capabilities_typed()
+
+print(abi, abi_hash, runtime_metrics)
+print(upgrades.total, capabilities.abi_version)
+```
+
+Runtime upgrade helpers accept the manifest shape used by the runtime upgrade
+API. They are operator actions, so use them only against a node where your
+account and token are authorized:
+
+```python
+admin = create_torii_client(
+    TORII_URL,
+    auth_token="admin-token",
+    api_token="torii-token",
+)
+
+# Propose creates the upgrade instructions; activation/cancel are operator actions.
+upgrade = admin.propose_runtime_upgrade(
+    {
+        "name": "Refresh runtime provenance",
+        "description": "Schedules a no-ABI-change runtime rollout.",
+        "abi_version": 1,
+        "abi_hash": "00" * 32,
+        "added_syscalls": [],
+        "added_pointer_types": [],
+        "start_height": 1_500_000,
+        "end_height": 1_500_256,
+    }
+)
+print(upgrade["tx_instructions"])
+
+admin.activate_runtime_upgrade("deadbeef" * 4)
+admin.cancel_runtime_upgrade("feedface" * 4)
+```
+
+## Status, Consensus, and Network Telemetry
+
+```python
+# `/status` is the public node snapshot endpoint on Taira.
+status = client.request_json("GET", "/status", expected_status=(200,))
+print(status["blocks"], status["txs_approved"])
+
+# Sumeragi and time endpoints expose consensus and clock diagnostics.
+sumeragi = client.get_sumeragi_status_typed()
+print(sumeragi.highest_qc.height, sumeragi.tx_queue.saturated)
+
+time_now = client.get_time_now_typed()
+time_status = client.get_time_status_typed()
+for sample in time_status.samples:
+    print(sample.peer, sample.last_offset_ms, sample.last_rtt_ms)
+print(time_now.now_ms)
+```
+
+## SoraFS, UAID, and Kaigi Helpers
+
+These helpers are available when the target node exposes the corresponding
+Nexus/SORA endpoints. Treat empty lists as a valid response: public Taira may
+have the route enabled without data for the sample manifest or UAID.
+
+```python
+# SoraFS status queries are reads scoped by manifest and status.
+por_status = client.get_sorafs_por_status(manifest_hex="ab" * 32, status="verified")
+print(len(por_status))
+
+# UAID helpers inspect wallet/data-space bindings for one identifier.
+uaid = "aabb" * 16
+bindings = client.get_uaid_bindings_typed(uaid)
+manifests = client.list_space_directory_manifests_typed(
+    uaid,
+    dataspace=11,
+    status="active",
+)
+print(len(bindings.dataspaces), len(manifests.manifests))
+
+# Kaigi health summarizes relay availability when the route is enabled.
+health = client.get_kaigi_relays_health_typed()
+print(health.healthy_total, health.failovers_total)
+```
+
+## Norito RPC and GPU Helpers
+
+Use `NoritoRpcClient` when you already have Norito bytes and need to call a
+binary Torii endpoint. The example requires a signed envelope from a previous
+transaction template:
+
+```python
+from iroha_python import NoritoRpcClient, NoritoRpcConfig
+
+# Use the binary RPC client for endpoints that expect Norito bytes.
+with NoritoRpcClient(NoritoRpcConfig(TORII_URL, timeout=5.0)) as rpc:
+    response_bytes = rpc.call("/v1/transaction", envelope.signed_transaction_versioned)
+    print(len(response_bytes))
+```
+
+CUDA helpers return `None` when the backend is not available, so applications
+can fall back to scalar implementations:
+
+```python
+from iroha_python import bn254_add_cuda, cuda_available, poseidon2_cuda
+
+# Always probe CUDA availability before calling optional GPU helpers.
+if cuda_available():
+    print(poseidon2_cuda(1, 2))
+    print(bn254_add_cuda((1, 0, 0, 0), (2, 0, 0, 0)))
+```
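+
+A fallback sketch; `poseidon2_scalar` here is a hypothetical stand-in for your
+own CPU implementation, not an SDK function:
+
+```python
+digest = poseidon2_cuda(1, 2)
+if digest is None:
+    # Hypothetical scalar fallback; replace with your own CPU path.
+    digest = poseidon2_scalar(1, 2)
+```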
+
+## Current Coverage
+
+The Python SDK already includes helpers for:
+
+- Torii submission, status, query, and admin flows
+- typed instruction builders for common ISI and domain-specific extensions
+- transaction drafts, manifests, signing, and signed transaction envelope
+ workflows
+- streaming events, filters, and resumable cursors
+- Offline V2 readiness and Torii subscription helpers
+- account address, all-algorithm signing helpers, multihash round trips, SM2,
+ GOST, ML-DSA, BLS, and confidential key handling
+- Connect URIs, sessions, frames, encryption helpers, and registry admin
+- governance, runtime upgrade, Sumeragi, node-admin, SoraFS, UAID, and Kaigi
+ endpoint wrappers where the node exposes those features
+
+## Upstream References
+
+- `python/iroha_python/README.md`
+- `python/iroha_python/DESIGN.md`
+- `python/iroha_python/src/iroha_python`
+
+Those files track the current Python surface more accurately than the older
+Iroha 2-era examples that used the `iroha2` package name.
diff --git a/src/guide/tutorials/rust.md b/src/guide/tutorials/rust.md
new file mode 100644
index 000000000..b632aa9be
--- /dev/null
+++ b/src/guide/tutorials/rust.md
@@ -0,0 +1,81 @@
+# Rust
+
+The Rust implementation lives in the main workspace and remains the most direct
+way to work with the Iroha 3 codebase.
+
+## What You Get
+
+The upstream repository currently exposes:
+
+- the `iroha` Rust client crate
+- the `iroha` CLI as the most complete reference client
+- shared data model, crypto, and Norito crates used by the SDK layer
+
+## Recommended Starting Point
+
+For the current state of the project, start with the reference CLI and the
+workspace itself:
+
+```bash
+git clone --branch i23-features https://github.com/hyperledger-iroha/iroha.git
+cd iroha
+cargo build --workspace
+```
+
+Run the reference client with the checked-in default client config:
+
+```bash
+cargo run --bin iroha -- --config ./defaults/client.toml ledger domain list all
+```
+
+## Try Taira Read-Only
+
+From the same workspace checkout, try the public Taira diagnostics helper:
+
+```bash
+cargo run --bin iroha -- taira doctor \
+  --public-root https://taira.sora.org \
+  --json
+```
+
+For route-level checks, use Torii's JSON API directly:
+
+```bash
+curl -fsS https://taira.sora.org/status \
+  | jq '{blocks, txs_approved, txs_rejected, queue_size, peers}'
+
+curl -fsS 'https://taira.sora.org/v1/assets/definitions?limit=5' \
+  | jq -r '.items[] | [.id, .name, .total_quantity] | @tsv'
+```
+
+After you create `taira.client.toml`, the same binary can run signed canary
+commands against Taira. Keep those separate from ordinary unit tests because
+they require a faucet-funded account and live testnet availability.
+
+## Using the Rust Client Crate
+
+For the current source state, depend on the `i23-features` branch directly:
+
+```toml
+[dependencies]
+iroha = { git = "https://github.com/hyperledger-iroha/iroha.git", branch = "i23-features", package = "iroha" }
+```
+
+If you need the most complete examples of how the Rust surfaces are used in
+practice, inspect:
+
+- `crates/iroha_cli`
+- `crates/iroha/README.md`
+- `crates/iroha_cli/README.md`
+
+You can regenerate a local CLI help snapshot with:
+
+```bash
+cargo run -p iroha_cli --bin iroha -- tools markdown-help > crates/iroha_cli/CommandLineHelp.md
+```
+
+## Notes
+
+- The CLI currently provides better coverage than the standalone crate docs.
+- The workspace targets `std`; IVM/no-std builds are not the default path.
+- For operator-style flows, the CLI documentation is the most current source.
diff --git a/src/guide/tutorials/sample-apps.md b/src/guide/tutorials/sample-apps.md
new file mode 100644
index 000000000..3bf44cc60
--- /dev/null
+++ b/src/guide/tutorials/sample-apps.md
@@ -0,0 +1,59 @@
+# Sample Apps
+
+These repositories show complete client applications built around Iroha.
+Use them when you want to see SDK setup, account flows, signing, Torii
+calls, and UI integration in a larger codebase than the minimal tutorials.
+
+The sample apps are examples, not production wallet templates. Review their
+dependency versions, network assumptions, and key-storage choices before
+copying code into a real product.
+
+## Available Apps
+
+| App | Platform | What it demonstrates | Status |
+| ------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [Iroha Demo JavaScript](https://github.com/soramitsu/iroha-demo-javascript) | Desktop app with Electron, Vue 3, Pinia, and Vite | Direct Torii connectivity through `@iroha/iroha-js`, local transaction signing, wallet balances and history, send/receive QR flows, staking, governance, explorer, and live E2E checks | Most complete current sample |
+| [Iroha Demo Android](https://github.com/soramitsu/iroha-demo-android) | Android point app | Native Android project structure for a point-transfer style mobile application | Older mobile demo; use the [Android, Kotlin, and Java SDK page](/guide/tutorials/kotlin-java.md) for current SDK setup |
+| [`examples/ios/ConnectMinimalApp`](https://github.com/hyperledger-iroha/iroha/tree/i23-features/examples/ios/ConnectMinimalApp) | SwiftPM executable | `NoritoNativeBridge` availability check, `ConnectSession` event stream intent, and diagnostics export intent | Iroha repository harness, but currently out of sync: the package path resolves to `examples/IrohaSwift`, and source references Connect helpers absent from `IrohaSwift/Sources` |
+| [`examples/ios/NoritoDemo`](https://github.com/hyperledger-iroha/iroha/tree/i23-features/examples/ios/NoritoDemo) | SwiftUI iOS template | XcodeGen template with conditional `NoritoBridge` linkage and Iroha Connect UI code | Iroha repository template, but the project manifest does not declare the `IrohaSwift` package dependency imported by the sources |
+| [`examples/ios/NoritoDemoXcode`](https://github.com/hyperledger-iroha/iroha/tree/i23-features/examples/ios/NoritoDemoXcode) | SwiftUI Xcode project | Generated Xcode project with Swift sources importing `IrohaSwift` and conditionally using `NoritoBridgeKit` | Iroha repository demo, but the checked-in Xcode project does not declare the `IrohaSwift` package dependency imported by the sources |
+| [Iroha Demo iOS](https://github.com/soramitsu/iroha-demo-ios) | iOS point app | Xcode/CocoaPods project structure for a point-transfer style mobile application | Historical external demo; use the in-tree Swift examples and [Swift and iOS SDK page](/guide/tutorials/swift.md) for current setup |
+
+## JavaScript Desktop Demo
+
+Start with the JavaScript demo if you want a working reference for current
+application flows. It is a desktop client that talks directly to Torii
+through the in-repo JavaScript SDK, without a separate backend. The app
+includes:
+
+- first-run account setup and key import or generation
+- endpoint settings for SORA Nexus networks
+- locally signed transfers submitted to Torii
+- wallet balances, transaction history, and explorer views
+- QR-based receive and send flows
+- staking and governance screens
+- live Electron E2E checks against configured Torii endpoints
+
+The JavaScript demo requires Node.js 20+ and a Rust toolchain for the
+native `iroha_js_host` module. Its README contains the current install,
+build, test, and live E2E commands.
+
+## Mobile Samples
+
+The external Android and iOS point-app repositories are historical examples
+of the original point-app concept and mobile project layout. Swift/iOS
+sample code also exists in the
+[Iroha repository's `examples/ios/` directory](https://github.com/hyperledger-iroha/iroha/tree/i23-features/examples/ios),
+but its checked-in project manifests are currently out of sync with the
+package API and dependency layout. The current Android SDK lives under
+`java/iroha_android/`.
+
+Use the SDK pages for new application setup:
+
+- [Android, Kotlin, and Java](/guide/tutorials/kotlin-java.md)
+- [Swift and iOS](/guide/tutorials/swift.md)
+
+For new mobile work, confirm the SDK version, Torii endpoint shape, account
+format, and transaction format against the current
+[Iroha `i23-features` branch](https://github.com/hyperledger-iroha/iroha/tree/i23-features)
+before porting code from either external mobile demo.
diff --git a/src/guide/tutorials/swift.md b/src/guide/tutorials/swift.md
new file mode 100644
index 000000000..2b22e04fd
--- /dev/null
+++ b/src/guide/tutorials/swift.md
@@ -0,0 +1,213 @@
+# Swift and iOS
+
+The Swift SDK shipped by the upstream workspace is the `IrohaSwift` Swift
+package under `IrohaSwift/`. Its package manifest defines one library product,
+`IrohaSwift`, and targets iOS 15+ and macOS 12+ with Swift tools 5.9.
+
+The package depends on the native `NoritoBridge` binary target. Package
+resolution validates `../dist/NoritoBridge.xcframework` before building, and
+transaction or Connect crypto paths throw bridge-unavailable errors when the
+native symbols are not loaded.
+
+## Swift Package Manager
+
+When developing against a checked-out workspace, point SwiftPM at the local
+`IrohaSwift/` package directory. The package identity used by
+`Package.swift` is `IrohaSwift`:
+
+```swift
+dependencies: [
+ .package(name: "IrohaSwift", path: "/path/to/iroha/IrohaSwift")
+],
+targets: [
+ .target(
+ name: "YourApp",
+ dependencies: [
+ .product(name: "IrohaSwift", package: "IrohaSwift")
+ ]
+ )
+]
+```
+
+Adjust the path for your app. Do not copy the current
+`examples/ios/ConnectMinimalApp` path as-is; that manifest resolves
+`../../IrohaSwift` to `examples/IrohaSwift`.
+
+Before resolving the package, make sure the bridge exists at the workspace root:
+
+```bash
+cd /path/to/iroha
+make bridge-xcframework
+```
+
+This produces `dist/NoritoBridge.xcframework`; `IrohaSwift/Package.swift`
+references it as `../dist/NoritoBridge.xcframework`.
+
+## CocoaPods
+
+The codebase also contains `IrohaSwift/IrohaSwift.podspec`. It declares the
+`IrohaSwift` pod, Swift 5.9, and iOS 15. The podspec pulls Swift sources from
+the main repository; the native bridge still has to be present and linked for
+transaction encoding, non-Ed25519 signing, and Connect crypto.
+
+## Quickstart
+
+```swift
+import Foundation
+import IrohaSwift
+
+let torii = ToriiClient(baseURL: URL(string: "http://127.0.0.1:8080")!)
+let sdk = IrohaSDK(toriiClient: torii)
+
+let keypair = try Keypair.generate()
+let accountId = try keypair.accountId()
+
+if #available(iOS 15.0, macOS 12.0, *) {
+    let balances = try await torii.getAssets(accountId: accountId)
+    print("balances:", balances)
+}
+```
+
+## Try Taira Read-Only
+
+Start with a plain HTTP probe to confirm the device or simulator can reach the
+public Taira endpoint:
+
+```swift
+import Foundation
+
+if #available(iOS 15.0, macOS 12.0, *) {
+    let url = URL(string: "https://taira.sora.org/status")!
+    let (data, response) = try await URLSession.shared.data(from: url)
+
+    if let http = response as? HTTPURLResponse {
+        print("status:", http.statusCode)
+    }
+    print(String(decoding: data, as: UTF8.self))
+}
+```
+
+Use the same `URLSession` check for
+`https://taira.sora.org/v1/assets/definitions?limit=5` while you are building
+UI and retry behavior. Switch to `IrohaSDK` submit helpers only after the
+app loads signer material from secure storage and the account is funded on
+Taira.
+
+To build and submit a transaction, use the `IrohaSDK` helpers. These call the
+native bridge-backed transaction encoder:
+
+```swift
+let transfer = TransferRequest(
+ chainId: "00000000-0000-0000-0000-000000000000",
+ authority: accountId,
+ assetDefinitionId: "66owaQmAQMuHxPzxUN3bqZ6FJfDa",
+ quantity: "1",
+ destination: accountId,
+ description: "demo"
+)
+
+if #available(iOS 15.0, macOS 12.0, *) {
+    let status = try await sdk.submitAndWait(
+        transfer: transfer,
+        keypair: keypair
+    )
+    print(status.content.status.kind)
+}
+```
+
+`TransferRequest`, `MintRequest`, `BurnRequest`, `ShieldRequest`, and
+`UnshieldRequest` validate canonical account IDs and canonical unprefixed
+Base58 asset-definition IDs before signing.
+
+## Signing
+
+`Keypair` is the Ed25519 convenience API. For other algorithms, construct an
+`IrohaSDK` with `defaultSigningAlgorithm` and use `generateSigningKey()` or
+`signingKey(fromSeed:)`:
+
+```swift
+let pqSdk = IrohaSDK(
+    baseURL: torii.baseURL,
+    defaultSigningAlgorithm: .mlDsa
+)
+let signingKey = try pqSdk.generateSigningKey()
+```
+
+The `SigningAlgorithm` enum currently includes Ed25519, secp256k1, BLS normal
+and small variants, ML-DSA, GOST R 34.10-2012 parameter sets, and SM2. Native
+bridge support is required outside the Ed25519 convenience path.
+
+## Connect
+
+The Connect client is implemented in Swift source, with crypto and frame codecs
+backed by `NoritoBridge`:
+
+```swift
+let sessionID = Data(repeating: 0, count: 32) // replace with the session bytes
+let sid = ""
+let request = try ConnectClient.makeWebSocketRequest(
+ baseURL: URL(string: "https://node.example")!,
+ sid: sid,
+ role: .app,
+ token: ""
+)
+
+let client = ConnectClient(request: request)
+await client.start()
+
+let session = ConnectSession(sessionID: sessionID, client: client)
+let keyPair = try ConnectCrypto.generateKeyPair()
+```
+
+`ConnectSession` handles open and close controls, encrypted envelope reads,
+direction keys, flow control, event streams, balance streams, and diagnostics
+journals.
+
+## Current Coverage
+
+The Swift source currently includes:
+
+- `ToriiClient` HTTP helpers for accounts, assets, aliases, explorer pages,
+ RWA, contracts, multisig, governance, subscriptions, data availability,
+ confidential assets, node/runtime status, health, metrics, and SSE streams
+- `IrohaSDK` transaction builders and submit/poll helpers for transfer, mint,
+ burn, shield, unshield, ZK transfer, ZK asset registration, metadata,
+ identifier claims, multisig registration, and governance instructions
+- pending transaction queue support through `PendingTransactionQueue` and
+ `FilePendingTransactionQueue`
+- account-address and I105 helpers through `AccountAddress` and `AccountId`
+- Ed25519, secp256k1, ML-DSA, BLS, GOST, and SM2 signing surfaces, with native
+ bridge support where required
+- Connect WebSocket, frame, crypto, session, queue, replay, and diagnostics
+ helpers
+- Offline V2 note, receipt, QR stream, and transaction models
+- SoraFS, data-availability, and proof-attachment helpers
+
+## In-Tree Samples
+
+The upstream workspace contains Swift/iOS example directories under
+`examples/ios/`, but the project manifests are not a reliable source of current
+setup instructions:
+
+- `examples/ios/ConnectMinimalApp` is a SwiftPM executable harness, but its
+ package manifest currently resolves `../../IrohaSwift` to
+ `examples/IrohaSwift`, and its source references Connect helpers that are not
+ present in `IrohaSwift/Sources`.
+- `examples/ios/NoritoDemo` and `examples/ios/NoritoDemoXcode` contain SwiftUI
+ code that imports `IrohaSwift` and conditionally uses `NoritoBridgeKit`, but
+ their checked-in project manifests do not declare the `IrohaSwift` package
+ dependency.
+
+Use `IrohaSwift/Sources/IrohaSwift` and `IrohaSwift/Tests/IrohaSwiftTests` as
+the current API references until those sample manifests are brought back in sync.
+
+## Source References
+
+- `IrohaSwift/Package.swift`
+- `IrohaSwift/IrohaSwift.podspec`
+- `IrohaSwift/Sources/IrohaSwift/ToriiClient.swift`
+- `IrohaSwift/Sources/IrohaSwift/TxBuilder.swift`
+- `IrohaSwift/Sources/IrohaSwift/TransactionEncoder.swift`
+- `IrohaSwift/Sources/IrohaSwift/ConnectClient.swift`
+- `IrohaSwift/Sources/IrohaSwift/ConnectSession.swift`
+- `examples/ios/ConnectMinimalApp/Package.swift`
diff --git a/src/help/configuration-issues.md b/src/help/configuration-issues.md
new file mode 100644
index 000000000..28e893e92
--- /dev/null
+++ b/src/help/configuration-issues.md
@@ -0,0 +1,99 @@
+# Troubleshooting Configuration Issues
+
+This section offers troubleshooting tips for issues with Iroha 2 and Iroha 3
+configuration. Make sure you
+[checked the keys](./overview.md#check-the-keys) first, as key mismatches are
+the most common source of issues in Iroha.
+
+If the issue you are experiencing is not described here, contact us via
+[Telegram](https://t.me/hyperledgeriroha).
+
+## Outdated genesis on a Docker Compose setup
+
+When you are using the Docker Compose version of Iroha, you might encounter
+the issue of one of the peer containers failing with the
+`Failed to deserialize raw genesis block` error. This usually means the peer,
+signed genesis transaction, and generated configuration were produced by
+different Iroha revisions or profiles.
+
+Check the failure with these steps:
+
+1. Use `docker ps` to check the current containers. Depending on the
+ generated profile, you will usually see `hyperledger/iroha:dev`
+ containers. The default Docker Compose profile contains four peer
+ containers, although your generated `docker-compose.yml` may differ.
+
+2. Check the logs and look for the
+ `Failed to deserialize raw genesis block` error. If you started your
+ Iroha in daemon mode with `docker compose up -d`, use the
+ `docker compose logs` command.
+
+The right fix depends on how the deployment is used. If this is a
+basic demo and you do not need to preserve peer data, regenerate a matching
+localnet or Docker Compose bundle with Kagami:
+
+```bash
+cargo run --bin kagami -- localnet --build-line iroha3 --peers 4 --out-dir ./localnet
+cargo run --bin kagami -- docker --peers 4 --config-dir ./localnet --image hyperledger/iroha:dev --out-file ./docker-compose.yml
+```
+
+Then remove the old container state and restart from the regenerated
+`genesis.signed.nrt`, peer `config.toml` files, and `client.toml`.
+
+If you need to restore the Iroha instance data, do the following:
+
+1. Connect the second Iroha peer that will copy the data from the first
+ (failed) peer.
+2. Wait for the new peer to synchronize the data with the first peer.
+3. Leave the new peer active.
+4. Update the genesis and configuration files of the first peer only as part of
+ a coordinated migration.
+
+::: info
+
+There is no general automatic rewrite path for replacing genesis on a live
+network. Treat this as a coordinated migration: preserve the old state, bring
+up compatible peers, and only move validators to the new configuration after
+the operators agree on the migration plan.
+
+:::
+
+## Multihash Format of Private and Public Keys
+
+If you look at the
+[client configuration](/guide/configure/client-configuration.md), you will
+notice that the keys there are given in
+[multi-hash format](https://github.com/multiformats/multihash).
+
+If you've never worked with multi-hash before, it is natural to assume that
+the right-hand side is a plain hexadecimal representation of the key bytes
+(two symbols per byte) rather than a multihash with its own prefix bytes,
+and to call `from_hex` on the string literal in both the `public_key` and
+`private_key` instantiation.
+
+It is also natural to assume that calling `PrivateKey::try_from_str` on the
+string literal would only succeed for a correct key, so that getting the
+length of the key wrong, e.g. 32 bytes instead of 64, would raise an error
+message.
+
+**Both of these assumptions are wrong.** Unfortunately, the error messages
+don't help in debugging this particular kind of failure.
+
+**How to fix**: use `hex_literal`. This will also turn an ugly string of
+characters into a nice small table of obviously hexadecimal numbers.
+
+::: warning
+
+Even the `try_from_str` implementation cannot verify if a given string is a
+valid `PrivateKey` and warn you if it isn't.
+
+It will catch some obvious errors, e.g. if the string contains an invalid
+symbol. However, since we aim to support many key formats, it can't do much
+else. It cannot tell if the key is the _correct_ private key _for the given
+account_ either, unless you submit an instruction.
+
+:::
+
+These sorts of subtle mistakes can be avoided, for example, by
+deserialising directly from string literals, or by generating a fresh
+key-pair in places where it makes sense.
diff --git a/src/help/deployment-issues.md b/src/help/deployment-issues.md
new file mode 100644
index 000000000..00347c8f2
--- /dev/null
+++ b/src/help/deployment-issues.md
@@ -0,0 +1,90 @@
+# Troubleshooting Deployment Issues
+
+This section offers troubleshooting tips for issues with Iroha 2 and Iroha 3
+deployment. If the issue you are experiencing is not described here,
+contact us via [Telegram](https://t.me/hyperledgeriroha).
+
+## Start with generated artifacts
+
+For local and test deployments, prefer artifacts generated by Kagami instead
+of hand-written peer files:
+
+```bash
+cargo run --bin kagami -- localnet --build-line iroha3 --peers 4 --out-dir ./localnet
+```
+
+The generated directory contains peer configs, genesis material, start
+scripts, and a README for the selected build line. Use `--build-line iroha2`
+only when the deployment intentionally targets the Iroha 2 profile.
+
+## Peer does not start
+
+Check these items first:
+
+- `irohad --config <path>` points at the peer's own TOML file.
+- `public_key` and `private_key` in the peer config belong to the same key
+ pair.
+- `genesis.public_key` matches the key used to sign the genesis transaction.
+- validator peer identities use BLS-Normal keys, and `trusted_peers_pop`
+ contains proof-of-possession entries for the local key and trusted peers.
+- ports for Torii and P2P are not already bound by another process.
+- the Kura store directory belongs to the same chain and was not copied from a
+ different network profile.
+
+Use config tracing when the daemon reads more than one TOML layer:
+
+```bash
+cargo run --bin irohad -- --config ./config.toml --trace-config
+```
+
+## Docker and Compose
+
+Generate Compose from the current Kagami localnet output so the command-line
+arguments and config files match the checked-out code:
+
+```bash
+cargo run --bin kagami -- localnet --build-line iroha3 --peers 4 --out-dir ./localnet
+cargo run --bin kagami -- docker --peers 4 --config-dir ./localnet --image hyperledger/iroha:dev --out-file ./localnet/docker-compose.yml --force
+docker compose -f ./localnet/docker-compose.yml up
+```
+
+If a compose deployment starts and then stalls, inspect the daemon logs for:
+
+- mismatched `chain`
+- one peer using a different genesis transaction or manifest
+- advertised P2P addresses that only work inside the container network
+- local volume reuse after regenerating genesis
+
+When testing a fresh genesis, remove the old Kura volumes before restarting
+the stack. Keeping old block storage with a new genesis will make replay fail.
+
+## Kubernetes
+
+For Kubernetes, treat each validator as stateful infrastructure:
+
+- give each peer a stable identity key and stable persistent volume
+- expose P2P addresses that other peers can resolve from inside the cluster
+- mount config and genesis files as immutable config for a rollout
+- roll out all genesis or topology changes deliberately, not as an automatic
+ config-map refresh
+
+If a pod restarts repeatedly, compare the rendered config in the pod with the
+expected [`peer.template.toml`](/reference/peer-config/index.md#template) and
+check whether the peer is replaying old Kura data.
+
+## Sora profile
+
+Iroha 3 deployments that use Nexus, SoraFS, or multi-lane flows should start
+the daemon with the Sora profile enabled:
+
+```bash
+cargo run --bin irohad -- --config ./config.toml --sora
+```
+
+or:
+
+```bash
+IROHA_SORA_PROFILE=true cargo run --bin irohad -- --config ./config.toml
+```
+
+Use the same profile consistently across validators in the same network.
diff --git a/src/help/index.md b/src/help/index.md
new file mode 100644
index 000000000..4c08020f2
--- /dev/null
+++ b/src/help/index.md
@@ -0,0 +1,7 @@
+# Receive support
+
+From time to time, you may have questions about Iroha that you would like to discuss in detail with others. There are three ways to quickly get in touch with our community: Telegram, Discord, and GitHub.
+
+A large part of the community currently uses [Telegram](https://t.me/hyperledgeriroha) for communication. The Hyperledger part of the team prefers [Discord](https://discord.gg/hyperledger), with two dedicated channels: `iroha` and `iroha-2-contributors`. The Discord and Telegram channels are synchronized, so users of both platforms see your messages.
+
+Finally, you can [create a GitHub issue](https://github.com/hyperledger-iroha/iroha/issues/new/choose), whether it's a request to update documentation, a suggestion for the core team, or a bug you have found.
diff --git a/src/help/installation-issues.md b/src/help/installation-issues.md
new file mode 100644
index 000000000..e01670b81
--- /dev/null
+++ b/src/help/installation-issues.md
@@ -0,0 +1,179 @@
+# Troubleshooting Installation Issues
+
+This section offers troubleshooting tips for issues with Iroha 2 and Iroha 3
+installation. If the issue you are experiencing is not described here,
+contact us via [Telegram](https://t.me/hyperledgeriroha).
+
+## Quick checks
+
+Most installation failures come from one of four places:
+
+- a Rust toolchain older than the version pinned by the upstream workspace
+- `cargo` or `rustc` resolving to a different installation than `rustup`
+- missing system build tools such as a C compiler, `pkg-config`, or CMake
+- stale generated snippets or local build artifacts after switching between
+ Iroha branches
+
+From the Iroha source checkout, start with:
+
+```bash
+rustup show
+cargo --version
+rustc --version
+cargo metadata --no-deps
+```
+
+If `cargo metadata` fails, fix the local toolchain before running docs
+commands such as `pnpm get-snippets`, because the docs may invoke Kagami to
+generate the current data-model schema.
+
+## Troubleshooting Rust Toolchain
+
+Sometimes, things don’t go as planned, especially if you had `rust` on your
+system a while ago but didn’t upgrade. A similar problem can occur in
+Python: XKCD has a famous example of what that might look like:
+
+![XKCD: Python Environment](/img/install-troubles.png)
+
+### Check Rust version
+
+In the interest of preserving both your and our sanity, make sure that you
+have the right version of `cargo` paired with the right version of `rustc`.
+The current upstream workspace declares `rust-version = "1.92"` and pins the
+toolchain channel in `rust-toolchain.toml`. To show the versions, do
+
+```bash
+$ cargo -V
+cargo 1.93.1 (...)
+```
+
+and then
+
+```bash
+$ rustc --version
+rustc 1.93.1 (...)
+```
+
+If you have higher versions, you're fine. If you have lower versions, you
+can run the following command to update it:
+
+```bash
+$ rustup toolchain update stable
+```
+
+### Check installation location
+
+If you get lower version numbers **and** you updated the toolchain and it
+didn’t work… let’s just say it’s a common problem, but it doesn’t have a
+common solution.
+
+Firstly, you should establish where the version that you want to use is
+installed:
+
+```bash
+$ rustup which rustc
+$ rustup which cargo
+```
+
+User installations of the toolchains are _usually_ in
+`~/.rustup/toolchains/stable-*/bin/`. If that is the case, you should be
+able to run
+
+```bash
+$ rustup toolchain update stable
+```
+
+and that should fix your problems.
+
+### Check the default Rust version
+
+Another option is that you have the up-to-date `stable` toolchain, but it
+is not set as the default. Run:
+
+```bash
+$ rustup default stable
+```
+
+This can happen if you installed a `nightly` version, or set a specific
+Rust version, but forgot to un-set it.
+
+### Check if there are other Rust versions
+
+Continuing down the troubleshooting rabbit-hole, we could have shell
+aliases:
+
+```bash
+$ type rustc
+$ type cargo
+```
+
+If these point to locations other than the one you saw when running
+`rustup which *`, then you have a problem. Note that it’s not enough to
+just
+
+```bash
+$ alias rustc="~/.rustup/toolchains/stable-*/bin/rustc"
+$ alias cargo="~/.rustup/toolchains/stable-*/bin/cargo"
+```
+
+because there is an internal logic that could break, regardless of how you
+re-arrange your shell aliases.
+
+The simplest solution would be to remove the versions that you don’t use.
+
+It’s easier _said_ than _done_, however, since it entails tracking all the
+Rust installations available to you. Usually, there are only two: the system
+package manager version and the one that was installed into the standard
+location in your home folder by `rustup`. For the former, consult your
+(Linux) distribution’s manual (e.g. `apt remove rust`). For the latter, run:
+
+```bash
+$ rustup toolchain list
+```
+
+And then, for every `<toolchain>` in that list (without the angle brackets,
+of course):
+
+```bash
+$ rustup toolchain remove <toolchain>
+```
+
+After that, make sure that
+
+```bash
+$ cargo --help
+```
+
+results in a command-not-found error, i.e. that you have no active Rust
+toolchain installed. Then, run:
+
+```bash
+$ rustup toolchain install stable
+```
+
+## Troubleshooting Python toolchain
+
+When you install the Python Wheel package using pip during [Python client setup](/guide/tutorials/python.md), you may encounter an error like:
+"iroha_python-*.whl is not a supported wheel on this platform".
+
+This error means that pip is outdated, so you need to update it.
+First of all, it is recommended to check your OS for updates and perform a system upgrade.
+
+If this doesn't work, you can try updating `pip` for your user directory:
+
+```bash
+python -m pip install --upgrade pip
+```
+
+Make sure the `pip` you invoke is the one installed in your home directory. To
+check, run `whereis pip` and see whether `/home/username/.local/bin/pip` is
+among the paths. If not, update your shell's `PATH` variable, as sketched
+below.
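+
+A minimal sketch for POSIX-like shells (which profile file to edit for a
+permanent change depends on your shell):
+
+```bash
+# Prepend the per-user pip install location for the current session.
+export PATH="$HOME/.local/bin:$PATH"
+```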
+
+If the issue persists, please [contact us](/help/) and report the outputs of
+the following commands:
+
+```bash
+python --version
+python3 --version
+pip --version
+pip3 --version
+```
diff --git a/src/help/integration-issues.md b/src/help/integration-issues.md
new file mode 100644
index 000000000..c96549a82
--- /dev/null
+++ b/src/help/integration-issues.md
@@ -0,0 +1,84 @@
+# Troubleshooting Integration Issues
+
+This section offers troubleshooting tips for issues with Iroha 2 and Iroha 3
+integration. If the issue you are experiencing is not described here,
+contact us via [Telegram](https://t.me/hyperledgeriroha).
+
+## Client cannot connect
+
+Check that the client config points to the peer's Torii address:
+
+```toml
+torii_url = "http://127.0.0.1:8080/"
+```
+
+For CLI checks, pass the same file explicitly:
+
+```bash
+cargo run --bin iroha -- --config ./localnet/client.toml ledger domain list all
+```
+
+If the peer runs in Docker or Kubernetes, use the host or service address that
+is reachable from the client process. `127.0.0.1` inside a container is not
+the host machine.
+
+For public Taira tests, start with an unsigned endpoint probe:
+
+```bash
+curl -fsS https://taira.sora.org/status \
+ | jq '{blocks, txs_approved, txs_rejected, queue_size, peers}'
+
+curl -fsS 'https://taira.sora.org/v1/domains?limit=5' \
+ | jq -r '.items[].id'
+```
+
+If these commands fail with `502`, TLS, DNS, or timeout errors, fix network
+reachability or wait for the public testnet endpoint before debugging account
+keys or transaction payloads.
+
+## Transactions are rejected
+
+Most transaction failures are caused by identity or authorization mismatch:
+
+- the account public key in the client config does not match the private key
+ used for signing
+- the account is not registered in genesis or by a prior transaction
+- the account lacks the permission token or role required by the runtime
+ validator
+- object IDs use old Iroha 2-era forms instead of current canonical forms
+ such as `domain.dataspace`
+
+Use `--output-format text` while debugging CLI commands so errors are easier
+to read:
+
+```bash
+cargo run --bin iroha -- --config ./localnet/client.toml --output-format text ledger transaction ping --msg "hello"
+```
+
+## Queries return empty results
+
+Empty query results do not always mean the query failed. Check:
+
+- the transaction that should create the object was committed
+- the queried domain, asset definition, or account ID is canonical
+- pagination or filters are not excluding the expected row
+- the client is connected to the intended network, not another localnet
+
+For domain checks, start with the broadest query:
+
+```bash
+cargo run --bin iroha -- --config ./localnet/client.toml ledger domain list all
+```
+
+## Event or block streams stop early
+
+Block and event stream examples rely on Torii streaming endpoints. Verify the
+peer is still running, then test with a timeout:
+
+```bash
+cargo run --bin iroha -- --config ./localnet/client.toml ledger blocks 1 --timeout 30s
+cargo run --bin iroha -- --config ./localnet/client.toml ledger events block
+```
+
+For HTTP integrations, compare your endpoint paths with the current
+[Torii endpoint reference](/reference/torii-endpoints.md).
diff --git a/src/help/overview.md b/src/help/overview.md
new file mode 100644
index 000000000..1d570cad9
--- /dev/null
+++ b/src/help/overview.md
@@ -0,0 +1,32 @@
+# Troubleshooting
+
+This section is intended to help if you encounter issues while working with
+Iroha. If something goes wrong, please [check the keys](#check-the-keys)
+first. If that doesn't help, check the troubleshooting instructions for
+each stage:
+
+- [Installation issues](./installation-issues.md)
+- [Configuration issues](./configuration-issues.md)
+- [Deployment issues](./deployment-issues.md)
+- [Integration issues](./integration-issues.md)
+
+If the issue you are experiencing is not described here, contact us via
+[Telegram](https://t.me/hyperledgeriroha).
+
+## Check the keys
+
+Most issues arise as a result of mismatched keys. This is why we recommend
+following this rule: **If something goes wrong, check the keys first**.
+
+Here's a quick explanation: the error messages that arise when a peer's keys
+do not match the keys in the array of trusted peers are deliberately
+indistinguishable, because more specific errors would expose the peers'
+public keys. As such, if you have Helm charts or Kubernetes deployments with
+keys defined via environment variables, compare the configured
+[`public_key`](/reference/peer-config/params.md#param-public-key),
+[`private_key`](/reference/peer-config/params.md#param-private-key), and
+[`trusted_peers`](/reference/peer-config/params.md#param-trusted-peers)
+values before investigating higher-level failures.
+
+If in doubt, [generate a new pair of keys](/guide/security/generating-cryptographic-keys.md).
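+
+From a source checkout, one way to do that is with Kagami (subcommand
+options can differ between build lines, so check the help output first):
+
+```bash
+cargo run --bin kagami -- crypto --help
+cargo run --bin kagami -- crypto
+```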
diff --git a/src/img/KeePassXC.png b/src/img/KeePassXC.png
new file mode 100644
index 000000000..be329b6dc
Binary files /dev/null and b/src/img/KeePassXC.png differ
diff --git a/src/img/appendix_running-iroha_cli-output.png b/src/img/appendix_running-iroha_cli-output.png
new file mode 100644
index 000000000..36ef3d37d
Binary files /dev/null and b/src/img/appendix_running-iroha_cli-output.png differ
diff --git a/src/img/data_filters.png b/src/img/data_filters.png
new file mode 100644
index 000000000..55bf54ecd
Binary files /dev/null and b/src/img/data_filters.png differ
diff --git a/src/img/ffi.png b/src/img/ffi.png
new file mode 100644
index 000000000..36e9b5388
Binary files /dev/null and b/src/img/ffi.png differ
diff --git a/src/img/install-troubles.png b/src/img/install-troubles.png
new file mode 100644
index 000000000..031335ad5
Binary files /dev/null and b/src/img/install-troubles.png differ
diff --git a/src/img/iroha_java_commits.png b/src/img/iroha_java_commits.png
new file mode 100644
index 000000000..33fe2c869
Binary files /dev/null and b/src/img/iroha_java_commits.png differ
diff --git a/src/img/iroha_java_hash.png b/src/img/iroha_java_hash.png
new file mode 100644
index 000000000..c82346c18
Binary files /dev/null and b/src/img/iroha_java_hash.png differ
diff --git a/src/img/keepassxc_pk_agent.png b/src/img/keepassxc_pk_agent.png
new file mode 100644
index 000000000..fdd07dadf
Binary files /dev/null and b/src/img/keepassxc_pk_agent.png differ
diff --git a/src/img/keepassxc_private_key.png b/src/img/keepassxc_private_key.png
new file mode 100644
index 000000000..297f94409
Binary files /dev/null and b/src/img/keepassxc_private_key.png differ
diff --git a/src/img/keepassxc_ssh_agent.png b/src/img/keepassxc_ssh_agent.png
new file mode 100644
index 000000000..9ae656270
Binary files /dev/null and b/src/img/keepassxc_ssh_agent.png differ
diff --git a/src/img/sample-vue-app.gif b/src/img/sample-vue-app.gif
new file mode 100644
index 000000000..05578e52a
Binary files /dev/null and b/src/img/sample-vue-app.gif differ
diff --git a/src/img/yubikey_up.jpg b/src/img/yubikey_up.jpg
new file mode 100644
index 000000000..cc0d5057b
Binary files /dev/null and b/src/img/yubikey_up.jpg differ
diff --git a/src/index.md b/src/index.md
index b1a3f7eb1..27ab87e65 100644
--- a/src/index.md
+++ b/src/index.md
@@ -1,3 +1,68 @@
-# Hey
+---
+layout: home
-Yep
+hero:
+ name: Hyperledger Iroha 3
+ text: Documentation
+ tagline:
+ Deterministic blockchain platform for SORA Nexus, SDKs, and operator
+ workflows
+ image:
+ src: /icon.svg
+ alt: Hyperledger Iroha 3 logo
+ #actions:
+ #- theme: alt
+ # text: View on GitHub
+ # link: https://github.com/hyperledger-iroha/iroha/tree/i23-features
+
+features:
+ - icon:
+ dark: /start.svg
+ light: /start-light.svg
+ title: Get Started
+ details:
+ Build the current workspace, launch a local network, and start using
+ the Iroha 3 CLI
+ link: /get-started/
+ - icon:
+ dark: /build.svg
+ light: /build-light.svg
+ title: Guide
+ details:
+ Find SDKs, best practices, configuration, security, and operator
+ workflows
+ link: /guide/
+ - icon:
+ dark: /explained.svg
+ light: /explained-light.svg
+ title: Architecture
+ details:
+ Understand Torii, Sumeragi, Norito, IVM, and the Nexus data-space
+ model
+ link: /blockchain/iroha-explained
+ - icon:
+ dark: /reference.svg
+ light: /reference-light.svg
+ title: Reference
+ details:
+ Consult the current binary, genesis, Torii, and compatibility
+ reference pages
+ link: /reference/
+ # - title: Cookbook # (TBA)
+
+footer: true
+---
+
+
+
+
+Hyperledger Iroha is part of LF Decentralized Trust. Learn more at iroha.tech.
diff --git a/src/public/apple-touch-icon.png b/src/public/apple-touch-icon.png
new file mode 100644
index 000000000..63b5d4f8c
Binary files /dev/null and b/src/public/apple-touch-icon.png differ
diff --git a/src/public/build-light.svg b/src/public/build-light.svg
new file mode 100644
index 000000000..bc0a0b195
--- /dev/null
+++ b/src/public/build-light.svg
@@ -0,0 +1,3 @@
+
+
+
diff --git a/src/public/build.svg b/src/public/build.svg
new file mode 100644
index 000000000..1600aa0e1
--- /dev/null
+++ b/src/public/build.svg
@@ -0,0 +1,3 @@
+
+
+
diff --git a/src/public/compat-matrix.json b/src/public/compat-matrix.json
new file mode 100644
index 000000000..51431510a
--- /dev/null
+++ b/src/public/compat-matrix.json
@@ -0,0 +1,347 @@
+{
+ "source": {
+ "repo": "hyperledger-iroha/iroha",
+ "repo_url": "https://github.com/hyperledger-iroha/iroha",
+ "branch": "i23-features",
+ "branch_url": "https://github.com/hyperledger-iroha/iroha/tree/i23-features",
+ "commit": "b530168a7468",
+ "generated_at": "2026-04-28"
+ },
+ "included_sdks": [
+ {
+ "name": "Rust"
+ },
+ {
+ "name": "Python"
+ },
+ {
+ "name": "JavaScript / TypeScript"
+ },
+ {
+ "name": "Kotlin / Android"
+ },
+ {
+ "name": "Java / Android"
+ },
+ {
+ "name": "Swift"
+ },
+ {
+ "name": "C# / .NET"
+ }
+ ],
+ "stories": [
+ {
+ "name": "Account address and I105 rendering",
+ "results": [
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ }
+ ]
+ },
+ {
+ "name": "Canonical Torii request signing",
+ "results": [
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ }
+ ]
+ },
+ {
+ "name": "Norito payload encoding and fixture parity",
+ "results": [
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ }
+ ]
+ },
+ {
+ "name": "Torii read, query, and RPC clients",
+ "results": [
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ }
+ ]
+ },
+ {
+ "name": "Transaction build, sign, submit, and status polling",
+ "results": [
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ }
+ ]
+ },
+ {
+ "name": "Core ledger flow builders",
+ "results": [
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ }
+ ]
+ },
+ {
+ "name": "Event, SSE, and subscription streams",
+ "results": [
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ }
+ ]
+ },
+ {
+ "name": "Multisig and policy TTL helpers",
+ "results": [
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ }
+ ]
+ },
+ {
+ "name": "Offline V2 and wallet readiness",
+ "results": [
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ }
+ ]
+ },
+ {
+ "name": "SoraFS and gateway helpers",
+ "results": [
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ }
+ ]
+ },
+ {
+ "name": "RWA/NFT explorer and instruction helpers",
+ "results": [
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ }
+ ]
+ },
+ {
+ "name": "Live testnet smoke lane",
+ "results": [
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ },
+ {
+ "status": "ok"
+ }
+ ]
+ }
+ ]
+}
diff --git a/src/public/explained-light.svg b/src/public/explained-light.svg
new file mode 100644
index 000000000..7bf1d9ac3
--- /dev/null
+++ b/src/public/explained-light.svg
@@ -0,0 +1,3 @@
+
+
+
diff --git a/src/public/explained.svg b/src/public/explained.svg
new file mode 100644
index 000000000..c1a110800
--- /dev/null
+++ b/src/public/explained.svg
@@ -0,0 +1,3 @@
+
+
+
diff --git a/src/public/favicon.ico b/src/public/favicon.ico
new file mode 100644
index 000000000..b14a82960
Binary files /dev/null and b/src/public/favicon.ico differ
diff --git a/src/public/icon-192.png b/src/public/icon-192.png
new file mode 100644
index 000000000..914b582c7
Binary files /dev/null and b/src/public/icon-192.png differ
diff --git a/src/public/icon-512.png b/src/public/icon-512.png
new file mode 100644
index 000000000..9763292b7
Binary files /dev/null and b/src/public/icon-512.png differ
diff --git a/src/public/icon.svg b/src/public/icon.svg
new file mode 100644
index 000000000..56115da8a
--- /dev/null
+++ b/src/public/icon.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/src/public/manifest.webmanifest b/src/public/manifest.webmanifest
new file mode 100644
index 000000000..cb86e18be
--- /dev/null
+++ b/src/public/manifest.webmanifest
@@ -0,0 +1,9 @@
+{
+ "name": "Iroha 3 Documentation",
+ "start_url": "https://hyperledger.github.io/iroha-2-docs/",
+ "display": "standalone",
+ "icons": [
+ { "src": "icon-192.png", "type": "image/png", "sizes": "192x192" },
+ { "src": "icon-512.png", "type": "image/png", "sizes": "512x512" }
+ ]
+}
diff --git a/src/public/reference-light.svg b/src/public/reference-light.svg
new file mode 100644
index 000000000..722fe5c2c
--- /dev/null
+++ b/src/public/reference-light.svg
@@ -0,0 +1,3 @@
+
+
+
diff --git a/src/public/reference.svg b/src/public/reference.svg
new file mode 100644
index 000000000..38407ca62
--- /dev/null
+++ b/src/public/reference.svg
@@ -0,0 +1,3 @@
+
+
+
diff --git a/src/public/start-light.svg b/src/public/start-light.svg
new file mode 100644
index 000000000..f3bb28b63
--- /dev/null
+++ b/src/public/start-light.svg
@@ -0,0 +1,3 @@
+
+
+
diff --git a/src/public/start.svg b/src/public/start.svg
new file mode 100644
index 000000000..8f93861e8
--- /dev/null
+++ b/src/public/start.svg
@@ -0,0 +1,3 @@
+
+
+
diff --git a/src/reference/binaries.md b/src/reference/binaries.md
new file mode 100644
index 000000000..bc8ce2948
--- /dev/null
+++ b/src/reference/binaries.md
@@ -0,0 +1,81 @@
+# Working with Iroha Binaries
+
+The current Iroha 2 and Iroha 3 operator workflow revolves around three
+primary binaries:
+
+- [`irohad`](https://github.com/hyperledger-iroha/iroha/tree/i23-features/crates/irohad) for running a peer daemon
+- [`iroha`](https://github.com/hyperledger-iroha/iroha/tree/i23-features/crates/iroha_cli) for CLI and operator commands
+- [`kagami`](https://github.com/hyperledger-iroha/iroha/tree/i23-features/crates/iroha_kagami) for keys, genesis, localnets, and profiles
+
+The source tree also exposes track-specific aliases:
+
+- `iroha2` and `iroha3` for CLI flows
+- `iroha2d` and `iroha3d` for daemon startup
+
+Use those aliases when scripts need to make the selected build line explicit.
+Use `iroha` and `irohad` for general examples and shared automation.
+
+## Build from Source
+
+From the upstream workspace root:
+
+```bash
+cargo build --release -p irohad -p iroha_cli -p iroha_kagami
+```
+
+The release binaries are then available in `target/release/`.
+
+To inspect the command surface:
+
+```bash
+./target/release/irohad --help
+./target/release/iroha --help
+./target/release/kagami --help
+./target/release/iroha3 --help
+./target/release/iroha3d --help
+```
+
+## Run Directly from the Repository
+
+If you do not want to install anything globally, use `cargo run`:
+
+```bash
+cargo run --bin irohad -- --help
+cargo run --bin iroha -- --help
+cargo run --bin kagami -- --help
+cargo run --bin iroha3 -- --help
+cargo run --bin iroha3d -- --help
+```
+
+## Docker Image
+
+The upstream workspace uses `kagami localnet` and `kagami docker` to generate
+Docker Compose files that match the checked-out code. The `hyperledger/iroha:dev`
+image can be used with those generated files.
+
+Run the CLI in a container:
+
+```bash
+docker run -t hyperledger/iroha:dev iroha --help
+```
+
+Run Kagami in a container:
+
+```bash
+docker run -t hyperledger/iroha:dev kagami --help
+```
+
+For peer startup, generate a localnet and Compose file first:
+
+```bash
+cargo run --bin kagami -- localnet --build-line iroha3 --peers 4 --out-dir ./localnet
+cargo run --bin kagami -- docker --peers 4 --config-dir ./localnet --image hyperledger/iroha:dev --out-file ./localnet/docker-compose.yml --force
+docker compose -f ./localnet/docker-compose.yml up
+```
+
+## Which Binary Should I Use?
+
+- Use `irohad` when you are starting or operating peers.
+- Use `iroha` when you need to query the ledger, submit transactions, or inspect operator endpoints.
+- Use `kagami` when you need keys, genesis manifests, profile bundles, or localnet assets.
+- Use `iroha2`/`iroha2d` or `iroha3`/`iroha3d` when a script must pin the Iroha track.
diff --git a/src/reference/compatibility-matrix.md b/src/reference/compatibility-matrix.md
new file mode 100644
index 000000000..aabfc7e4c
--- /dev/null
+++ b/src/reference/compatibility-matrix.md
@@ -0,0 +1,19 @@
+# Compatibility Matrix
+
+The compatibility matrix shows cross-SDK scenario coverage for the current
+Iroha 3 docs set. By default, the page loads the bundled snapshot for the
+[`hyperledger-iroha/iroha` `i23-features` branch](https://github.com/hyperledger-iroha/iroha/tree/i23-features).
+
+The matrix consists of:
+
+- **Stories** in the first column
+- **SDKs** across the remaining columns
+- **Status symbols** for covered, failed, and missing data
+
+
+
+::: info
+Set `VITE_COMPAT_MATRIX_URL` only to override the bundled snapshot with a
+compatible live backend. Without that variable, the page loads
+`src/public/compat-matrix.json`.
+:::
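+
+For example, a hypothetical override while previewing the docs locally (the
+exact dev command depends on this repository's package scripts):
+
+```bash
+VITE_COMPAT_MATRIX_URL=https://example.org/compat-matrix.json pnpm dev
+```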
diff --git a/src/reference/data-model-schema.md b/src/reference/data-model-schema.md
new file mode 100644
index 000000000..c4688ee4d
--- /dev/null
+++ b/src/reference/data-model-schema.md
@@ -0,0 +1,11 @@
+# Data Model Schema
+
+This page is generated from the
+[`hyperledger-iroha/iroha` `i23-features` branch](https://github.com/hyperledger-iroha/iroha/tree/i23-features).
+`pnpm get-snippets` reads
+[`docs/source/references/schema.json`](https://github.com/hyperledger-iroha/iroha/blob/i23-features/docs/source/references/schema.json)
+from that source when it is available. If that snapshot is empty, regenerate
+this page from the same branch with the snippet tooling after the upstream
+schema generator succeeds.
+
+
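+From the docs repository root, that regeneration is the snippet command
+named above:
+
+```bash
+# Re-reads docs/source/references/schema.json from the configured source.
+pnpm get-snippets
+```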
diff --git a/src/reference/ffi.md b/src/reference/ffi.md
new file mode 100644
index 000000000..1b91678a9
--- /dev/null
+++ b/src/reference/ffi.md
@@ -0,0 +1,123 @@
+# Foreign Function Interfaces (FFI)
+
+The `iroha_ffi` crate provides macros and traits for generating C ABI
+bindings from Rust APIs. It is used where Iroha types need to cross an FFI
+boundary, for example by SDK bindings or host integrations.
+
+## Why FFI
+
+A function is a rather abstract entity, and while most languages agree on
+what a function should do, the way in which functions are represented is
+very different. Moreover, in some languages, such as Rust, the consequences
+of calling a function and the things that it is allowed to do are also
+different. When Rust APIs need to be called from another language or a
+different host environment, Iroha uses a foreign function interface (FFI)
+to level the playing field.
+
+The main standard used today is the C application binary interface. It's
+simple, widely available, and stable. In principle, you could do
+everything manually, but Iroha provides the `iroha_ffi` crate to generate
+FFI-compliant functions out of an existing Rust API.
+
+You can, of course, do this your way. The `iroha_ffi` crate merely
+generates the code that you would need to generate anyway. Writing the
+necessary boilerplate requires quite a bit of diligence and discipline.
+Every function call over the FFI boundary is `unsafe` with a potential to
+cause undefined behaviour. Our solution revolves around using **robust**
+`repr(C)` types.
+
+::: info
+
+The only exception is pointers. Null checks and pointer validity cannot be
+enforced globally, so raw pointers (as always) are only used in exceptional
+cases. Given that we provide wrappers around almost every instance of an
+object in the Iroha data model, you shouldn't have to use raw pointers at
+all.
+
+:::
+
+## Example
+
+Here is an example of generating a binding:
+
+```rust
+#[derive(FfiType)]
+struct DaysSinceEquinox(u32);
+
+#[ffi_export]
+impl DaysSinceEquinox {
+ pub fn update_value(&mut self, a: &u8) {
+ self.0 = *a as u32;
+ }
+}
+```
+
+The example above will generate the following binding with
+`DaysSinceEquinox` represented as an opaque pointer:
+
+```rust
+pub extern fn DaysSinceEquinox__update_value(handle: *mut DaysSinceEquinox, a: *const u8) -> FfiReturn {
+ // function implementation
+}
+```
+
+## FFI Binding Generation
+
+The `iroha_ffi` crate is used to generate functions that are callable via
+FFI. Given `Rust` structs and methods, it generates the `unsafe` code that
+you would need in order to cross the linking boundary.
+
+A Rust type is converted into a robust `repr(C)` type that can cross the
+FFI boundary with `FfiType::into_ffi`. This goes the other way around as
+well: an FFI `ReprC` type is converted back into a `Rust` type via
+`FfiType::try_from_ffi`.
+
+::: warning
+
+Note that the opposite conversion is fallible and can cause undefined
+behaviour. While we can make the best effort to avoid the most obvious
+mistakes, you must ensure the program's correctness on your end.
+
+:::
+
+The main traits that enable binding generation are `ReprC`, `FfiType`, and
+`FfiConvert`.
+
+| Trait | Description |
+| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `ReprC` | This trait represents a robust type that conforms to C ABI. The type can be safely shared across FFI boundaries. |
+| `FfiType` | This trait defines a corresponding `ReprC` type for a given `Rust` type. The defined `ReprC` type is used in place of the `Rust` type in the API of the generated FFI function. |
+| `FfiConvert` | This trait defines two methods `into_ffi` and `try_from_ffi` that are used to perform the conversion of the `Rust` type to or from `ReprC` type. |
+
+Note that there is no ownership transfer over FFI except for opaque pointer
+types. All other types that carry ownership, such as `Vec<T>`, are cloned.
+
+### Name Mangling
+
+Note the use of double underscores in generated names of FFI objects:
+
+- For the `inherent_fn` method defined on the `StructName` struct, the FFI
+ name would be `StructName__inherent_fn`.
+- For the `MethodName` method from the `TraitName` trait in the
+ `StructName` struct, the FFI name would be
+ `StructName__TraitName__MethodName`.
+- To set the `field_name` field in the `StructName` struct, the FFI
+ function name would be `StructName__set_field_name`.
+- To get the `field_name` field in the `StructName` struct, the FFI
+ function name would be `StructName__field_name`.
+- To get the mutable `field_name` field in the `StructName` struct, the FFI
+  function name would be `StructName__field_name_mut`.
+- For the freestanding function `fn_name` in the module `module_name`, the
+  FFI name would be `module_name__fn_name`.
+- For the traits that are not generic and allow sharing their
+  implementation in the FFI (see `Clone` below), the FFI name would be
+  `__clone`.
+
+ ```rust
+ impl Clone for Type1 {
+ fn clone(&self) -> Self;
+ }
+ impl Clone for Type2 {
+ fn clone(&self) -> Self;
+ }
+ ```
diff --git a/src/reference/genesis.md b/src/reference/genesis.md
new file mode 100644
index 000000000..e8b778920
--- /dev/null
+++ b/src/reference/genesis.md
@@ -0,0 +1,79 @@
+# Genesis Reference
+
+In the current Iroha 3 workflow, a `genesis.json` manifest describes the first
+transactions and parameters that will be applied when the network starts.
+
+The signed artifact distributed to peers is a Norito-encoded `.nrt` file
+produced by `kagami genesis sign`.
+
+## Main Fields
+
+A genesis manifest can define:
+
+- `chain` for the chain identifier
+- `executor` for an optional executor upgrade bytecode path
+- `ivm_dir` for IVM libraries used by triggers and upgrades
+- `consensus_mode` for the initial mode advertised by the manifest
+- `transactions` for ordered parameter updates, instructions, triggers, and topology
+- `crypto` for the initial crypto snapshot
+
+Within `transactions`, topology entries pair peer ids and PoPs together:
+
+```json
+{
+ "peer": "ea0130...",
+ "pop_hex": "0xabcd..."
+}
+```
+
+## Generate a Manifest
+
+Use Kagami to generate a template:
+
+```bash
+cargo run -p iroha_kagami -- genesis generate \
+ --consensus-mode npos \
+ --ivm-dir defaults \
+ --genesis-public-key <genesis public key> > genesis.json
+```
+
+For the public SORA Nexus dataspace, `npos` is the expected consensus mode.
+Other Iroha 3 deployments may use permissioned or NPoS depending on the target
+profile.
+
+## Sign the Manifest
+
+After editing and validating the JSON, sign it into a deployable `.nrt` block:
+
+```bash
+cargo run -p iroha_kagami -- genesis sign genesis.json \
+ --private-key <genesis private key> \
+ --out-file genesis.signed.nrt
+```
+
+`kagami genesis sign` reads the genesis public key from the manifest and uses
+the supplied private key, seed, and algorithm to produce the deployable signed
+block. The result is the file that peers should reference from their config.
+
+## Configure `irohad`
+
+Point the daemon at the signed genesis block:
+
+```toml
+[genesis]
+file = "genesis.signed.nrt"
+public_key = ""
+```
+
+## Related Tools
+
+- `kagami genesis validate`
+- `kagami genesis normalize`
+- `kagami genesis embed-pop`
+- `kagami localnet`
+- `cargo xtask kagami-profiles`
+
+For the full upstream details, see:
+
+- [docs/genesis.md](https://github.com/hyperledger-iroha/iroha/blob/i23-features/docs/genesis.md)
+- [crates/iroha_kagami/README.md](https://github.com/hyperledger-iroha/iroha/blob/i23-features/crates/iroha_kagami/README.md)
diff --git a/src/reference/glossary.md b/src/reference/glossary.md
new file mode 100644
index 000000000..06d847eae
--- /dev/null
+++ b/src/reference/glossary.md
@@ -0,0 +1,221 @@
+# Glossary
+
+Here you can find definitions of all Iroha-related entities.
+
+- [Peer](#peer)
+- [Asset](#asset)
+- [Byzantine fault-tolerance (BFT)](#byzantine-fault-tolerance-bft)
+- [Iroha Components](#iroha-components)
+ - [Sumeragi (Emperor)](#sumeragi-emperor)
+ - [Torii (Gate)](#torii-gate)
+ - [Kura (Warehouse)](#kura-warehouse)
+ - [Kagami (Teacher and Exemplar and/or looking glass)](#kagami-teacher-and-exemplar-and-or-looking-glass)
+ - [Merkle tree (hash tree)](#merkle-tree-hash-tree)
+ - [Smart contracts](#smart-contracts)
+ - [Triggers](#triggers)
+ - [Versioning](#versioning)
+ - [Hijiri (peer reputation system)](#hijiri-peer-reputation-system)
+- [Iroha Modules](#iroha-modules)
+- [Iroha Special Instructions (ISI)](#iroha-special-instructions-isi)
+ - [Utility Iroha Special Instructions](#utility-iroha-special-instructions)
+ - [Core Iroha Special Instructions](#core-iroha-special-instructions)
+ - [Domain-specific Iroha Special Instructions](#domain-specific-iroha-special-instructions)
+ - [Custom Iroha Special Instruction](#custom-iroha-special-instruction)
+- [Iroha Query](#iroha-query)
+- [View change](#view-change)
+- [World state view (WSV)](#world-state-view-wsv)
+- [Leader](#leader)
+
+## Blockchain ledgers
+
+Blockchain ledgers are digital record-keeping systems that use blockchain
+technology to keep financial records. These are named after old-fashioned
+books that were used for financial records such as prices, news, and
+transaction information.
+
+During medieval times, ledger books were open for public viewing and
+accuracy verification. This idea is reflected in the blockchain-based
+systems that can check the stored data for validity.
+
+## Peer
+
+A peer in Iroha means an Iroha process instance to which other Iroha processes
+and client applications can connect.
+A single machine can host several Iroha peers.
+Peers are equal regarding their resources and capabilities,
+with an important exception: only one of the peers runs
+the genesis block at the bootstrapping stage of the Iroha network.
+
+Other blockchains may refer to the same concept as a node or a validator.
+
+A peer can be a process on its host system. It can also run inside a Docker
+container or a Kubernetes pod.
+
+## Asset
+
+In the context of blockchains, an asset is the representation of a valuable
+object on the blockchain.
+
+Additional information on assets is available
+[here](/blockchain/assets.md).
+
+### Fungible assets
+
+Such assets can be easily swapped for other assets of the same type because
+they are interchangeable.
+
+As an example, all units of the same currency are equal in their value and
+can be used to purchase goods. Typically, fungible assets are identical in
+appearance, aside from the wear of banknotes and coins.
+
+### Non-fungible assets
+
+Non-fungible assets are unique and valuable due to their specific
+characteristics and rarity; their value cannot be compared to other assets.
+
+- A painting's value can vary based on the artist, the time period it was
+ painted, and the public's interest in it.
+- Two houses on the same street may have differing levels of maintenance.
+- Jewellery manufacturers typically offer a range of different designs.
+
+### Mintable assets
+
+An asset is mintable if more of the same type can be issued.
+
+### Non-mintable assets
+
+If the initial amount of an asset is specified once and doesn't change, it
+is considered non-mintable.
+
+The [Genesis block](/guide/configure/genesis.md) sets this information for
+the Iroha configuration.
+
+## Byzantine fault-tolerance (BFT)
+
+The property of being able to properly function with a network containing a
+certain percentage of malicious actors. Iroha is capable of functioning
+with up to 33% malicious actors in its peer-to-peer network.
+
+## Iroha Components
+
+Rust modules containing Iroha functionality.
+
+### Sumeragi (Emperor)
+
+The Iroha module responsible for consensus.
+
+### Torii (Gate)
+
+Module with the incoming request handling logic for the [peer](#peer). It is
+used to receive, accept, and route incoming instructions and HTTP queries,
+as well as run-time configuration updates.
+
+### Kura (Warehouse)
+
+Persistent block storage. Kura stores signed blocks, block hashes, height
+indexes, recovery sidecars, and commit-roster metadata on disk. The
+[World State View](#world-state-view-wsv) is rebuilt from Kura blocks when a
+state snapshot is unavailable or behind the local block store. See
+[Kura storage](/blockchain/world.md#kura-storage).
+
+### Kagami (Teacher and Exemplar and/or looking glass)
+
+Generator for commonly used data. It can generate cryptographic key pairs,
+genesis blocks, documentation, etc.
+
+### Merkle tree (hash tree)
+
+A data structure used to validate and verify the state at each block
+height. Iroha's current implementation is a binary tree. See
+[Wikipedia](https://en.wikipedia.org/wiki/Merkle_tree) for more details.
+
+### Smart contracts
+
+Smart contracts are blockchain-based programs that run when a specific set
+of conditions is met. In Iroha, smart contracts are implemented using
+[core Iroha special instructions](#core-iroha-special-instructions).
+
+### Triggers
+
+An event type that allows invoking an Iroha special instruction at a
+specific block commit, at a specific time (with some caveats), etc. More on
+triggers [here](/blockchain/triggers.md).
+
+### Versioning
+
+Each request is labelled with the API version to which it belongs. It
+allows a combination of different binary versions of Iroha client/peer
+software to interoperate, which in turn allows software upgrades in the
+Iroha network.
+
+### Hijiri (peer reputation system)
+
+Iroha's reputation system. It allows prioritising communication with [peers](#peer)
+that have a good track-record, and reducing the harm that can be caused by
+malicious [peers](#peer).
+
+## Iroha Modules
+
+Third party extensions to Iroha that provide custom functionality.
+
+## Iroha Special Instructions (ISI)
+
+A library of smart contracts provided with Iroha. These can be invoked via
+either transactions or registered event listeners. More on ISI
+[here](/blockchain/instructions.md).
+
+### Utility Iroha Special Instructions
+
+This set of [ISI](#iroha-special-instructions-isi) contains logical
+instructions like `If`, I/O-related ones like `Notify`, and compositions
+like `Sequence`. They are mostly used as
+[custom instructions](#custom-iroha-special-instruction).
+
+### Core Iroha Special Instructions
+
+[Special instructions](#iroha-special-instructions-isi) provided with every
+Iroha deployment. These include some
+[domain-specific](#domain-specific-iroha-special-instructions) as well as
+[utility instructions](#utility-iroha-special-instructions).
+
+### Domain-specific Iroha Special Instructions
+
+Instructions related to domain-specific activities (assets, accounts,
+domains, peer management). These provide the tools necessary to make
+changes to the [World State View](#world-state-view-wsv) in a secure and
+safe manner.
+
+### Custom Iroha Special Instruction
+
+Instructions provided in [Iroha Modules](#iroha-modules), by clients or 3rd
+parties. These can only be built using
+[the Core Instructions](#core-iroha-special-instructions). Forking and
+modifying the Iroha source code is not recommended, as special instructions
+not agreed-upon by [peers](#peer) in an Iroha deployment will be treated as faults,
+thus [peers](#peer) running a modified instance will have their access revoked.
+
+## Iroha Query
+
+A request to read the World State View without modifying said view. More on
+queries [here](/blockchain/queries.md).
+
+## View change
+
+A process that takes place in case of a failed attempt at consensus.
+Usually this entails the election of a new [Leader](#leader).
+
+## World state view (WSV)
+
+In-memory representation of the current blockchain state. The WSV contains
+the `World`, committed block hashes, transaction indexes, consensus topology,
+and derived indexes used by queries. It is updated only through committed
+blocks and can be reconstructed from [Kura](#kura-warehouse). See
+[World State View](/blockchain/world.md#world-state-view-wsv).
+
+## Leader
+
+In an Iroha network, a peer is selected randomly and granted the special
+privilege of forming the next block. This privilege can be revoked in
+networks that achieve
+[Byzantine fault-tolerance](#byzantine-fault-tolerance-bft) via
+[view change](#view-change).
diff --git a/src/reference/index.md b/src/reference/index.md
new file mode 100644
index 000000000..65147ffbd
--- /dev/null
+++ b/src/reference/index.md
@@ -0,0 +1,18 @@
+# Reference
+
+This section tracks the current operator-facing reference material for the
+Iroha 3 docs set.
+
+Start here for:
+
+- [Working with Iroha binaries](/reference/binaries.md)
+- [Genesis reference](/reference/genesis.md)
+- [Torii endpoints](/reference/torii-endpoints.md)
+- [Torii API console](/reference/torii-api-console.md)
+- [Norito](/reference/norito.md)
+- [Compatibility matrix](/reference/compatibility-matrix.md)
+
+For the broader upstream documentation map, see:
+
+- [docs/README.md](https://github.com/hyperledger-iroha/iroha/blob/i23-features/docs/README.md)
+- [docs/source/README.md](https://github.com/hyperledger-iroha/iroha/blob/i23-features/docs/source/README.md)
diff --git a/src/reference/instructions.md b/src/reference/instructions.md
new file mode 100644
index 000000000..c7798f300
--- /dev/null
+++ b/src/reference/instructions.md
@@ -0,0 +1,79 @@
+# Iroha Special Instructions
+
+The current data model exposes these built-in instruction families:
+
+| Instruction | Variants |
+| --- | --- |
+| [`RegisterBox`](/blockchain/instructions.md#un-register) | `Domain`, `Account`, `AssetDefinition`, `Nft`, `Role`, `Trigger`, `RegisterPeerWithPop` |
+| [`UnregisterBox`](/blockchain/instructions.md#un-register) | `Peer`, `Domain`, `Account`, `AssetDefinition`, `Nft`, `Role`, `Trigger` |
+| [`MintBox`](/blockchain/instructions.md#mint-burn) | numeric `Asset`, trigger repetitions |
+| [`BurnBox`](/blockchain/instructions.md#mint-burn) | numeric `Asset`, trigger repetitions |
+| [`TransferBox`](/blockchain/instructions.md#transfer) | `Domain`, `AssetDefinition`, numeric `Asset`, `Nft` |
+| [`SetKeyValueBox`](/blockchain/instructions.md#setkeyvalue-removekeyvalue) | `Domain`, `Account`, `AssetDefinition`, `Nft`, `Trigger` metadata |
+| [`RemoveKeyValueBox`](/blockchain/instructions.md#setkeyvalue-removekeyvalue) | `Domain`, `Account`, `AssetDefinition`, `Nft`, `Trigger` metadata |
+| [`GrantBox`](/blockchain/instructions.md#grant-revoke) | permission to account, role to account, permission to role |
+| [`RevokeBox`](/blockchain/instructions.md#grant-revoke) | permission from account, role from account, permission from role |
+| [`SetParameter`](/blockchain/instructions.md#setparameter) | chain parameter update |
+| [`ExecuteTrigger`](/blockchain/instructions.md#executetrigger) | trigger execution |
+| [`Upgrade`](/blockchain/instructions.md#other-instructions) | executor upgrade |
+| [`Log`](/blockchain/instructions.md#other-instructions) | executor log entry |
+| [`CustomInstruction`](/blockchain/instructions.md#other-instructions) | executor-specific JSON payload |
+
+Additional Iroha 3 modules may register domain-specific instruction types
+through the instruction registry. For the schema-level list generated from the
+current source tree, see [Data Model Schema](./data-model-schema.md).
+
+::: details Diagram: Core Instruction Families
+
+```mermaid
+classDiagram
+direction LR
+
+class InstructionBox {
+ RegisterBox
+ UnregisterBox
+ MintBox
+ BurnBox
+ TransferBox
+ SetKeyValueBox
+ RemoveKeyValueBox
+ GrantBox
+ RevokeBox
+ SetParameter
+ ExecuteTrigger
+ Upgrade
+ Log
+ CustomInstruction
+}
+
+class RegisterBox {
+ Domain
+ Account
+ AssetDefinition
+ Nft
+ Role
+ Trigger
+ RegisterPeerWithPop
+}
+
+class TransferBox {
+ Domain
+ AssetDefinition
+ Asset
+ Nft
+}
+
+class MetadataBoxes {
+ Domain
+ Account
+ AssetDefinition
+ Nft
+ Trigger
+}
+
+InstructionBox --> RegisterBox
+InstructionBox --> TransferBox
+InstructionBox --> MetadataBoxes
+```
+
+:::
diff --git a/src/reference/irohad-cli.md b/src/reference/irohad-cli.md
new file mode 100644
index 000000000..2a5e1781c
--- /dev/null
+++ b/src/reference/irohad-cli.md
@@ -0,0 +1,142 @@
+# `irohad` CLI
+
+`irohad` starts an Iroha peer daemon. The same crate also builds
+track-specific daemon aliases:
+
+- `iroha3d` for the Iroha 3 build line
+- `iroha2d` for the Iroha 2 build line
+
+Use `irohad` or `iroha3d` for current Iroha 3 examples unless a deployment
+script intentionally pins the Iroha 2 profile.
+
+```shell
+irohad --config path/to/config.toml
+```
+
+## `--config` {#arg-config}
+
+- **Type:** File Path
+- **Alias:** `-c`
+
+Path to the [configuration](/reference/peer-config/index.md) file.
+
+```shell
+irohad --config path/to/iroha.toml
+```
+
+## `--genesis-manifest-json` {#arg-genesis-manifest-json}
+
+- **Type:** File Path
+
+Optional path to a genesis manifest JSON file. Use this when the deployment
+validates startup against a manifest generated by Kagami.
+
+```shell
+irohad --config path/to/iroha.toml --genesis-manifest-json path/to/genesis.manifest.json
+```
+
+## `--trace-config` {#arg-trace-config}
+
+Enables trace logs of configuration reading and parsing. Might be useful for configuration troubleshooting.
+
+- **Type:** flag
+- **ENV:** `TRACE_CONFIG`
+
+```shell
+irohad --trace-config
+```
+
+## `--terminal-colors` {#arg-terminal-colors}
+
+- **Type:** Boolean, either `--terminal-colors=false` or
+ `--terminal-colors=true`
+- **Default:** auto-detect terminal support
+- **ENV:** `TERMINAL_COLORS`
+
+Whether to enable ANSI-colored output or not.
+
+By default, Iroha determines whether the terminal supports colored output
+or not.
+
+To explicitly disable colours:
+
+```shell
+irohad --terminal-colors=false
+
+# or via env
+
+export TERMINAL_COLORS=false
+irohad
+```
+
+## `--language` {#arg-language}
+
+- **Type:** String
+
+Override the system language used for daemon messages.
+
+```shell
+irohad --language en-US
+```
+
+## `--sora` {#arg-sora}
+
+- **Type:** flag
+- **ENV:** `IROHA_SORA_PROFILE`
+
+Enable the Sora Nexus feature profile for SoraFS, the SoraNet handshake, and
+multi-lane consensus flows.
+
+```shell
+irohad --config path/to/iroha.toml --sora
+```
+
+## `--fastpq-execution-mode` {#arg-fastpq-execution-mode}
+
+- **Type:** `auto`, `cpu`, or `gpu`
+
+Override FASTPQ prover execution mode.
+
+```shell
+irohad --fastpq-execution-mode auto
+```
+
+## `--fastpq-poseidon-mode` {#arg-fastpq-poseidon-mode}
+
+- **Type:** `auto`, `cpu`, or `gpu`
+
+Override FASTPQ Poseidon pipeline mode.
+
+```shell
+irohad --fastpq-poseidon-mode cpu
+```
+
+## `--fastpq-device-class` {#arg-fastpq-device-class}
+
+- **Type:** String
+
+Override the FASTPQ telemetry device-class label.
+
+```shell
+irohad --fastpq-device-class apple-m4
+```
+
+## `--fastpq-chip-family` {#arg-fastpq-chip-family}
+
+- **Type:** String
+
+Override the FASTPQ telemetry chip-family label.
+
+```shell
+irohad --fastpq-chip-family m4
+```
+
+## `--fastpq-gpu-kind` {#arg-fastpq-gpu-kind}
+
+- **Type:** String
+
+Override the FASTPQ telemetry GPU-kind label.
+
+```shell
+irohad --fastpq-gpu-kind integrated
+```
diff --git a/src/reference/naming.md b/src/reference/naming.md
new file mode 100644
index 000000000..f7dc85502
--- /dev/null
+++ b/src/reference/naming.md
@@ -0,0 +1,40 @@
+# Naming Conventions
+
+When you are naming accounts, domains, or assets, you have to keep in mind
+the following conventions used in Iroha:
+
+1. There are a number of reserved separators that are used for specific
+ types of constructs:
+
+ - `@` is reserved for account aliases and scoped account/public-key forms
+ - `#` is reserved for asset definition aliases and asset balance literals
+ - `::` is reserved for contract aliases
+ - `.` is reserved for domain and dataspace qualification
+ - `$` is reserved for trigger-scoped textual forms
+ - `%` is reserved for validator-scoped textual forms
+
+2. The maximum number of characters (including UTF-8 characters) a name can
+ have is limited by two factors: the `u32::MAX` length cap and the
+ currently allocated stack space.
+
+## Try It on Taira
+
+Resolve a public asset alias into its canonical asset definition ID:
+
+```bash
+curl -fsS https://taira.sora.org/v1/assets/aliases/resolve \
+ -H 'content-type: application/json' \
+ -d '{"alias":"usd#wonderland"}' \
+ | jq '{alias, asset_definition_id, asset_name, status: .alias_binding.status}'
+```
+
+Compare that with the asset definition list:
+
+```bash
+curl -fsS 'https://taira.sora.org/v1/assets/definitions?limit=20' \
+ | jq -r '.items[] | select(.alias != null) | [.alias, .id, .name] | @tsv'
+```
+
+The `#` character separates an asset alias from the domain context. Keep it out
+of plain names unless you are intentionally writing an asset alias or asset
+balance literal.
diff --git a/src/reference/norito.md b/src/reference/norito.md
new file mode 100644
index 000000000..4f569a97a
--- /dev/null
+++ b/src/reference/norito.md
@@ -0,0 +1,258 @@
+# Norito
+
+Norito is Iroha's canonical serialization layer. It is the byte format used
+when peers, SDKs, CLI tools, Torii, Kura, and generated artifacts need to agree
+on exactly the same payload.
+
+Use Norito when the data is part of consensus, signing, hashing, persistence,
+or cross-SDK interoperability. Use JSON when an endpoint explicitly offers a
+human-readable projection for operators, dashboards, or quick debugging.
+
+## Where Norito Appears
+
+| Surface | How Norito is used |
+| --- | --- |
+| Transactions and queries | Signed transaction and query payloads submitted through Torii are encoded as Norito. |
+| Genesis | `kagami genesis sign` produces a signed `.nrt` block that peers load at startup. |
+| Torii typed responses | Endpoints that support typed binary responses use `Accept: application/x-norito`. |
+| SDKs | Rust, Python, JavaScript, Kotlin/Java, Swift, and Android clients use Norito builders or bindings instead of hand-assembled bytes. |
+| Kura storage | Block payloads, recovery sidecars, rosters, and commit markers are stored as Norito-framed data. |
+| Manifests | Nexus, data availability, SoraFS, streaming, and app-facing manifests use Norito when the manifest must be signed or hashed. |
+| Streaming | Norito Streaming uses Norito manifests, segment headers, control frames, and conformance fixtures. |
+
+Norito is not a smart-contract language. It is the deterministic envelope and
+codec that carries transactions, contract calls, manifests, and typed API
+payloads.
+
+## Payload Model
+
+Every on-wire or on-disk Norito payload is framed by a header followed by the
+encoded payload bytes. Headerless, or bare, payloads are reserved for internal
+hashing, benchmarks, and helper APIs that immediately wrap the result in a
+header before transport.
+
+| Header field | Size | Purpose |
+| --- | ---: | --- |
+| Magic | 4 bytes | ASCII `NRT0`, used to reject non-Norito data early. |
+| Major | 1 byte | Format major version. Current payloads use `0`. |
+| Minor | 1 byte | Fixed v1 decode hint. Current payloads use `0x00`; layout choices live in flags. |
+| Schema hash | 16 bytes | Type identity used by typed decoders to reject unexpected payloads. |
+| Compression | 1 byte | `0 = None`, `1 = Zstd`. Unknown values are rejected. |
+| Payload length | 8 bytes | Uncompressed payload length as little-endian `u64`. |
+| CRC64 | 8 bytes | CRC64-XZ checksum of the uncompressed payload. |
+| Flags | 1 byte | Layout flags for compact lengths, packed sequences, and packed structs. |
+
+The header is 40 bytes. Decoders validate the magic, version, supported flag
+mask, payload length, checksum, and schema hash before reconstructing the
+typed value.
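+
+Because the field sizes above are fixed, the header fields sit at fixed
+offsets, so a quick sanity check on any header-framed artifact (the file
+name here is an example) is a 40-byte hex dump:
+
+```bash
+# Magic "NRT0" at offset 0x00, compression byte at 0x16, little-endian
+# payload length at 0x17, CRC64 at 0x1f, flags byte at 0x27.
+head -c 40 genesis.signed.nrt | xxd
+```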
+
+## Layout Flags
+
+Norito stores layout choices in the final header byte. The default v1 helpers
+emit `COMPACT_LEN` (`0x02`) for compact per-value length prefixes. Legacy
+fixed-width length prefixes remain readable when callers explicitly encode
+with `flags = 0x00`.
+
+| Flag | Hex | Status | Effect |
+| --- | ---: | --- | --- |
+| `PACKED_SEQ` | `0x01` | Supported | Encodes variable-sized collections with an offset table plus a contiguous data block. |
+| `COMPACT_LEN` | `0x02` | Default | Uses canonical unsigned varints for per-value length prefixes. |
+| `PACKED_STRUCT` | `0x04` | Supported | Encodes derive-generated structs as packed field payloads. |
+| `VARINT_OFFSETS` | `0x08` | Reserved | Rejected in v1; packed-sequence offsets are fixed-width `u64`. |
+| `COMPACT_SEQ_LEN` | `0x10` | Reserved | Rejected in v1; top-level sequence length headers are fixed-width `u64`. |
+| `FIELD_BITSET` | `0x20` | Supported with requirements | Adds a bitset for packed structs so only fields that need explicit sizes carry size prefixes. Requires `PACKED_STRUCT` and `COMPACT_LEN`. |
+
+The flags are explicit. Decoders do not infer layout from payload shape,
+version minor, or heuristics. Unknown or invalid combinations are rejected so
+that all peers interpret a payload the same way.
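+
+For example, a packed struct that uses a field bitset with compact lengths
+must set all three of the corresponding bits, which is easy to verify in a
+shell:
+
+```bash
+# FIELD_BITSET requires PACKED_STRUCT and COMPACT_LEN; the combined byte is:
+printf '0x%02x\n' $(( 0x20 | 0x04 | 0x02 ))   # prints 0x26
+```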
+
+## Encoding Rules
+
+Norito uses deterministic layouts for the common data shapes that appear in
+the Iroha data model:
+
+- Strings are `[len][utf8-bytes]`; `len` follows `COMPACT_LEN` when enabled.
+- Per-value lengths use compact varints when `COMPACT_LEN` is set, otherwise
+ fixed 8-byte little-endian `u64`.
+- Sequence length headers are fixed 8-byte little-endian `u64` in v1.
+- `Vec<u8>` is encoded as `[len_u64][raw-bytes]` instead of one length per byte.
+- Packed sequences use `(len + 1)` monotonic `u64` offsets followed by the
+ concatenated element payloads.
+- Maps encode entry counts with fixed `u64` and use deterministic key order.
+ `HashMap` entries are sorted by key before encoding; `BTreeMap` uses its
+ natural order.
+- `BigInt` uses little-endian two's-complement bytes with a `u32` byte length
+ and a 512-bit cap.
+- `Numeric` is encoded as `(mantissa, scale)`, where the mantissa stores the
+ integer value and scale stores the number of fractional digits.
+
+These rules matter for signatures and hashes. Two SDKs that build the same
+logical transaction must produce the same canonical bytes.
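+
+As a tiny worked example of the string rule with `COMPACT_LEN` set, the
+two-byte string `hi` encodes as a length prefix (a single varint byte for
+small values) followed by its UTF-8 bytes:
+
+```bash
+# 0x02 (length 2) followed by the bytes of "hi"; xxd shows: 0268 69  .hi
+printf '\x02hi' | xxd
+```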
+
+## Schema Hashes
+
+Typed Norito payloads carry a 16-byte schema hash in the header. The default
+hash is derived from the fully qualified type name. Builds that enable
+structural schema hashing derive the hash from the canonical schema instead.
+
+Typed decoders reject schema mismatches. This protects clients from
+accidentally decoding a valid Norito frame as the wrong type; a schema-hash
+mismatch is also the usual failure mode when an SDK fixture bundle drifts
+from the node data model.
+
+## Compression and Acceleration
+
+Norito supports explicit and adaptive compression without changing the logical
+payload:
+
+| Feature | Purpose |
+| --- | --- |
+| `to_bytes` | Encode an uncompressed header-framed payload. |
+| `to_compressed_bytes` | Encode with Zstd and record the compression tag in the header. |
+| `to_bytes_auto` | Apply deterministic heuristics to decide whether compression is worthwhile. |
+| CRC64 acceleration | Uses portable CRC64-XZ everywhere, with CLMUL on x86_64 or PMULL on aarch64 when available. |
+| GPU CRC64 and compression | Optional Metal or CUDA helpers may accelerate large payloads, then fall back to CPU paths. |
+
+Hardware acceleration never changes the decoded content. CRC and JSON
+accelerators must match portable output bit-for-bit. Zstd frame bytes may
+differ between CPU and GPU encoders, but the decoded payload and Norito header
+metadata remain deterministic for validation.
+
+## JSON Support
+
+Norito includes a native JSON stack for endpoints and tooling that need JSON
+without leaving the Norito type system.
+
+| JSON feature | Use case |
+| --- | --- |
+| `norito::json::{to_json, from_json}` | Deterministic typed JSON encode/decode. |
+| Pretty and writer helpers | CLI output, fixtures, and streaming `std::io` integration. |
+| DOM values | Programmatic manipulation through Norito's JSON value model. |
+| Fast typed JSON | Structural-tape based decode/encode for hot DTO paths. |
+| Zero-copy reader | Token scanning that borrows strings from the input where possible. |
+| Stage-1 accelerators | Optional AVX2, NEON, Metal, or CUDA structural indexing with scalar fallback. |
+
+Iroha code should prefer `norito::json` helpers for typed API payloads. Adding
+plain `serde_json` to production paths risks diverging from the schema and
+field-handling behavior expected by SDKs and Torii extractors.
+
+## Derive Support
+
+Rust data types generally use derive macros rather than manual codec code.
+The derive layer can generate Norito binary codecs, schemas, and JSON helpers.
+
+Common field attributes are:
+
+| Attribute | Effect |
+| --- | --- |
+| `#[norito(rename = "other")]` | Uses a stable serialized name for schema and JSON compatibility. |
+| `#[norito(skip)]` | Omits the field and fills it from `Default` while decoding. |
+| `#[norito(default)]` | Uses `Default` when a decoded payload does not carry the field. |
+| `#[norito(skip_serializing_if = "...")]` | Omits fields from JSON when the predicate matches, while preserving deterministic decoding defaults. |
+
+Derives also expose encoded-length hints and exact-length calculations where
+possible. Encoders use those hints to reserve buffers and avoid extra copies.
+
+## Crate Feature Families
+
+When building Iroha or SDK bindings from source, Norito features select which
+helpers and accelerators are available:
+
+| Feature family | What it enables |
+| --- | --- |
+| `derive` | Re-exported procedural macros for binary, schema, and JSON derives. |
+| `compression` | Zstd support for header-framed payloads. |
+| `packed-seq` | Packed collection layouts using offset tables. |
+| `packed-struct` | Packed derive-generated struct layouts. |
+| `compact-len` | Varint per-value length prefixes. |
+| `columnar` | Experimental Norito Column Blocks for scan-heavy paths. |
+| `strict-safe` | Converts decode panics in fallible paths into structured errors. |
+| `simd-accel` | CPU acceleration where available, with deterministic fallback. |
+| `json` | Native JSON parser, writer, DOM, typed derives, and fast paths. |
+| `json-std-io` | Reader and writer helpers layered on the JSON stack. |
+| `metal-stage1`, `cuda-stage1` | Optional GPU JSON structural-index backends. |
+| `metal-stage2` | Optional Metal metadata classification for the JSON structural tape. |
+| `metal-crc64`, `cuda-crc64` | Optional GPU CRC64 helpers for large payloads. |
+| `gpu-compression` | Optional Metal or CUDA Zstd acceleration for large payloads. |
+| `stage1-validate` | Debug validation that compares accelerated JSON structural indexes against scalar output. |
+
+Feature availability can differ between SDKs and release profiles. The wire
+format remains governed by the header and schema, not by local build flags.
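+
+For example, a from-source build that wants derive macros, JSON helpers, and
+compression might enable features like this (a sketch; the `norito` package
+name and in-workspace build are assumptions):
+
+```bash
+# Sketch: build the Norito crate with a specific feature set.
+# Feature names come from the table above.
+cargo build -p norito --features derive,json,compression
+```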
+
+## Torii and Norito RPC
+
+Torii exposes JSON for many operator routes, but typed binary routes use
+Norito. The media type for current typed Norito HTTP bodies is
+`application/x-norito`.
+
+Use these headers when an endpoint accepts or returns typed Norito:
+
+```http
+Content-Type: application/x-norito
+Accept: application/x-norito
+```
+
+For clients that can fall back to JSON during rollout, prefer an explicit
+Accept list:
+
+```http
+Accept: application/x-norito, application/json
+```
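+
+A minimal submission sketch with curl, assuming a transaction that has already
+been signed and encoded to a local Norito file (`tx.nrt` and `TORII_URL` are
+hypothetical placeholders; `POST /transaction` is the typed transaction route):
+
+```bash
+# Submit a pre-encoded, signed Norito transaction body.
+curl -fsS -X POST "$TORII_URL/transaction" \
+  -H 'Content-Type: application/x-norito' \
+  -H 'Accept: application/x-norito' \
+  --data-binary @tx.nrt
+```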
+
+Decode failures are surfaced as typed Torii errors and counted by telemetry.
+Common reasons include invalid magic, unsupported version, unsupported feature
+flag, checksum mismatch, malformed UTF-8, invalid enum tag, and schema mismatch.
+
+Norito RPC rollout is usually staged behind transport configuration. Operator
+dashboards should track request latency, failures, active connections,
+response bytes, and `torii_norito_decode_failures_total` separately from JSON
+traffic.
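+
+Because `/metrics` serves Prometheus text, a quick spot check of the
+decode-failure counter needs only curl and grep (`TORII_URL` is a placeholder):
+
+```bash
+# Spot-check Norito decode failures reported by a node.
+curl -fsS "$TORII_URL/metrics" | grep torii_norito_decode_failures_total
+```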
+
+## Norito Streaming
+
+Norito Streaming extends the same deterministic approach to media and realtime
+transport surfaces. Its key pieces are:
+
+| Streaming feature | Purpose |
+| --- | --- |
+| Manifests | Declare segment commitments, privacy routes, capabilities, codec profile, encryption suite, and content key metadata. |
+| Segment headers | Bind segment number, duration, chunk count, timing, entropy mode, audio summary, and Merkle roots. |
+| Chunk commitments | Let viewers and relays verify payload chunks against the manifest before serving or decoding. |
+| Control frames | Carry manifest announcements, feedback, key updates, and capability negotiation. |
+| HPKE key updates | Rotate transport secrets using the negotiated suite and monotonically increasing counters. |
+| Capability negotiation | Intersects supported feature bits, datagram limits, feedback cadence, and privacy requirements. |
+| FEC and feedback | Use deterministic receiver reports and parity decisions on lossy realtime paths. |
+| Conformance vectors | Cross-language fixtures that prove SDKs decode the same manifests, segments, and entropy streams. |
+
+Streaming-specific codecs and entropy profiles are separate from the core
+Norito transaction/query format, but their manifests and control data still use
+Norito so routing, billing, replay, and audit evidence stay reproducible.
+
+## Operational Guidance
+
+- Prefer SDK builders and generated bindings over hand-crafted Norito bytes.
+- Treat schema mismatch as a version or fixture problem, not as a transient
+ network failure.
+- Keep `.nrt`, `.norito`, and manifest artifacts with the release or incident
+ bundle that produced them.
+- Use JSON projections for dashboards and manual inspection, but keep Norito as
+ the source of truth for signed, hashed, or persisted data.
+- When adding a new typed Torii endpoint, document whether it accepts JSON,
+ Norito, or both, and expose the supported content types in `/openapi`.
+- When enabling accelerators, run parity tests against scalar output before
+ rollout. Accelerator failures should fall back cleanly rather than changing
+ payload semantics.
+
+## Related Pages
+
+- [Torii endpoints](/reference/torii-endpoints.md)
+- [Genesis reference](/reference/genesis.md)
+- [Data model schema](/reference/data-model-schema.md)
+- [JavaScript / TypeScript SDK](/guide/tutorials/javascript.md)
+- [Python SDK](/guide/tutorials/python.md)
+- [Swift and iOS SDK](/guide/tutorials/swift.md)
+
+## Upstream References
+
+- [Norito format specification](https://github.com/hyperledger-iroha/iroha/blob/i23-features/norito.md)
+- [Norito crate README](https://github.com/hyperledger-iroha/iroha/blob/i23-features/crates/norito/README.md)
+- [Norito streaming design notes](https://github.com/hyperledger-iroha/iroha/blob/i23-features/docs/source/norito_streaming.md)
diff --git a/src/reference/peer-config/MigrationSnapshotModeTable.vue b/src/reference/peer-config/MigrationSnapshotModeTable.vue
new file mode 100644
index 000000000..8672b1316
--- /dev/null
+++ b/src/reference/peer-config/MigrationSnapshotModeTable.vue
@@ -0,0 +1,27 @@
+<template>
+  <table>
+    <thead>
+      <tr>
+        <th>Before: <code>creation_enabled</code></th>
+        <th>After: <code>mode</code></th>
+      </tr>
+    </thead>
+    <tbody>
+      <tr>
+        <td><code>false</code></td>
+        <td><code>read_write</code></td>
+      </tr>
+      <tr>
+        <td><code>true</code></td>
+        <td><code>readonly</code></td>
+      </tr>
+    </tbody>
+  </table>
+</template>
diff --git a/src/reference/peer-config/ParamTable.vue b/src/reference/peer-config/ParamTable.vue
new file mode 100644
index 000000000..2794187a2
--- /dev/null
+++ b/src/reference/peer-config/ParamTable.vue
@@ -0,0 +1,88 @@
+<template>
+  <dl>
+    <dt>Type:</dt>
+    <dd>
+      <!-- The rendered label depends on the parameter's declared type: -->
+      <!-- String -->
+      <!-- String, file path (relative to the config file or CWD) -->
+      <!-- String, public key multihash -->
+      <!-- String, socket address (host/IPv4/IPv6 + port) -->
+      <!-- Number -->
+      <!-- Number, duration in milliseconds -->
+      <!-- fallback: todo {{ type }} -->
+    </dd>
+    <dt>Default:</dt>
+    <dd>
+      <code>{{ defaultValue }}</code>
+      <span v-if="defaultNote">({{ defaultNote }})</span>
+    </dd>
+    <dt>Env:</dt>
+    <dd><code>{{ env }}</code></dd>
+  </dl>
+</template>
diff --git a/src/reference/peer-config/index.md b/src/reference/peer-config/index.md
new file mode 100644
index 000000000..2c7138078
--- /dev/null
+++ b/src/reference/peer-config/index.md
@@ -0,0 +1,65 @@
+# Configuring Iroha
+
+Local peer configuration is set via environment variables and/or TOML files. Note that this is different from on-chain
+configuration, which is changed through [`SetParameter`](/blockchain/instructions.md#setparameter)
+instructions.
+
+Use the [`--config`](../irohad-cli#arg-config) CLI argument to specify the path to the configuration file.
+
+## Template
+
+For a detailed description of each parameter, please refer to the [Parameters](./params.md) reference.
+
+::: details `peer.template.toml`
+
+<<< @/snippets/peer.template.toml
+
+:::
+
+## Composing configuration files
+
+TOML configuration files support an additional `extends` field pointing to other TOML file(s). It can be a single path or
+multiple paths:
+
+::: code-group
+
+```toml [Single]
+extends = "single-path.toml"
+```
+
+```toml [Multiple]
+extends = ["file1.toml", "file2.toml"]
+```
+
+:::
+
+Iroha will recursively read all files specified in `extends` and compose them into layers, where later ones overwrite
+earlier ones at the parameter level. For example, if reading `config.toml`:
+
+::: code-group
+
+```toml [config.toml]
+extends = ["a.toml", "b.toml"]
+
+[torii]
+address = "0.0.0.0:8080"
+```
+
+```toml [a.toml]
+chain = "whatever"
+```
+
+```toml [b.toml]
+[torii]
+address = "localhost:4000"
+max_content_len = 2048
+```
+
+:::
+
+The resulting configuration takes `chain` from `a.toml`, `max_content_len` from `b.toml`, and `torii.address` from
+`config.toml` (which overwrites the value from `b.toml`).
+
+## Troubleshooting
+
+Pass the [`--trace-config`](../irohad-cli#arg-trace-config) CLI flag to see a trace of how the configuration is read and parsed.
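+
+For example, a sketch using the composed `config.toml` from the section above:
+
+```shell
+# Start the peer and print how each configuration layer was read and merged.
+irohad --config ./config.toml --trace-config
+```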
diff --git a/src/reference/peer-config/migration.md b/src/reference/peer-config/migration.md
new file mode 100644
index 000000000..96f6c6fc5
--- /dev/null
+++ b/src/reference/peer-config/migration.md
@@ -0,0 +1,215 @@
+
+
+# Migrate Peer Configuration
+
+Use this page when moving an older Iroha 2 deployment from the pre-RC JSON
+configuration and daemon-side genesis submission flow to the current TOML
+configuration used by the Iroha 2 and Iroha 3 build lines.
+
+Current peers expect:
+
+- `irohad`, `iroha2d`, or `iroha3d` with `--config`
+- TOML configuration files, optionally composed with `extends`
+- a signed genesis block referenced by `genesis.file`
+- on-chain parameters set through genesis instructions or transactions rather
+ than local peer config
+
+For a new local network, prefer `kagami localnet`. Use this page for existing
+deployments that still have old environment variables, JSON config files, or
+unsigned `genesis.json` startup flows.
+
+## CLI and Environment
+
+The **After** column lists the current environment variable when a direct
+replacement exists. Environment variables without an **After** entry were
+removed or moved to on-chain configuration.
+
+| Before | After |
+| ----------------------------------: |--------------------------------------------------------------------|
+| `IROHA2_CONFIG_PATH` | removed, use [`--config`](../irohad-cli#arg-config) instead |
+| `IROHA2_GENESIS_PATH` | [`GENESIS`](params#param-genesis-file) |
+| `IROHA_PUBLIC_KEY` | [`PUBLIC_KEY`](params#param-public-key) |
+| `IROHA_PRIVATE_KEY` | [`PRIVATE_KEY`](params#param-private-key) |
+| `TORII_P2P_ADDR` | [`P2P_ADDRESS`](params#param-network-address) |
+| `IROHA_GENESIS_ACCOUNT_PUBLIC_KEY` | [`GENESIS_PUBLIC_KEY`](params#param-genesis-public-key) |
+| `IROHA_GENESIS_ACCOUNT_PRIVATE_KEY` | removed; genesis block is signed with it outside of Iroha |
+| `TORII_API_URL` | [`API_ADDRESS`](params#param-torii-address) |
+| `KURA_INIT_MODE` | [same](params#param-kura-init-mode) |
+| `KURA_BLOCK_STORE_PATH` | [`KURA_STORE_DIR`](params#param-kura-store-dir) |
+| `KURA_DEBUG_OUTPUT_NEW_BLOCKS` | [same](params#param-kura-debug-output-new-blocks) |
+| `MAX_LOG_LEVEL` | [`LOG_LEVEL`](params#param-logger-level) |
+| `COMPACT_MODE` | removed, see [`LOG_FORMAT`](params#param-logger-format) |
+| `TERMINAL_COLORS` | same, see [`--terminal-colors`](../irohad-cli#arg-terminal-colors) |
+| `SNAPSHOT_CREATION_ENABLED` | removed, see [`SNAPSHOT_MODE`](params#param-snapshot-mode) |
+| `SNAPSHOT_DIR_PATH` | [`SNAPSHOT_STORE_DIR`](params#param-snapshot-store-dir) |
+| `SUMERAGI_TRUSTED_PEERS` | [same](params#param-trusted-peers) |
+| ...all other ones | removed |
+
+## Configuration Parameters
+
+New mandatory parameters:
+
+- [`chain`](params#param-chain-id)
+- [`network.public_address`](params#param-network-public-address)
+
+List of all old parameters:
+
+- Root parameters: see [Root-Level Params](params#root)
+ - `PRIVATE_KEY`: became [`private_key`](params#param-private-key)
+ - `PUBLIC_KEY`: became [`public_key`](params#param-public-key)
+- ~~`BLOCK_SYNC`~~: section removed
+ - ~~`ACTOR_CHANNEL_CAPACITY`~~: removed
+ - `BLOCK_BATCH_SIZE`: became
+ [`network.block_gossip_size`](params#param-network-block-gossip-size)
+ - `GOSSIP_PERIOD_MS`: became
+ [`network.block_gossip_period_ms`](params#param-network-block-gossip-period-ms)
+- ~~`DISABLE_PANIC_TERMINAL_COLORS`~~: removed
+- `GENESIS`: see [Genesis Params](params#genesis)
+ - `ACCOUNT_PRIVATE_KEY`: removed; use it with
+ [`kagami genesis sign`](../genesis.md#sign-the-manifest), then distribute
+ the signed `.nrt` block
+ - `ACCOUNT_PUBLIC_KEY`: became
+ [`genesis.public_key`](params#param-genesis-public-key)
+- `KURA`: see [Kura Params](params#kura)
+ - ~~`ACTOR_CHANNEL_CAPACITY`~~: removed
+ - ~~`BLOCKS_PER_STORAGE_FILE`~~: removed
+ - `BLOCK_STORE_PATH`: became
+ [`kura.store_dir`](params#param-kura-store-dir)
+ - `DEBUG_OUTPUT_NEW_BLOCKS`: became
+ [`kura.debug.output_new_blocks`](params#param-kura-debug-output-new-blocks)
+ - `INIT_MODE`: same, lowercase
+- `LOGGER`: see [Logger Params](params#logger)
+ - ~~`COMPACT_MODE`~~: removed; the equivalent output can now be enabled with
+ [`logger.format = "compact"`](params#param-logger-format)
+ - ~~`LOG_FILE_PATH`~~: removed; use STDOUT redirection instead and enable
+ JSON format with [`logger.format = "json"`](params#param-logger-format)
+ - `MAX_LOG_LEVEL`: became [`logger.log_level`](params#param-logger-level)
+ - ~~`TELEMETRY_CAPACITY`~~: removed
+ - ~~`TERMINAL_COLORS`~~: removed; use [`--terminal-colors`](../irohad-cli#arg-terminal-colors)
+ instead
+- `NETWORK`: see [Network Params](params#network); several parameters from
+ other sections migrated here
+ - ~~`ACTOR_CHANNEL_CAPACITY`~~: removed
+- `QUEUE`: see [Queue Params](params#queue)
+ - `FUTURE_THRESHOLD_MS`: removed
+ - `MAX_TRANSACTIONS_IN_QUEUE`: became
+ [`queue.capacity`](params#param-queue-capacity)
+ - `MAX_TRANSACTIONS_IN_QUEUE_PER_USER`: became
+ [`queue.capacity_per_user`](params#param-queue-capacity-per-user)
+ - `TRANSACTION_TIME_TO_LIVE_MS`: became
+ [`queue.transaction_time_to_live`](params#param-queue-transaction-time-to-live-ms)
+- `SNAPSHOT`: see [Snapshot Params](params#snapshot)
+ - `CREATE_EVERY_MS`: became
+ [`snapshot.create_every_ms`](params#param-snapshot-create-every-ms)
+ - `CREATION_ENABLED`: removed in favour of
+ [`snapshot.mode`](params#param-snapshot-mode); see the mapping:
+
+ <MigrationSnapshotModeTable />
+
+ - `DIR_PATH`: became
+ [`snapshot.store_dir`](params#param-snapshot-store-dir)
+- `SUMERAGI`: see [Sumeragi Params](params#sumeragi)
+ - ~~`ACTOR_CHANNEL_CAPACITY`~~: removed
+ - ~~`BLOCK_TIME_MS`~~: removed[^1]
+ - ~~`COMMIT_TIME_LIMIT_MS`~~: removed[^1]
+ - `GOSSIP_BATCH_SIZE`: became
+ [`network.transaction_gossip_size`](params#param-network-transaction-gossip-size)
+ - `GOSSIP_PERIOD_MS`: became
+ [`network.transaction_gossip_period_ms`](params#param-network-transaction-gossip-period-ms)
+ - ~~`KEY_PAIR`~~: removed
+ - ~~`MAX_TRANSACTIONS_IN_BLOCK`~~: removed[^1]
+ - ~~`PEER_ID`~~: removed
+ - `TRUSTED_PEERS`: [same, lowercase](params#param-trusted-peers)
+- `TELEMETRY`: see [Telemetry Params](params#telemetry)
+ - `FILE`: became [`dev_telemetry.out_file`](./params.md#param-dev-telemetry-out-file)
+ - `MAX_RETRY_DELAY_EXPONENT`: same, lowercase
+ - `MIN_RETRY_PERIOD`: same, lowercase
+ - `NAME`: same, lowercase
+ - `URL`: same, lowercase
+- `TORII`: see [Torii Params](params#torii)
+ - `API_URL`: became [`torii.address`](params#param-torii-address)
+ - ~~`FETCH_SIZE`~~: removed
+ - `MAX_CONTENT_LEN`: same, lowercase
+ - ~~`MAX_TRANSACTION_SIZE`~~: removed
+ - `P2P_ADDR`: became [`network.address`](params#param-network-address)
+ - `QUERY_IDLE_TIME_MS`: became
+ [`torii.query_idle_time_ms`](params#param-torii-query-idle-time-ms)
+- ~~`WSV`~~: removed[^1]
+
+[^1]: moved to on-chain configuration, out of the local configuration file. Use
+ [`SetParameter`](/blockchain/instructions.md#setparameter) for parameter
+ changes that must be part of ledger state.
+
+## Example
+
+**Complete setup before:**
+
+::: code-group
+
+```shell [CLI]
+export IROHA2_CONFIG_PATH=./config.json
+export IROHA2_GENESIS_PATH=./genesis.json
+
+iroha --config ./config.json
+```
+
+```json [Configuration file]
+{
+ "PUBLIC_KEY": "ed01201C61FAF8FE94E253B93114240394F79A607B7FA55F9E5A41EBEC74B88055768B",
+ "PRIVATE_KEY": {
+ "digest_function": "ed25519",
+ "payload": "282ED9F3CF92811C3818DBC4AE594ED59DC1A2F78E4241E31924E101D6B1FB831C61FAF8FE94E253B93114240394F79A607B7FA55F9E5A41EBEC74B88055768B"
+ },
+ "TORII": {
+ "API_URL": "127.0.0.1:8080",
+ "P2P_ADDR": "127.0.0.1:1337"
+ },
+ "GENESIS": {
+ "ACCOUNT_PUBLIC_KEY": "ed01203F4E3E98571B55514EDC5CCF7E53CA7509D89B2868E62921180A6F57C2F4E255",
+ "ACCOUNT_PRIVATE_KEY": {
+ "digest_function": "ed25519",
+ "payload": "038AE16B219DA35AA036335ED0A43C28A2CC737150112C78A7B8034B9D99C9023F4E3E98571B55514EDC5CCF7E53CA7509D89B2868E62921180A6F57C2F4E255"
+ }
+ },
+ "KURA": {
+ "BLOCK_STORE_PATH": "./storage"
+ }
+}
+```
+
+:::
+
+**Complete setup after:**
+
+::: code-group
+
+```shell [CLI]
+cargo run -p iroha_kagami -- genesis sign ./genesis.json \
+ --private-key "038AE16B219DA35AA036335ED0A43C28A2CC737150112C78A7B8034B9D99C9023F4E3E98571B55514EDC5CCF7E53CA7509D89B2868E62921180A6F57C2F4E255" \
+ --out-file ./genesis.signed.nrt
+
+irohad --config ./iroha.toml
+```
+
+```toml [Configuration file]
+chain = "00000000-0000-0000-0000-000000000000"
+public_key = "ea01309060D021340617E9554CCBC2CF3CC3DB922A9BA323ABDF7C271FCC6EF69BE7A8DEBCA7D9E96C0F0089ABA22CDAADE4A2"
+private_key = "8926201CA347641228C3B79AA43839DEDC85FA51C0E8B9B6A00F6B0D6B0423E902973F"
+trusted_peers = []
+trusted_peers_pop = []
+
+[network]
+address = "127.0.0.1:1337"
+public_address = "127.0.0.1:1337"
+
+[torii]
+address = "127.0.0.1:8080"
+
+[kura]
+store_dir = "./storage"
+
+[genesis]
+public_key = "ed01203F4E3E98571B55514EDC5CCF7E53CA7509D89B2868E62921180A6F57C2F4E255"
+file = "./genesis.signed.nrt"
+```
+
+:::
diff --git a/src/reference/peer-config/params.md b/src/reference/peer-config/params.md
new file mode 100644
index 000000000..ab9d79e22
--- /dev/null
+++ b/src/reference/peer-config/params.md
@@ -0,0 +1,865 @@
+---
+outline: [ 2, 3 ]
+---
+
+
+
+# Configuration Parameters
+
+[[toc]]
+
+## Root-Level {#root}
+
+### `chain` {#param-chain-id}
+
+Chain ID that must be included in each transaction. Used to prevent replay attacks.
+
+A replay attack is an attempt to submit a valid transaction to a different
+network than the one it was intended for. Because the `chain` is part of
+the signed transaction payload, a transaction signed for one chain is rejected
+by peers that use another chain ID.
+
+
+
+::: code-group
+
+```toml [Config File]
+chain = "00000000-0000-0000-0000-000000000000"
+```
+
+```shell [Environment]
+CHAIN="00000000-0000-0000-0000-000000000000"
+```
+
+:::
+
+### `public_key` {#param-public-key}
+
+Public key of the peer. Consensus validator peers must use BLS-Normal keys.
+
+
+
+::: code-group
+
+```toml [Config File]
+public_key = "ea01309060D021340617E9554CCBC2CF3CC3DB922A9BA323ABDF7C271FCC6EF69BE7A8DEBCA7D9E96C0F0089ABA22CDAADE4A2"
+```
+
+```shell [Environment]
+PUBLIC_KEY="ea01309060D021340617E9554CCBC2CF3CC3DB922A9BA323ABDF7C271FCC6EF69BE7A8DEBCA7D9E96C0F0089ABA22CDAADE4A2"
+```
+
+:::
+
+### `private_key` {#param-private-key}
+
+Private key of the peer. It must match `public_key`; consensus validator peers
+must use BLS-Normal keys.
+
+
+
+::: code-group
+
+```toml [Config File]
+private_key = "8926201CA347641228C3B79AA43839DEDC85FA51C0E8B9B6A00F6B0D6B0423E902973F"
+```
+
+```shell [Environment]
+PRIVATE_KEY="8926201CA347641228C3B79AA43839DEDC85FA51C0E8B9B6A00F6B0D6B0423E902973F"
+```
+
+:::
+
+### `trusted_peers` {#param-trusted-peers}
+
+List of predefined trusted peers.
+
+Consensus validators must use BLS-Normal peer keys. For each validator, also
+provide a matching [`trusted_peers_pop`](#param-trusted-peers-pop) entry.
+
+
+
+
+Array of peer strings. Use `PUBLIC_KEY@ADDRESS` when the P2P address is known;
+bare `PUBLIC_KEY` is also accepted and lets the peer address be discovered from
+gossip.
+
+
+
+
+::: code-group
+
+```toml [Config File]
+trusted_peers = [
+ "ea01309060D021340617E9554CCBC2CF3CC3DB922A9BA323ABDF7C271FCC6EF69BE7A8DEBCA7D9E96C0F0089ABA22CDAADE4A2@127.0.0.1:1337",
+ "ea0130A7E9D016D723F72942FCF4B988FB599EA0E092F73C8B68E69F4E8B3FE542A3F7E48AD6CD15F3EB484E45F79399071F77@127.0.0.1:1338",
+]
+```
+
+```shell [Environment]
+# as JSON
+TRUSTED_PEERS='[
+ "ea01309060D021340617E9554CCBC2CF3CC3DB922A9BA323ABDF7C271FCC6EF69BE7A8DEBCA7D9E96C0F0089ABA22CDAADE4A2@127.0.0.1:1337",
+ "ea0130A7E9D016D723F72942FCF4B988FB599EA0E092F73C8B68E69F4E8B3FE542A3F7E48AD6CD15F3EB484E45F79399071F77@127.0.0.1:1338"
+]'
+```
+
+:::
+
+### `trusted_peers_pop` {#param-trusted-peers-pop}
+
+BLS proof-of-possession entries for validator trusted peers.
+
+
+
+
+Array of objects with `public_key` and `pop_hex` fields
+
+
+
+
+::: code-group
+
+```toml [Config File]
+trusted_peers_pop = [
+ { public_key = "ea01309060D021340617E9554CCBC2CF3CC3DB922A9BA323ABDF7C271FCC6EF69BE7A8DEBCA7D9E96C0F0089ABA22CDAADE4A2", pop_hex = "8515da750f81182aaba5c22fc9f03a01e81ed85e4495a2ca6b29a71c0c8549537e31e79cddf6ff285b9e22d0d9dc17ce0f46e7d0cf78b2ef9feab50c849a1ea8e1e4f07e966f6113faa8a999317545d9f111b8e08a7273913710b43a20b19c08" },
+ { public_key = "ea0130A7E9D016D723F72942FCF4B988FB599EA0E092F73C8B68E69F4E8B3FE542A3F7E48AD6CD15F3EB484E45F79399071F77", pop_hex = "a14eb180f0d78c55d2c034e91ccf691378e9c3ceed8e0b81d3e4b7c215c0dbb633bb9f1c5063911c31af4610016c164015f0f93db3c7df6a2ad0c39338fe7695b976a59fd13797615f229fbd77276a8bb2842e4e44fadcafdb7b37f4a143b913" },
+]
+```
+
+```shell [Environment]
+# as JSON
+TRUSTED_PEERS_POP='[
+ {"public_key":"ea01309060D021340617E9554CCBC2CF3CC3DB922A9BA323ABDF7C271FCC6EF69BE7A8DEBCA7D9E96C0F0089ABA22CDAADE4A2","pop_hex":"0x8515da750f81182aaba5c22fc9f03a01e81ed85e4495a2ca6b29a71c0c8549537e31e79cddf6ff285b9e22d0d9dc17ce0f46e7d0cf78b2ef9feab50c849a1ea8e1e4f07e966f6113faa8a999317545d9f111b8e08a7273913710b43a20b19c08"},
+ {"public_key":"ea0130A7E9D016D723F72942FCF4B988FB599EA0E092F73C8B68E69F4E8B3FE542A3F7E48AD6CD15F3EB484E45F79399071F77","pop_hex":"0xa14eb180f0d78c55d2c034e91ccf691378e9c3ceed8e0b81d3e4b7c215c0dbb633bb9f1c5063911c31af4610016c164015f0f93db3c7df6a2ad0c39338fe7695b976a59fd13797615f229fbd77276a8bb2842e4e44fadcafdb7b37f4a143b913"}
+]'
+```
+
+:::
+
+## Genesis {#genesis}
+
+### `genesis.file` {#param-genesis-file}
+
+File path to the signed genesis block payload generated by `kagami genesis sign`.
+Generated profiles commonly write this as a Norito `.nrt` file.
+
+
+
+::: code-group
+
+```toml [Config File]
+[genesis]
+file = "./genesis.signed.nrt"
+```
+
+```shell [Environment]
+GENESIS="./genesis.signed.nrt"
+```
+
+:::
+
+### `genesis.public_key` {#param-genesis-public-key}
+
+Public key of the genesis key pair.
+
+
+
+::: code-group
+
+```toml [Config File]
+[genesis]
+public_key = "ed01208BA62848CF767D72E7F7F4B9D2D7BA07FEE33760F79ABE5597A51520E292A0CB"
+```
+
+```shell [Environment]
+GENESIS_PUBLIC_KEY="ed01208BA62848CF767D72E7F7F4B9D2D7BA07FEE33760F79ABE5597A51520E292A0CB"
+```
+
+:::
+
+## Network {#network}
+
+### `network.address` {#param-network-address}
+
+Address used for p2p communication by consensus (sumeragi) and block synchronization (block_sync).
+
+
+
+::: code-group
+
+```toml [Config File]
+[network]
+address = "0.0.0.0:1337"
+```
+
+```shell [Environment]
+P2P_ADDRESS=0.0.0.0:1337
+```
+
+:::
+
+### `network.public_address` {#param-network-public-address}
+
+Peer-to-peer address (external, as seen by other peers).
+
+It is gossiped to connected peers so that they can share it with the rest of the network.
+
+
+
+::: code-group
+
+```toml [Config File]
+[network]
+public_address = "0.0.0.0:5000"
+```
+
+```shell [Environment]
+P2P_PUBLIC_ADDRESS=0.0.0.0:5000
+```
+
+:::
+
+### `network.block_gossip_size` {#param-network-block-gossip-size}
+
+The maximum number of blocks that can be sent in a single synchronization message.
+
+
+
+::: code-group
+
+```toml [Config File]
+[network]
+block_gossip_size = 256
+```
+
+:::
+
+### `network.block_gossip_period_ms` {#param-network-block-gossip-period-ms}
+
+The time interval between requests to peers for the most recent block.
+
+More frequent gossiping shortens the time to sync, but can overload the network.
+
+
+
+::: code-group
+
+```toml [Config File]
+[network]
+block_gossip_period_ms = 1_000
+```
+
+:::
+
+### `network.transaction_gossip_size` {#param-network-transaction-gossip-size}
+
+Max number of transactions in a gossip batch message.
+
+A smaller size leads to longer synchronization times, but is useful under high packet loss.
+
+
+
+::: code-group
+
+```toml [Config File]
+[network]
+transaction_gossip_size = 256
+```
+
+:::
+
+### `network.transaction_gossip_period_ms` {#param-network-transaction-gossip-period-ms}
+
+Period of gossiping pending transactions between peers.
+
+More frequent gossiping shortens the time to sync, but can overload the network.
+
+
+
+::: code-group
+
+```toml [Config File]
+[network]
+transaction_gossip_period_ms = 5_000
+```
+
+:::
+
+### `network.idle_timeout_ms` {#param-network-idle-timeout-ms}
+
+Duration after which a connection with an idle peer is terminated.
+
+
+
+::: code-group
+
+```toml [Config File]
+[network]
+idle_timeout_ms = 300_000
+```
+
+:::
+
+## Torii {#torii}
+
+### `torii.address` {#param-torii-address}
+
+Address the Torii server listens on and to which clients make their requests.
+
+
+
+::: code-group
+
+```toml [Config File]
+[torii]
+address = "0.0.0.0:8080"
+```
+
+```shell [Environment]
+API_ADDRESS=0.0.0.0:8080
+```
+
+:::
+
+### `torii.max_content_len` {#param-torii-max-content-len}
+
+The maximum number of bytes in a raw request body accepted by the
+[Torii endpoints](/reference/torii-endpoints.md).
+
+This limit is used to prevent DoS attacks.
+
+
+
+
+Number (of bytes)
+
+
+
+
+`64_000_000` (64 million bytes)
+
+
+
+
+::: code-group
+
+```toml [Config File]
+[torii]
+max_content_len = 64_000_000
+```
+
+:::
+
+### `torii.query_idle_time_ms` {#param-torii-query-idle-time-ms}
+
+The time a query can remain in the store if unaccessed.
+
+
+
+::: code-group
+
+```toml [Config File]
+[torii]
+query_idle_time_ms = 10_000
+```
+
+:::
+
+### `torii.query_store_capacity` {#param-torii-query-store-capacity}
+
+The upper limit of the number of live queries.
+
+
+
+::: code-group
+
+```toml [Config File]
+[torii]
+query_store_capacity = 128
+```
+
+:::
+
+### `torii.query_store_capacity_per_user` {#param-torii-query-store-capacity-per-user}
+
+The upper limit of the number of live queries for a single user.
+
+
+
+::: code-group
+
+```toml [Config File]
+[torii]
+query_store_capacity_per_user = 128
+```
+
+:::
+
+## Logger {#logger}
+
+### `logger.level` {#param-logger-level}
+
+_General_ logging verbosity (see [`logger.filter`](#param-logger-filter) for refined configuration).
+
+
+
+
+String, possible values:
+
+- `TRACE`: All events, including low-level operations.
+- `DEBUG`: Debug-level messages, useful for diagnostics.
+- `INFO`: General informational messages.
+- `WARN`: Warnings that indicate potential issues.
+- `ERROR`: Errors that disrupt normal function but allow continued operation.
+
+Choose the level that best suits your use case. Refer to
+[Stack Overflow](https://stackoverflow.com/questions/2031163/when-to-use-the-different-log-levels) for additional
+details on how to use different log levels.
+
+
+
+
+::: code-group
+
+```toml [Config File]
+[logger]
+level = "INFO"
+```
+
+```shell [Environment]
+LOG_LEVEL=INFO
+```
+
+:::
+
+::: tip Runtime update
+
+This parameter is subject to runtime configuration update through Torii operator endpoints.
+
+:::
+
+### `logger.filter` {#param-logger-filter}
+
+Refined log filters in addition to [`logger.level`](#param-logger-level). Allows customizing logging verbosity
+per _target_.
+
+
+
+
+String, consists of one or more comma-separated directives. Each directive may have a corresponding maximum verbosity
+_level_ which enables (i.e., _selects for_) spans and events that match. Iroha considers less exclusive levels (like
+`trace` or `info`) to be more verbose than more exclusive levels (like `error` or `warn`).
+
+At a high level, the syntax for directives consists of several parts:
+
+```
+target[span{field=value}]=level
+```
+
+For more details, see
+[`tracing-subscriber` documentation](https://docs.rs/tracing-subscriber/latest/tracing_subscriber/filter/struct.EnvFilter.html).
+
+
+
+
+
+::: code-group
+
+```toml [Config File]
+[logger]
+filter = "iroha_core=debug,iroha_p2p=debug"
+```
+
+```shell [Environment]
+LOG_FILTER=iroha_core=debug,iroha_p2p=debug
+```
+
+:::
+
+::: info Compatibility with [`logger.level`](#param-logger-level)
+
+`logger.filter` works _together_ with [`logger.level`](#param-logger-level); neither one overwrites the other.
+
+For example, if `logger.level` is set to `INFO` and `logger.filter` is set to `iroha_core=debug`, the resulting filter
+set will be `info,iroha_core=debug` (i.e. `info` for all modules, `debug` for `iroha_core`).
+
+:::
+
+::: tip Runtime update
+
+This parameter is subject to runtime configuration update through Torii operator endpoints.
+
+:::
+
+### `logger.format` {#param-logger-format}
+
+Logs format.
+
+
+
+
+String, possible values:
+
+- `full`: The default formatter. This emits human-readable, single-line logs for each event that occurs, with the
+ current span context displayed before the formatted representation of the event.
+- `compact`: A variant of the default formatter, optimized for short line lengths. Fields from the current span context
+ are appended to the fields of the formatted event, and span names are not shown; the verbosity level is abbreviated to
+ a single character.
+- `pretty`: Emits excessively pretty, multi-line logs, optimized for human readability. This is primarily intended to be
+ used in local development and debugging, or for command-line applications, where automated analysis and compact
+ storage of logs is less of a priority than readability and visual appeal.
+- `json`: Outputs newline-delimited JSON logs. This is intended for production use with systems where structured logs
+ are consumed as JSON by analysis and viewing tools. The JSON output is not optimized for human readability.
+
+For more details and sample outputs, see
+[`tracing-subscriber` documentation](https://docs.rs/tracing-subscriber/latest/tracing_subscriber/fmt/format/index.html).
+
+
+
+
+::: code-group
+
+```toml [Config File]
+[logger]
+format = "json"
+```
+
+```shell [Environment]
+LOG_FORMAT=json
+```
+
+:::
+
+## Kura {#kura}
+
+_Kura_ is the persistent storage engine of Iroha (Japanese for _warehouse_).
+
+### `kura.blocks_in_memory` {#param-kura-blocks-in-memory}
+
+At most this many of the most recent blocks are stored in memory.
+
+Older blocks are dropped from memory and loaded from disk when needed.
+
+
+
+::: code-group
+
+```toml [Config File]
+[kura]
+blocks_in_memory = 1024
+```
+
+```shell [Environment]
+KURA_BLOCKS_IN_MEMORY=1024
+```
+
+:::
+
+### `kura.init_mode` {#param-kura-init-mode}
+
+Kura initialisation mode.
+
+
+
+
+String, possible values:
+
+- `strict`: Strict validation of all blocks.
+- `fast`: Fast initialisation with only basic checks.
+
+
+
+
+::: code-group
+
+```toml [Config File]
+[kura]
+init_mode = "fast"
+```
+
+```shell [Environment]
+KURA_INIT_MODE=fast
+```
+
+:::
+
+### `kura.store_dir` {#param-kura-store-dir}
+
+Specifies the directory where the blocks are stored (a relative path is resolved against the config file location or CWD).
+
+See also: [`snapshot.store_dir`](#param-snapshot-store-dir).
+
+
+
+::: code-group
+
+```toml [Config File]
+[kura]
+store_dir = "/path/to/storage"
+```
+
+```shell [Environment]
+KURA_STORE_DIR=/path/to/storage
+```
+
+:::
+
+### `kura.debug.output_new_blocks` {#param-kura-debug-output-new-blocks}
+
+Flag to enable printing new blocks to the console.
+
+
+
+::: code-group
+
+```toml [Config File]
+[kura.debug]
+output_new_blocks = true
+```
+
+```shell [Environment]
+KURA_DEBUG_OUTPUT_NEW_BLOCKS=true
+```
+
+:::
+
+## Queue {#queue}
+
+### `queue.capacity` {#param-queue-capacity}
+
+The upper limit of the number of transactions waiting in the queue.
+
+
+
+::: code-group
+
+```toml [Config File]
+[queue]
+capacity = 1_048_576
+```
+
+:::
+
+### `queue.capacity_per_user` {#param-queue-capacity-per-user}
+
+The upper limit of the number of transactions waiting in the queue for a single user.
+
+Use this option to apply throttling.
+
+
+
+::: code-group
+
+```toml [Config File]
+[queue]
+capacity_per_user = 1_048_576
+```
+
+:::
+
+### `queue.transaction_time_to_live_ms` {#param-queue-transaction-time-to-live-ms}
+
+The transaction will be dropped after this time if it is still in the queue.
+
+
+
+::: code-group
+
+```toml [Config File]
+[queue]
+transaction_time_to_live_ms = 43_200_000
+```
+
+:::
+
+## Sumeragi {#sumeragi}
+
+### `sumeragi.debug.force_soft_fork` {#param-sumeragi-debug-force-soft-fork}
+
+Debug-only switch for exercising Sumeragi soft-fork handling paths. Leave this
+disabled outside controlled tests; changing it on a running production network
+can make peers disagree about consensus behavior.
+
+
+
+::: code-group
+
+```toml [Config File]
+[sumeragi.debug]
+force_soft_fork = true
+```
+
+:::
+
+## Snapshot {#snapshot}
+
+This module is responsible for reading and writing snapshots of the
+[World State View](/blockchain/world#world-state-view-wsv).
+
+Snapshots store a serialized checkpoint of the World State View so a peer can
+restart without replaying every block from Kura. Kura remains the durable block
+history and the source of truth for replay; snapshots are an acceleration path.
+On startup, Iroha checks snapshot metadata against the configured chain and the
+stored blocks before deciding whether to load a snapshot or fall back to replay.
+
+::: tip Wipe Snapshots
+
+If something went wrong with the snapshot system and you want to start from a blank page (in terms of
+snapshots), you can remove the directory specified by [`snapshot.store_dir`](#param-snapshot-store-dir).
+
+:::
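+
+A minimal wipe sketch, assuming the peer is stopped first and that
+`/path/to/snapshots` stands in for your configured `snapshot.store_dir`:
+
+```shell
+# Stop the peer before wiping, then remove the snapshot directory.
+# Kura block storage is untouched; the peer rebuilds state by replay.
+rm -rf /path/to/snapshots
+```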
+
+### `snapshot.mode` {#param-snapshot-mode}
+
+The mode the Snapshot system functions in.
+
+
+
+
+String, possible values:
+
+- `read_write`: Iroha creates snapshots with a period specified by
+ [`snapshot.create_every_ms`](#param-snapshot-create-every-ms). On startup, Iroha reads an existing snapshot (if any)
+ and verifies that it is up-to-date with the blocks storage.
+- `readonly`: Similar to `read_write` but Iroha doesn't create any snapshots.
+- `disabled`: Iroha neither creates new snapshots nor reads an existing one on startup.
+
+
+
+
+::: code-group
+
+```toml [Config File]
+[snapshot]
+mode = "readonly"
+```
+
+```shell [Environment]
+SNAPSHOT_MODE=readonly
+```
+
+:::
+
+### `snapshot.create_every_ms` {#param-snapshot-create-every-ms}
+
+Frequency of snapshots.
+
+
+
+::: code-group
+
+```toml [Config File]
+[snapshot]
+create_every_ms = 60_000
+```
+
+:::
+
+### `snapshot.store_dir` {#param-snapshot-store-dir}
+
+Directory where to store snapshots.
+
+See also: [`kura.store_dir`](#param-kura-store-dir)
+
+
+
+::: code-group
+
+```toml [Config File]
+[snapshot]
+store_dir = "/path/to/storage"
+```
+
+```shell [Environment]
+SNAPSHOT_STORE_DIR="/path/to/storage"
+```
+
+:::
+
+## Telemetry {#telemetry}
+
+Telemetry exports peer diagnostics to an external telemetry collector. Configure
+both `telemetry.name` and `telemetry.url` when a peer should report to a
+collector; the two must be set together.
+
+The whole `telemetry` section is optional; omit it when telemetry is not used.
+
+### `telemetry.name` {#param-telemetry-name}
+
+The node's name to be displayed in telemetry.
+
+
+
+::: code-group
+
+```toml [Config File]
+[telemetry]
+name = "iroha"
+```
+
+:::
+
+### `telemetry.url` {#param-telemetry-url}
+
+The WebSocket URL of the telemetry collector.
+
+
+
+::: code-group
+
+```toml [Config File]
+[telemetry]
+url = "ws://telemetry.example.com/submit"
+```
+
+:::
+
+### `telemetry.min_retry_period_ms` {#param-telemetry-min-retry-period-ms}
+
+The minimum period of time to wait before reconnecting.
+
+
+
+::: code-group
+
+```toml [Config File]
+[telemetry]
+min_retry_period_ms = 5_000
+```
+
+:::
+
+### `telemetry.max_retry_delay_exponent` {#param-telemetry-max-retry-delay-exponent}
+
+The maximum exponent of 2 used when exponentially increasing the delay between reconnection attempts. For example, with
+[`min_retry_period_ms`](#param-telemetry-min-retry-period-ms) of `5_000` and an exponent of `4`, the backoff delay is
+capped near `5_000 * 2^4 = 80_000` ms.
+
+
+
+::: code-group
+
+```toml [Config File]
+[telemetry]
+max_retry_delay_exponent = 4
+```
+
+:::
+
+### `dev_telemetry.out_file` {#param-dev-telemetry-out-file}
+
+The file path to write dev-telemetry to.
+
+
+
+::: code-group
+
+```toml [Config File]
+[dev_telemetry]
+out_file = "/path/to/file.json"
+```
+
+:::
diff --git a/src/reference/permissions.md b/src/reference/permissions.md
new file mode 100644
index 000000000..f234d8d9d
--- /dev/null
+++ b/src/reference/permissions.md
@@ -0,0 +1,57 @@
+# Permission Tokens
+
+This page lists the default permission-token types exposed by the current
+Iroha executor data model. For the conceptual guide to roles and permissions,
+see [Permissions](/blockchain/permissions.md).
+
+Permission checks are enforced by the active runtime validator. The token type
+names below describe the standard policy surface, but a network can customize
+runtime validation by upgrading the executor.
+
+## Default Tokens
+
+| Permission token | Category | Operation |
+| --- | --- | --- |
+| `CanManagePeers` | Peer | Register, unregister, or otherwise manage peers. |
+| `CanManageLaneRelayEmergency` | Peer | Manage emergency lane-relay controls. |
+| `CanRegisterDomain` | Domain | Register a domain. |
+| `CanUnregisterDomain` | Domain | Unregister a domain. |
+| `CanModifyDomainMetadata` | Domain | Modify domain metadata. |
+| `CanRegisterAccount` | Account | Register an account. |
+| `CanUnregisterAccount` | Account | Unregister an account. |
+| `CanModifyAccountMetadata` | Account | Modify account metadata. |
+| `CanUnregisterAssetDefinition` | Asset definition | Unregister an asset definition. |
+| `CanModifyAssetDefinitionMetadata` | Asset definition | Modify asset-definition metadata. |
+| `CanMintAssetWithDefinition` | Asset | Mint assets for a specific definition. |
+| `CanBurnAssetWithDefinition` | Asset | Burn assets for a specific definition. |
+| `CanTransferAssetWithDefinition` | Asset | Transfer assets for a specific definition. |
+| `CanMintAsset` | Asset | Mint a specific asset balance. |
+| `CanBurnAsset` | Asset | Burn a specific asset balance. |
+| `CanTransferAsset` | Asset | Transfer a specific asset balance. |
+| `CanRegisterNft` | NFT | Register an NFT. |
+| `CanUnregisterNft` | NFT | Unregister an NFT. |
+| `CanTransferNft` | NFT | Transfer an NFT. |
+| `CanModifyNftMetadata` | NFT | Modify NFT metadata. |
+| `CanSetParameters` | Parameters | Set on-chain configuration parameters. |
+| `CanManageRoles` | Roles | Register, unregister, grant, or revoke roles. |
+| `CanRegisterTrigger` | Trigger | Register a trigger. |
+| `CanExecuteTrigger` | Trigger | Execute a trigger. |
+| `CanUnregisterTrigger` | Trigger | Unregister a trigger. |
+| `CanModifyTrigger` | Trigger | Modify trigger configuration. |
+| `CanModifyTriggerMetadata` | Trigger | Modify trigger metadata. |
+| `CanUpgradeExecutor` | Executor | Upgrade the runtime executor. |
+| `CanRegisterSmartContractCode` | Smart contract | Register smart contract code. |
+| `CanUseFeeSponsor` | Nexus | Charge Nexus fees to a specified sponsor account. |
+
+## Ownership
+
+Owner-sensitive permission tokens must reference the canonical object IDs used
+by the current data model. For example, account permissions refer to canonical
+domainless account IDs, domain permissions refer to `domain.dataspace` domain
+IDs, and asset permissions refer to canonical asset definition or asset IDs.
+
+When a transaction fails with an authorization error, verify both sides:
+
+- the account signing the transaction is the expected canonical account
+- the permission token or role was granted for the exact object ID used in the
+ instruction
diff --git a/src/reference/queries.md b/src/reference/queries.md
new file mode 100644
index 000000000..5e2f59aeb
--- /dev/null
+++ b/src/reference/queries.md
@@ -0,0 +1,119 @@
+# Queries
+
+Iroha queries read ledger state without mutating it. The current data model
+exposes two broad query shapes:
+
+- **singular queries**, which return one object or one value
+- **iterable queries**, which return a stream or collection and can be combined
+ with filtering, sorting, projection, and pagination where the query type
+ supports it
+
+Use SDK typed builders or the CLI instead of constructing query envelopes by
+hand. The names below are the current query types exposed by
+`iroha_data_model::query`.
+
+## Runtime and Configuration
+
+| Query | Purpose |
+| --- | --- |
+| `FindAbiVersion` | Return the executor ABI version. |
+| `FindExecutorDataModel` | Return the executor data-model description. |
+| `FindParameters` | Return on-chain executor configuration parameters. |
+
+## Accounts and Permissions
+
+| Query | Purpose |
+| --- | --- |
+| `FindAccountById` | Find one account by canonical account ID. |
+| `FindAccountByAlias` | Resolve an account alias to an account. |
+| `FindAccounts` | List registered accounts. |
+| `FindAccountIds` | List registered account IDs. |
+| `FindAccountsWithAsset` | List accounts that hold a given asset definition. |
+| `FindAliasesByAccountId` | List aliases bound to an account. |
+| `FindAccountRecoveryPolicyByAlias` | Find the recovery policy for an alias. |
+| `FindAccountRecoveryRequestByAlias` | Find the recovery request for an alias. |
+| `FindRoles` | List roles. |
+| `FindRoleIds` | List role IDs. |
+| `FindRolesByAccountId` | List roles granted to an account. |
+| `FindPermissionsByAccountId` | List permissions granted to an account. |
+
+## Domains and Peers
+
+| Query | Purpose |
+| --- | --- |
+| `FindDomainById` | Find one domain by `DomainId`. |
+| `FindDomains` | List registered domains. |
+| `FindDomainsByAccountId` | List domains owned by an account. |
+| `FindDomainEndorsements` | List domain endorsement records. |
+| `FindDomainEndorsementPolicy` | Return the domain endorsement policy. |
+| `FindDomainCommittee` | Return the domain committee. |
+| `FindPeers` | List trusted peers known to the ledger. |
+
+## Assets, NFTs, and RWAs
+
+| Query | Purpose |
+| --- | --- |
+| `FindAssets` | List asset balances. |
+| `FindAssetsDefinitions` | List asset definitions. |
+| `FindAssetsByAccountId` | List assets held by an account. |
+| `FindAssetById` | Find one asset balance by `AssetId`. |
+| `FindAssetDefinitionById` | Find one asset definition by ID. |
+| `FindNfts` | List NFTs. |
+| `FindNftsByAccountId` | List NFTs owned by an account. |
+| `FindRwas` | List registered real-world-asset lots. |
+
+## Escrow and Proof Records
+
+| Query | Purpose |
+| --- | --- |
+| `FindAssetEscrows` | List asset escrow records. |
+| `FindAssetEscrowById` | Find one asset escrow by ID. |
+| `FindAssetEscrowsBySeller` | List asset escrows by seller. |
+| `FindAssetEscrowsByBuyer` | List asset escrows by buyer. |
+| `FindAssetEscrowsByStatus` | List asset escrows by status. |
+| `FindAnonymousAssetEscrows` | List anonymous asset escrow records. |
+| `FindAnonymousAssetEscrowById` | Find one anonymous asset escrow by ID. |
+| `FindAnonymousAssetEscrowsBySeller` | List anonymous escrows by seller. |
+| `FindAnonymousAssetEscrowsByBuyer` | List anonymous escrows by buyer. |
+| `FindAnonymousAssetEscrowsByStatus` | List anonymous escrows by status. |
+| `FindProofRecordById` | Find one proof record by ID. |
+| `FindProofRecords` | List proof records. |
+| `FindProofRecordsByBackend` | List proof records for a proof backend. |
+| `FindProofRecordsByStatus` | List proof records by status. |
+
+## Nexus, Data Availability, and Packages
+
+| Query | Purpose |
+| --- | --- |
+| `FindRepoAgreements` | List repository agreements stored on-chain. |
+| `FindTwitterBindingByHash` | Resolve a Twitter binding by hash. |
+| `FindDaPinIntentByTicket` | Find a data-availability pin intent by ticket. |
+| `FindDaPinIntentByManifest` | Find a pin intent by manifest reference. |
+| `FindDaPinIntentByAlias` | Find a pin intent by alias. |
+| `FindDaPinIntentByLaneEpochSequence` | Find a pin intent by lane, epoch, and sequence. |
+| `FindLaneRelayEnvelopeByRef` | Find a verified lane-relay envelope. |
+| `FindSorafsProviderOwner` | Resolve the owner of a SoraFS provider. |
+| `FindDataspaceNameOwnerById` | Resolve a dataspace-name owner. |
+| `FindMusubiReleaseByRef` | Find a Musubi release by reference. |
+| `FindMusubiPackageVersions` | List versions for a Musubi package. |
+| `FindMusubiPackageReleases` | List releases for a Musubi package. |
+| `FindMusubiShortAliasByName` | Resolve a Musubi short alias. |
+
+## Triggers, Contracts, Transactions, and Blocks
+
+| Query | Purpose |
+| --- | --- |
+| `FindActiveTriggerIds` | List active trigger IDs. |
+| `FindTriggers` | List triggers. |
+| `FindTriggerById` | Find one trigger by ID. |
+| `FindContractManifestByCodeHash` | Find a smart-contract manifest by code hash. |
+| `FindTransactions` | List committed transactions. |
+| `FindBlocks` | List blocks. |
+| `FindBlockHeaders` | List block headers. |
+
+## Filtering and Pagination
+
+Iterable queries can expose predicate and selector support. Use query-specific
+typed filters from the SDK so the filter input matches the query output type.
+For large result sets, use query parameters such as cursor and limit instead
+of fetching every row at once.
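+
+As a sketch, the read-only Torii JSON surface described in
+[Torii endpoints](/reference/torii-endpoints.md) exposes paged listings; this
+mirrors the public Taira example there (the limit value is arbitrary):
+
+```bash
+# Page through domain IDs instead of fetching every row at once.
+curl -fsS "https://taira.sora.org/v1/domains?limit=5" | jq -r '.items[].id'
+```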
diff --git a/src/reference/torii-api-console.md b/src/reference/torii-api-console.md
new file mode 100644
index 000000000..7ec6e6751
--- /dev/null
+++ b/src/reference/torii-api-console.md
@@ -0,0 +1,53 @@
+---
+aside: false
+pageClass: torii-api-console-page
+---
+
+# Torii API Console
+
+Use the live OpenAPI document from a running Torii endpoint to inspect routes,
+send test requests, copy curl commands, and generate client code.
+
+
+
+## Requirements
+
+- The Torii endpoint must expose `/openapi.json`.
+- Browser testing requires CORS to allow this docs origin.
+- The browser must be able to reach the endpoint directly.
+- Code generation requires Node.js, pnpm, and a Java runtime for OpenAPI
+ Generator.
+
+The console defaults to `https://taira.sora.org`. Local development usually
+works with `http://127.0.0.1:8080` when you run Torii on your machine.
+
+## Try Taira First
+
+Before generating a client, check that the public OpenAPI document is reachable
+from your machine:
+
+```bash
+curl -fsS https://taira.sora.org/openapi.json -o /tmp/taira-openapi.json
+jq '{title: .info.title, version: .info.version, paths: (.paths | length)}' \
+ /tmp/taira-openapi.json
+```
+
+Then paste `https://taira.sora.org/openapi.json` into the console and try a
+read-only route such as `GET /status`, `GET /v1/domains`, or
+`GET /v1/assets/definitions`. Save signed transaction and private-key flows for
+an SDK or CLI client that loads secrets from your runtime environment.
+
+## Generated Clients
+
+The generator command uses the same live OpenAPI document that the console
+loads. This is useful for JSON operator, explorer, app, and telemetry routes.
+
+For signed ledger transactions, signed queries, and Norito-native payloads,
+prefer the official Iroha SDKs. OpenAPI clients do not assemble signatures,
+manage account keys, or encode Norito transaction bodies for you.
+
+To inspect every generator supported by OpenAPI Generator, run:
+
+```bash
+pnpm dlx @openapitools/openapi-generator-cli list
+```
diff --git a/src/reference/torii-endpoints.md b/src/reference/torii-endpoints.md
new file mode 100644
index 000000000..44766ed0f
--- /dev/null
+++ b/src/reference/torii-endpoints.md
@@ -0,0 +1,415 @@
+# Torii Endpoints
+
+Torii is the HTTP, SSE, and WebSocket gateway for current Iroha deployments.
+In the Iroha 3 track it serves both ledger-facing APIs and a broad set of
+operator endpoints.
+
+The important protocol change from older docs is simple:
+
+- the canonical binary format is **Norito**
+- many endpoints also support JSON when you send `Accept: application/json`
+- metrics are exposed in Prometheus format
+
+For format details, content negotiation, layout flags, schema hashes, and
+Norito RPC guidance, see the [Norito reference](/reference/norito.md).
+
+## Common Endpoints
+
+| Endpoint | Format | Purpose |
+| --- | --- | --- |
+| `POST /transaction` | Norito | Submit a signed transaction |
+| `POST /query` | Norito | Submit a signed query |
+| `GET /events` | WebSocket | Subscribe to event streams |
+| `GET /block/stream` | WebSocket | Stream committed blocks |
+| `GET /peers` | JSON | Peer list exposed by Torii |
+| `GET /health` | JSON | Lightweight liveness endpoint |
+| `GET /api_version` | JSON | Default API version |
+| `GET /status` | JSON | High-level status summary for operators |
+| `GET /metrics` | Prometheus | Prometheus scrape endpoint |
+| `GET /schema` | JSON | Data-model schema snapshot served by the node |
+| `GET /openapi` or `GET /openapi.json` | JSON | OpenAPI document for the active Torii HTTP routes |
+| `GET /v1/parameters` | JSON | Node parameter snapshot |
+| `GET /v1/node/capabilities` | JSON | Node capability and data-model metadata |
+| `GET /v1/api/versions` | JSON | Supported Torii API versions |
+| `GET /v1/events/sse` | SSE | Event stream for long-lived clients |
+| `GET /v1/time/now` | JSON | Node wall-clock snapshot |
+| `GET /v1/time/status` | JSON | Time synchronization status |
+
+`/openapi` is the authoritative endpoint list for a running node. The exact
+surface depends on build features and runtime configuration, so generated
+clients should prefer the live OpenAPI document over a hand-copied route list.
+Use the [Torii API console](/reference/torii-api-console.md) to load that live
+document, test JSON routes, copy curl requests, and generate client code from
+the current schema.
+
+## Try Live Taira Routes
+
+The public Taira testnet exposes the same Torii JSON surface that application
+clients use for read-only exploration. These commands do not require keys:
+
+```bash
+TAIRA_ROOT=https://taira.sora.org
+
+curl -fsS "$TAIRA_ROOT/status" \
+ | jq '{blocks, txs_approved, txs_rejected, queue_size, peers}'
+
+curl -fsS "$TAIRA_ROOT/openapi.json" \
+ | jq -r '.paths | keys[]' \
+ | grep '^/v1/' \
+ | head -n 20
+
+curl -fsS "$TAIRA_ROOT/v1/node/capabilities" \
+ | jq '{abi_version, data_model_version, query: .query.aggregate.supported_resources}'
+```
+
+Try resource reads against the current world state:
+
+```bash
+curl -fsS "$TAIRA_ROOT/v1/domains?limit=5" \
+ | jq -r '.items[].id'
+
+curl -fsS "$TAIRA_ROOT/v1/assets/definitions?limit=5" \
+ | jq -r '.items[] | [.id, .name, .total_quantity] | @tsv'
+```
+
+If a public testnet route returns `502`, times out, or reports a saturated
+queue, treat it as an endpoint availability issue and retry later before
+debugging your client code.
+
+## Consensus and Runtime Endpoints
+
+| Endpoint | Format | Purpose |
+| --- | --- | --- |
+| `GET /v1/sumeragi/commit-certificates` | JSON | Recent commit certificate summaries |
+| `GET /v1/sumeragi/validator-sets` | JSON | Validator set history |
+| `GET /v1/sumeragi/validator-sets/{height}` | JSON | Validator set at a block height |
+| `GET /v1/sumeragi/status` | Norito or JSON | Detailed consensus status snapshot |
+| `GET /v1/sumeragi/status/sse` | SSE | Continuous consensus status stream |
+| `GET /v1/sumeragi/leader` | JSON | Current leader information |
+| `GET /v1/sumeragi/qc` | Norito or JSON | Latest quorum-certificate summary |
+| `GET /v1/sumeragi/checkpoints` | JSON | Consensus checkpoint summary |
+| `GET /v1/sumeragi/consensus-keys` | JSON | Active consensus keys |
+| `GET /v1/sumeragi/bls_keys` | JSON | Active BLS consensus keys |
+| `GET /v1/sumeragi/phases` | JSON | Latest per-phase latency sample |
+| `GET /v1/sumeragi/rbc` | JSON | RBC session and throughput metrics |
+| `GET /v1/sumeragi/rbc/sessions` | JSON | Active RBC session snapshot |
+| `GET /v1/sumeragi/pacemaker` | JSON | Pacemaker status |
+| `GET /v1/sumeragi/params` | JSON | Current on-chain Sumeragi parameters |
+| `GET /v1/sumeragi/collectors` | JSON | Deterministic collector plan snapshot |
+| `GET /v1/sumeragi/key-lifecycle` | JSON | Consensus key lifecycle status |
+| `GET /v1/sumeragi/telemetry` | JSON | Consensus telemetry snapshot |
+| `GET /v1/sumeragi/evidence` | JSON | Evidence records, optionally filtered by query string |
+| `GET /v1/sumeragi/evidence/count` | JSON | Evidence record count |
+| `POST /v1/sumeragi/evidence/submit` | JSON | Submit consensus evidence |
+| `GET /v1/sumeragi/commit_qc/{hash}` | Norito or JSON | Commit QC record for a block hash |
+| `GET /v1/runtime/abi/active` | JSON | Active runtime ABI descriptor |
+| `GET /v1/runtime/abi/hash` | JSON | Active runtime ABI hash |
+| `GET /v1/runtime/metrics` | JSON | Runtime metrics snapshot |
+| `GET /v1/runtime/upgrades` | JSON | Runtime upgrade list |
+| `POST /v1/runtime/upgrades/propose` | JSON | Propose a runtime upgrade |
+| `POST /v1/runtime/upgrades/activate/{id}` | JSON | Activate a proposed runtime upgrade |
+| `POST /v1/runtime/upgrades/cancel/{id}` | JSON | Cancel a proposed runtime upgrade |
+
+## App and SORA Route Families
+
+When Torii is built with the app-facing feature set, it exposes additional JSON
+families for explorers, SORA services, bridge flows, proofs, and storage. These
+families are not all enabled on every network profile.
+
+| Route family | Purpose |
+| --- | --- |
+| `/v1/accounts/*`, `/v1/domains/*`, `/v1/assets/*` | JSON reads, query helpers, onboarding helpers, and portfolio or holder views |
+| `/v1/nfts/*`, `/v1/rwas/*`, `/v1/confidential/*` | NFT, real-world asset, and confidential asset views |
+| `/v1/aliases/*`, `/v1/assets/aliases/*`, `/v1/sns/*`, `/v1/identifiers/*` | Name, alias, and identifier resolution |
+| `/v1/explorer/*` | Explorer-oriented account, asset, block, transaction, instruction, metric, and stream views |
+| `/v1/transactions/*`, `/v1/pipeline/*`, `/v1/iso20022/*` | Transaction history, pipeline recovery or status, and ISO 20022 helpers |
+| `/v1/contracts/*` | Contract code, deploy, bundle, call, view, event, activity, rollup, and state routes |
+| `/v1/multisig/*`, `/v1/controls/*` | Multisig proposals, approvals, and transfer-control helpers |
+| `/v1/bridge/*`, `/v1/ledger/*`, `/v1/proofs/*` | Finality, state proof, block proof, proof retention, and proof query routes |
+| `/v1/da/*` | Data-availability ingest, manifests, proof policies, commitments, and pin intents |
+| `/v1/zk/*` | ZK roots, proof verification, IVM proving, vote tallying, verification keys, proof records, and attachments |
+| `/v1/gov/*`, `/v1/ministry/*` | Governance proposals, ballots, council state, protected namespaces, agenda proposals, enactment, and finalization |
+| `/v1/nexus/*`, `/v1/sccp/*` | Nexus lane, dataspace, and cross-chain proof helpers |
+| `/v1/musubi/*` | Musubi package registry reads and instruction builders |
+| `/v1/subscriptions/*` | Subscription plans, subscription lifecycle, usage, and charging helpers |
+| `/v1/sorafs/*`, `/sorafs/*`, `/.well-known/sorafs/*` | SoraFS provider discovery, capacity proofs, pinning, storage fetches, and public content serving |
+| `/v1/soracloud/*`, `/v1/soradns/*`, `/soradns/*`, `/api/*` | SoraCloud service lifecycle, private compute/model flows, public discovery, and hosted app routing |
+| `/v1/connect/*`, `/v1/vpn/*` | Iroha Connect sessions, WebSocket transport, VPN sessions, profiles, and receipts |
+| `/v1/app-api/*`, `/v1/api/*`, `/v1/content/*` | App API bindings and bundle/CID-backed content routing |
+| `/v1/operator/*`, `/v1/mcp` | Operator authentication and native MCP JSON-RPC bridge |
+| `/v1/offline/*`, `/v1/repo/*`, `/v1/space-directory/*`, `/v1/ram-lfe/*` | Offline readiness, repository agreements, dataspace manifests, and RAM LFE helpers |
+| `/v1/kaigi/*`, `/v1/webhooks/*`, `/v1/notify/*`, `/v1/telemetry/*` | Collaboration, webhook, push notification, and live telemetry integrations |
+
+## ISO 20022 Bridge
+
+Torii exposes the ISO 20022 bridge under `/v1/iso20022/*` when the app-facing
+API and bridge runtime are enabled. The bridge is intentionally scoped: it is
+not a general-purpose ISO 20022 clearing gateway, but a supported subset for
+turning selected payment messages into signed Iroha transfers and for tracking
+their ledger status.
+
+### Torii Ingestion Endpoints
+
+| ISO 20022 message | Endpoint | Purpose |
+| --- | --- | --- |
+| `pacs.008.001.08` (`pacs.008`) | `POST /v1/iso20022/pacs008` | Submit an FI-to-FI customer credit transfer and build the matching Iroha asset transfer |
+| `pacs.009.001.10` (`pacs.009`) | `POST /v1/iso20022/pacs009` | Submit an FI-to-FI credit transfer used for PvP or securities-related cash funding |
+| `pacs.002`-style status | `GET /v1/iso20022/status/{msg_id}` | Read the bridge state for a submitted message, including the derived `pacs002_code`, transaction hash, rejection detail, and resolved ledger context |
+
+`pacs.008` submissions must provide the message ID, interbank settlement
+amount, currency, settlement date, debtor and creditor IBANs, and debtor and
+creditor BICs. When reference data is configured, the bridge also checks the
+BIC, IBAN, and ISO 4217 currency crosswalks before the generated transaction
+enters the pipeline.
+
+`pacs.009` submissions must provide the business message ID, message definition
+ID, creation time, interbank settlement amount, currency, settlement date,
+instructing and instructed agent BICs, and debtor and creditor IBANs. If the
+message includes `Purp`, the bridge currently accepts securities-purpose funding
+only: `Purp=SECU`.
+
+Both ingestion endpoints accept XML ISO envelopes or the flat field format used
+by the bridge tests. Optional `SplmtryData` fields can pin the target Iroha
+ledger, source and target account IDs or addresses, and asset definition ID.
+The response is `202 Accepted` with `message_id`, `transaction_hash`, `status`,
+`pacs002_code`, and the resolved ledger/account/asset context.
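+
+A minimal submit-then-poll sketch, assuming an XML envelope saved as
+`pacs008-envelope.xml` and a message ID of `MSG-0001`; the file name,
+message ID, and `Content-Type` handling here are illustrative, not part of
+the bridge contract:
+
+```bash
+TORII=https://torii.example.org
+# Submit the credit transfer; expect 202 Accepted with message_id,
+# transaction_hash, status, and pacs002_code in the body.
+curl -fsS -X POST "$TORII/v1/iso20022/pacs008" \
+  -H 'Content-Type: application/xml' \
+  --data-binary @pacs008-envelope.xml
+
+# Read the bridge state for the submitted message.
+curl -fsS "$TORII/v1/iso20022/status/MSG-0001"
+```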
+
+### Parser and Mapping Support
+
+The IVM ISO helper also validates and materializes additional message families
+used by bridge tests, settlement mapping, or downstream reconciliation. These
+messages do not have standalone Torii ingestion endpoints unless listed above.
+
+| Message family | Current support |
+| --- | --- |
+| `head.001` | Business application header validation for ISO envelopes, including `BizMsgIdr`, `MsgDefIdr`, creation time, and optional sender/receiver BIC fields |
+| `pacs.002` | Payment status report parsing and status-code vocabulary used by `GET /v1/iso20022/status/{msg_id}` |
+| `pacs.004` | Payment return parsing for return/unwind flows |
+| `pacs.007`, `pacs.028`, `pacs.029` | Payment reversal, status request, and resolution/status scaffolding for investigation flows |
+| `pain.001`, `pain.002` | Customer payment initiation and payment status report validation scaffolding |
+| `camt.052`, `camt.053`, `camt.054`, `camt.056` | Account report, statement, notification, and cancellation-request validation scaffolding |
+| `sese.023`, `sese.025` | Securities settlement instruction and confirmation mapping for DvP/PvP flows |
+| `colr.007` | Collateral substitution confirmation mapping |
+
+Settlement choreography may refer to related market messages such as
+`sese.024`, `sese.030`, `sese.031`, `colr.010`, `colr.011`, `colr.012`, or
+`camt.029`. Treat those as integration-level workflow references until a Torii
+endpoint or IVM schema is added for the specific message.
+
+## Kaigi Sessions
+
+Kaigi provides paid, real-time audio/video rooms on SORA Nexus. Use it when
+an application needs ledger-backed session creation, roster changes, relay
+manifests, encrypted signaling, and usage metering instead of keeping all
+conferencing state off-chain.
+
+The ledger-facing lifecycle is:
+
+- `CreateKaigi`: create a call under a domain and store its policy,
+ schedule, metadata, and optional relay manifest.
+- `JoinKaigi` and `LeaveKaigi`: update the call roster. In private mode,
+ participants use commitments, nullifiers, and roster proofs instead of
+ exposing participant account IDs directly.
+- `RecordKaigiUsage`: append metered duration and gas totals.
+- `EndKaigi`: close the session and record the final timestamp.
+
+Torii exposes relay telemetry under `/v1/kaigi/relays`,
+`/v1/kaigi/relays/{relay_id}`, `/v1/kaigi/relays/health`, and
+`/v1/kaigi/relays/events` when the app API and telemetry features are enabled.
+Session state is reflected through Kaigi domain events such as
+`KaigiRosterSummary`, `KaigiRelayManifestUpdated`,
+`KaigiRelayHealthUpdated`, and `KaigiUsageSummary`.
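+
+If you just need to confirm that relay telemetry is flowing, a hedged
+sketch (this assumes the events route delivers data incrementally, so `-N`
+disables curl's output buffering):
+
+```bash
+TORII=https://torii.example.org
+# Follow live relay events as they arrive; Ctrl-C to stop.
+curl -fsSN "$TORII/v1/kaigi/relays/events"
+```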
+
+### CLI Smoke Test
+
+Start with the `iroha kaigi` CLI when you want to verify that a Torii endpoint
+accepts Kaigi transactions before connecting a UI. The quickstart command
+creates a temporary room against the active Torii endpoint and prints a summary
+with the call identifier, join command, and SoraNet spool hint:
+
+```bash
+iroha kaigi quickstart --auto-join-host --summary-out kaigi-summary.json
+```
+
+For scripted flows, manage the room lifecycle explicitly:
+
+```bash
+iroha kaigi create \
+ --domain streaming \
+ --call-name daily \
+ --host \
+ --privacy-mode transparent \
+ --room-policy authenticated
+
+iroha kaigi join --domain streaming --call-name daily --participant
+iroha kaigi leave --domain streaming --call-name daily --participant
+
+iroha kaigi record-usage \
+ --domain streaming \
+ --call-name daily \
+ --duration-ms 120000 \
+ --billed-gas 1500
+
+iroha kaigi end --domain streaming --call-name daily
+```
+
+Use `--room-policy public` for rooms that relays may expose without viewer
+tickets, or `--room-policy authenticated` when viewer access must be
+authenticated. Use `--privacy-mode zk-roster-v1` only after the network has
+the Kaigi roster and usage verifying keys configured; otherwise joins, leaves,
+and private usage records fail during deterministic verification.
+
+### Testing With the JavaScript Demo
+
+Use the
+[soramitsu/iroha-demo-javascript](https://github.com/soramitsu/iroha-demo-javascript)
+desktop demo for an end-to-end wallet test. The demo is an Electron and Vue
+application that talks directly to Torii through the local `@iroha/iroha-js`
+binding and includes a `/kaigi` route for browser-native one-to-one media.
+
+Use the demo with
+[`@iroha/iroha-js`](https://github.com/hyperledger-iroha/iroha/tree/i23-features/javascript/iroha_js)
+from the Iroha `i23-features` branch:
+
+```bash
+git clone https://github.com/soramitsu/iroha-demo-javascript.git
+cd iroha-demo-javascript
+npm install
+npm run dev
+```
+
+Use Node.js 20 or newer and a Rust toolchain so the native `iroha_js_host`
+module can build. If you rebuild or update the SDK manually, refresh the
+native binding:
+
+```bash
+(cd node_modules/@iroha/iroha-js && npm run build:native)
+```
+
+For a controlled test, point the demo at a Kaigi-capable Torii endpoint:
+
+1. Start an Iroha node with the SORA/Kaigi app-facing APIs enabled, or use a
+ public endpoint that exposes the Kaigi surfaces you need.
+2. Check basic reachability with `/health`, then check the live route surface
+ with `/openapi` or `/openapi.json`. Some deployments also expose
+ `/v1/health`, but `/health` is the portable liveness check.
+3. For TAIRA, verify the relay telemetry routes before trying a live meeting:
+
+ ```bash
+ TAIRA=https://taira.sora.org
+ curl -fsS "$TAIRA/health"
+ curl -fsS "$TAIRA/v1/kaigi/relays"
+ curl -fsS "$TAIRA/v1/kaigi/relays/health"
+ ```
+
+ These checks prove that Torii and Kaigi relay telemetry are reachable. They
+ do not create a meeting; `CreateKaigi` and `JoinKaigi` still need funded
+ wallets and signed transaction submission.
+4. Open the demo, go to **Settings**, set the Torii URL, and let the app load
+ the chain ID and network prefix from the endpoint.
+5. Create or restore two local wallets in the demo. Use separate app windows,
+   profiles, or machines so the host and guest keep independent wallet state.
+
+To test the Kaigi UI:
+
+1. In the host window, open **Kaigi**, choose **Start meeting**, set a title,
+ and choose **Private invite** or **Transparent invite**.
+2. Select **Turn on camera and mic** so WebRTC has local media.
+3. Select **Create meeting link**. A live wallet submits `CreateKaigi`; the
+ app then shows an `iroha://kaigi/join?call=...&secret=...` invite and a
+ `#/kaigi?...` fallback route.
+4. Keep the host window open and share the invite with the guest.
+5. In the guest window, open the invite or paste it in **Join meeting**, turn
+ on local media, and select **Join meeting**. A live wallet fetches the
+ encrypted host offer from Torii and submits `JoinKaigi` with encrypted
+ answer metadata.
+6. The host should auto-apply the first answer by streaming or polling Kaigi
+ call signals. Both windows should show connected media and updated
+ connection details.
+7. End the session from the host, or use the CLI `iroha kaigi end` command for
+ the same call ID.
+
+Private Kaigi needs shielded XOR to pay the private entrypoint fee. If the
+demo reports that shielded XOR is missing, use the in-app self-shield
+prompt and retry the create or join action. If proof generation,
+private funding, or live signaling is unavailable, the demo can fall back to a
+transparent/manual flow. In that case, open **Advanced signaling**, copy the
+raw offer or answer packet, and paste it into the other window.
+
+For automated checks in the demo repo, run:
+
+```bash
+npm test -- tests/kaigiView.spec.ts tests/preloadKaigiBridge.spec.ts
+npm run e2e:ui
+npm run verify
+```
+
+The focused Vitest suites cover Kaigi meeting-link creation, compact invite
+loading, private create/join/end bridge calls, self-shield prompts, manual
+fallbacks, and answer polling. The UI smoke test includes the `/kaigi` route
+on desktop and mobile-sized viewports. Live media between two wallets still
+needs a manual two-window test because browser camera/microphone permissions
+and peer media streams are environment-specific.
+
+For sample integration code, see
+[Embed Kaigi in a JavaScript App](/guide/tutorials/kaigi.md).
+
+## Status and Metrics
+
+The status and metrics endpoints are the first things to wire into dashboards:
+
+- `/status` exposes top-level peer, block, queue, and consensus fields
+- `/metrics` exposes Prometheus counters, gauges, and histograms
+
+On Nexus-enabled nodes, status output also includes lane and data-space-aware
+sections. When `nexus.enabled = false`, those sections are omitted.
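+
+A minimal wiring check against both endpoints (field names in `/status`
+vary with the build and Nexus settings, so treat the `jq` output as
+exploratory):
+
+```bash
+TORII=https://torii.example.org
+# JSON status snapshot for dashboards.
+curl -fsS "$TORII/status" | jq .
+# Prometheus exposition format; show the first few series.
+curl -fsS "$TORII/metrics" | head -n 20
+```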
+
+## JSON vs. Norito
+
+Several operator endpoints return Norito by default. When the endpoint supports
+JSON, send:
+
+```http
+Accept: application/json
+```
+
+This is especially useful for:
+
+- `/v1/sumeragi/status`
+- `/v1/sumeragi/qc`
+- `/v1/sumeragi/commit_qc/{hash}`
+
+When an endpoint accepts or returns typed Norito directly, use
+`application/x-norito` as the content type or preferred `Accept` value. See
+[Norito](/reference/norito.md#torii-and-norito-rpc) for the transport details.
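+
+As a sketch, the same endpoint can be fetched in both representations
+(assuming the deployment enables both; `status.norito` is an arbitrary
+output name):
+
+```bash
+TORII=https://torii.example.org
+# Human-friendly JSON view.
+curl -fsS -H 'Accept: application/json' "$TORII/v1/sumeragi/status"
+# Typed Norito payload, saved for tooling that speaks the format.
+curl -fsS -H 'Accept: application/x-norito' "$TORII/v1/sumeragi/status" \
+  -o status.norito
+```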
+
+## Telemetry Profiles
+
+Endpoint visibility depends on telemetry settings. The upstream docs describe
+five profile levels:
+
+| Profile | `/status` | `/metrics` | Developer routes |
+| --- | --- | --- | --- |
+| `disabled` | no | no | no |
+| `operator` | yes | no | no |
+| `extended` | yes | yes | no |
+| `developer` | yes | no | yes |
+| `full` | yes | yes | yes |
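+
+You can probe a node's effective profile from the outside by checking
+which routes respond; hidden routes typically return an error status, but
+the exact code is deployment-specific, so treat this as a heuristic:
+
+```bash
+TORII=https://torii.example.org
+# Per the table: /status is visible on every profile except `disabled`,
+# and /metrics only on `extended` and `full`.
+curl -s -o /dev/null -w '/status  -> %{http_code}\n' "$TORII/status"
+curl -s -o /dev/null -w '/metrics -> %{http_code}\n' "$TORII/metrics"
+```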
+
+## CLI Shortcuts
+
+The `iroha` CLI already wraps many of these endpoints:
+
+```bash
+iroha --config ./localnet/client.toml --output-format text ops sumeragi status
+iroha --config ./localnet/client.toml --output-format text ops sumeragi phases
+iroha --config ./localnet/client.toml ops sumeragi params
+iroha --config ./localnet/client.toml ops sumeragi collectors
+```
+
+## Upstream References
+
+- [README.md API and Observability](https://github.com/hyperledger-iroha/iroha/blob/i23-features/README.md)
+- [docs/source/telemetry.md](https://github.com/hyperledger-iroha/iroha/blob/i23-features/docs/source/telemetry.md)
+- [ISO 20022 bridge implementation](https://github.com/hyperledger-iroha/iroha/blob/i23-features/crates/iroha_torii/src/iso20022_bridge.rs)
+- [Settlement ISO mapping](https://github.com/hyperledger-iroha/iroha/blob/i23-features/docs/portal/docs/finance/settlement-iso-mapping.md)
diff --git a/tsconfig.json b/tsconfig.json
new file mode 100644
index 000000000..9291b9b4e
--- /dev/null
+++ b/tsconfig.json
@@ -0,0 +1,105 @@
+{
+ "compilerOptions": {
+ /* Visit https://aka.ms/tsconfig to read more about this file */
+
+ /* Projects */
+ // "incremental": true, /* Save .tsbuildinfo files to allow for incremental compilation of projects. */
+ // "composite": true, /* Enable constraints that allow a TypeScript project to be used with project references. */
+ // "tsBuildInfoFile": "./.tsbuildinfo", /* Specify the path to .tsbuildinfo incremental compilation file. */
+ // "disableSourceOfProjectReferenceRedirect": true, /* Disable preferring source files instead of declaration files when referencing composite projects. */
+ // "disableSolutionSearching": true, /* Opt a project out of multi-project reference checking when editing. */
+ // "disableReferencedProjectLoad": true, /* Reduce the number of projects loaded automatically by TypeScript. */
+
+ /* Language and Environment */
+ "target": "esnext" /* Set the JavaScript language version for emitted JavaScript and include compatible library declarations. */,
+ // "lib": [], /* Specify a set of bundled library declaration files that describe the target runtime environment. */
+ // "jsx": "preserve", /* Specify what JSX code is generated. */
+ // "experimentalDecorators": true, /* Enable experimental support for TC39 stage 2 draft decorators. */
+ // "emitDecoratorMetadata": true, /* Emit design-type metadata for decorated declarations in source files. */
+ // "jsxFactory": "", /* Specify the JSX factory function used when targeting React JSX emit, e.g. 'React.createElement' or 'h'. */
+ // "jsxFragmentFactory": "", /* Specify the JSX Fragment reference used for fragments when targeting React JSX emit e.g. 'React.Fragment' or 'Fragment'. */
+ // "jsxImportSource": "", /* Specify module specifier used to import the JSX factory functions when using 'jsx: react-jsx*'. */
+ // "reactNamespace": "", /* Specify the object invoked for 'createElement'. This only applies when targeting 'react' JSX emit. */
+ // "noLib": true, /* Disable including any library files, including the default lib.d.ts. */
+ // "useDefineForClassFields": true, /* Emit ECMAScript-standard-compliant class fields. */
+ // "moduleDetection": "auto", /* Control what method is used to detect module-format JS files. */
+
+ /* Modules */
+ "module": "esnext" /* Specify what module code is generated. */,
+ // "rootDir": "./", /* Specify the root folder within your source files. */
+ "moduleResolution": "node" /* Specify how TypeScript looks up a file from a given module specifier. */,
+ // "baseUrl": "./", /* Specify the base directory to resolve non-relative module names. */
+ // "paths": {}, /* Specify a set of entries that re-map imports to additional lookup locations. */
+ // "rootDirs": [], /* Allow multiple folders to be treated as one when resolving modules. */
+ // "typeRoots": [], /* Specify multiple folders that act like './node_modules/@types'. */
+ // "types": [], /* Specify type package names to be included without being referenced in a source file. */
+ // "allowUmdGlobalAccess": true, /* Allow accessing UMD globals from modules. */
+ // "moduleSuffixes": [], /* List of file name suffixes to search when resolving a module. */
+ "resolveJsonModule": true /* Enable importing .json files. */,
+ // "noResolve": true, /* Disallow 'import's, 'require's or ''s from expanding the number of files TypeScript should add to a project. */
+
+ /* JavaScript Support */
+ // "allowJs": true, /* Allow JavaScript files to be a part of your program. Use the 'checkJS' option to get errors from these files. */
+ // "checkJs": true, /* Enable error reporting in type-checked JavaScript files. */
+ // "maxNodeModuleJsDepth": 1, /* Specify the maximum folder depth used for checking JavaScript files from 'node_modules'. Only applicable with 'allowJs'. */
+
+ /* Emit */
+ // "declaration": true, /* Generate .d.ts files from TypeScript and JavaScript files in your project. */
+ // "declarationMap": true, /* Create sourcemaps for d.ts files. */
+ // "emitDeclarationOnly": true, /* Only output d.ts files and not JavaScript files. */
+ // "sourceMap": true, /* Create source map files for emitted JavaScript files. */
+ // "outFile": "./", /* Specify a file that bundles all outputs into one JavaScript file. If 'declaration' is true, also designates a file that bundles all .d.ts output. */
+ // "outDir": "./", /* Specify an output folder for all emitted files. */
+ // "removeComments": true, /* Disable emitting comments. */
+ // "noEmit": true, /* Disable emitting files from a compilation. */
+ // "importHelpers": true, /* Allow importing helper functions from tslib once per project, instead of including them per-file. */
+ // "importsNotUsedAsValues": "remove", /* Specify emit/checking behavior for imports that are only used for types. */
+ // "downlevelIteration": true, /* Emit more compliant, but verbose and less performant JavaScript for iteration. */
+ // "sourceRoot": "", /* Specify the root path for debuggers to find the reference source code. */
+ // "mapRoot": "", /* Specify the location where debugger should locate map files instead of generated locations. */
+ // "inlineSourceMap": true, /* Include sourcemap files inside the emitted JavaScript. */
+ // "inlineSources": true, /* Include source code in the sourcemaps inside the emitted JavaScript. */
+ // "emitBOM": true, /* Emit a UTF-8 Byte Order Mark (BOM) in the beginning of output files. */
+ // "newLine": "crlf", /* Set the newline character for emitting files. */
+ // "stripInternal": true, /* Disable emitting declarations that have '@internal' in their JSDoc comments. */
+ // "noEmitHelpers": true, /* Disable generating custom helper functions like '__extends' in compiled output. */
+ // "noEmitOnError": true, /* Disable emitting files if any type checking errors are reported. */
+ // "preserveConstEnums": true, /* Disable erasing 'const enum' declarations in generated code. */
+ // "declarationDir": "./", /* Specify the output directory for generated declaration files. */
+ // "preserveValueImports": true, /* Preserve unused imported values in the JavaScript output that would otherwise be removed. */
+
+ /* Interop Constraints */
+ // "isolatedModules": true /* Ensure that each file can be safely transpiled without relying on other imports. */,
+ "allowSyntheticDefaultImports": true /* Allow 'import x from y' when a module doesn't have a default export. */,
+ "esModuleInterop": true /* Emit additional JavaScript to ease support for importing CommonJS modules. This enables 'allowSyntheticDefaultImports' for type compatibility. */,
+ // "preserveSymlinks": true, /* Disable resolving symlinks to their realpath. This correlates to the same flag in node. */
+ "forceConsistentCasingInFileNames": true /* Ensure that casing is correct in imports. */,
+
+ /* Type Checking */
+ "strict": true /* Enable all strict type-checking options. */,
+ "noImplicitAny": true /* Enable error reporting for expressions and declarations with an implied 'any' type. */,
+ "strictNullChecks": true /* When type checking, take into account 'null' and 'undefined'. */,
+ "strictFunctionTypes": true /* When assigning functions, check to ensure parameters and the return values are subtype-compatible. */,
+ "strictBindCallApply": true /* Check that the arguments for 'bind', 'call', and 'apply' methods match the original function. */,
+ // "strictPropertyInitialization": true, /* Check for class properties that are declared but not set in the constructor. */
+ // "noImplicitThis": true, /* Enable error reporting when 'this' is given the type 'any'. */
+ "useUnknownInCatchVariables": true /* Default catch clause variables as 'unknown' instead of 'any'. */,
+ // "alwaysStrict": true, /* Ensure 'use strict' is always emitted. */
+ // "noUnusedLocals": true, /* Enable error reporting when local variables aren't read. */
+ // "noUnusedParameters": true, /* Raise an error when a function parameter isn't read. */
+ // "exactOptionalPropertyTypes": true, /* Interpret optional property types as written, rather than adding 'undefined'. */
+ // "noImplicitReturns": true, /* Enable error reporting for codepaths that do not explicitly return in a function. */
+ // "noFallthroughCasesInSwitch": true, /* Enable error reporting for fallthrough cases in switch statements. */
+ // "noUncheckedIndexedAccess": true, /* Add 'undefined' to a type when accessed using an index. */
+ // "noImplicitOverride": true, /* Ensure overriding members in derived classes are marked with an override modifier. */
+ // "noPropertyAccessFromIndexSignature": true, /* Enforces using indexed accessors for keys declared using an indexed type. */
+ // "allowUnusedLabels": true, /* Disable error reporting for unused labels. */
+ // "allowUnreachableCode": true, /* Disable error reporting for unreachable code. */
+
+ /* Completeness */
+ // "skipDefaultLibCheck": true, /* Skip type checking .d.ts files that are included with TypeScript. */
+ "skipLibCheck": true /* Skip type checking all .d.ts files. */
+ },
+ "include": ["."],
+ "exclude": ["src/snippets/**/*", ".vitepress/cache", "dist", "node_modules"]
+}
diff --git a/uno.config.ts b/uno.config.ts
new file mode 100644
index 000000000..532d7f260
--- /dev/null
+++ b/uno.config.ts
@@ -0,0 +1,29 @@
+import { defineConfig } from 'unocss'
+import transformerDirectives from '@unocss/transformer-directives'
+
+export default defineConfig({
+ theme: {
+ colors: {
+ vp: {
+ brand: {
+ 1: 'var(--vp-c-brand-1)',
+ 2: 'var(--vp-c-brand-2)',
+ 3: 'var(--vp-c-brand-3)',
+ soft: 'var(--vp-c-brand-soft)',
+ },
+ bg: {
+ DEFAULT: 'var(--vp-c-bg)',
+ alt: 'var(--vp-c-bg-alt)',
+ elv: 'var(--vp-c-bg-elv)',
+ soft: 'var(--vp-c-bg-soft)',
+ },
+ },
+ },
+
+ boxShadow: {
+ 'elevation-btn':
+ '0 2px 2px 0 rgba(0, 0, 0, 0.14), 0 3px 1px -2px rgba(0, 0, 0, 0.2), 0 1px 5px 0 rgba(0, 0, 0, 0.12)',
+ },
+ },
+ transformers: [transformerDirectives()],
+})
diff --git a/vitest.config.ts b/vitest.config.ts
new file mode 100644
index 000000000..83ed66eea
--- /dev/null
+++ b/vitest.config.ts
@@ -0,0 +1,7 @@
+import { defineConfig } from 'vitest/config'
+
+export default defineConfig({
+ test: {
+ include: ['.vitepress/**/*.spec.ts', 'etc/**/*.spec.ts'],
+ },
+})