Penumbra is a fully private proof-of-stake network and decentralized exchange for the Cosmos ecosystem.

Penumbra integrates privacy with proof-of-stake through a novel private delegation mechanism that provides staking derivatives, tax-efficient staking, and on-chain governance with private voting. Penumbra connects to the Cosmos ecosystem via IBC, acting as an ecosystem-wide shielded pool and allowing private transactions in any IBC-compatible asset. Users can also swap these assets using ZSwap, a private decentralized exchange supporting sealed-bid batch auctions and Uniswap-v3-style concentrated liquidity. Sealed-bid batch auctions prevent frontrunning, provide better execution, and reveal only the net flow across a pair of assets in each block. Liquidity positions are created anonymously, allowing traders to approximate their desired trading function without revealing their individual beliefs about prices.

This website renders the work-in-progress protocol specification for Penumbra.


## Private Transactions

Penumbra records all value in a single multi-asset shielded pool based on the Zcash Sapling design, but allows private transactions in any kind of IBC asset. Inbound IBC transfers shield value as it moves into the zone, while outbound IBC transfers unshield value.

Unlike Zcash, Penumbra has no notion of transparent transactions or a transparent value pool; instead, inbound IBC transfers are analogous to t2z Zcash transactions, outbound IBC transfers are analogous to z2t Zcash transactions, and the entire Cosmos ecosystem functions analogously to Zcash’s transparent pool.

Unlike the Cosmos Hub or many other chains built on the Cosmos SDK, Penumbra has no notion of accounts. Only validators have any kind of long-term identity, and this identity is only used (in the context of transactions) for spending the validator’s commission.

## Private Staking

In a proof-of-stake system like the Cosmos Hub, stakeholders delegate staking tokens by bonding them to validators. Validators participate in Tendermint consensus with voting power determined by their delegation size, and delegators receive staking rewards in exchange for taking on the risk of being penalized for validator misbehavior (slashing).

Integrating privacy and proof of stake poses significant challenges. If delegations are public, holders of the staking token must choose between privacy on the one hand and staking rewards and participation in consensus on the other hand. Because the majority of the stake will be bonded to validators, privacy becomes an uncommon, opt-in case. But if delegations are private, issuing staking rewards becomes very difficult, because the chain no longer knows the amount and duration of each address’ delegations.

Penumbra sidesteps this problem using a new mechanism that eliminates staking rewards entirely, treating unbonded and bonded stake as separate assets, with an epoch-varying exchange rate that prices in what would be a staking reward in other systems. This mechanism ensures that all delegations to a particular validator are fungible, and can be represented by a single delegation token representing a share of that validator’s delegation pool, in effect a first-class staking derivative. Although delegation fungibility is key to enabling privacy, as a side effect, delegators do not realize any income while their stake is bonded, only a capital gain (or loss) on unbonding.

The total amount of stake bonded to each validator is part of the public chain state and determines consensus weight, but the delegation tokens themselves are each just another token to be recorded in a multi-asset shielded pool. This provides accountability for validators and privacy and flexibility for delegators, who can trade and transact with their delegation tokens just like they can with any other token.

It also provides an alternate perspective on the debate between fixed-supply and inflation-based rewards. Choosing the unbonded token as the numéraire, delegators are rewarded by inflation for taking on the risk of validator misbehavior, and the token supply grows over time. Choosing (a basket of) delegation tokens as the numéraire, non-delegators are punished by depreciation for not taking on any risk of misbehavior, and the token supply is fixed.

## Private Governance

Like the Cosmos Hub, Penumbra supports on-chain governance with delegated voting. Unlike the Cosmos Hub, Penumbra’s governance mechanism supports secret ballots. Penumbra users can anonymously propose votes by escrowing a deposit of bonded stake. Stakeholders vote by proving ownership of their bonded stake prior to the beginning of the voting period and encrypting their votes to a threshold key controlled by the validator set. Validators sum encrypted votes and decrypt only the per-epoch totals.

## Private DEX

Penumbra provides private, sealed-bid batch swaps using ZSwap. ZSwap allows users to privately swap between any pair of assets. Individual swaps do not reveal trade amounts. Instead, all swaps in each block are executed in a single batch. Only the total amount in each batch is revealed, and only after the batch has been finalized. This prevents front-running and provides better execution, but also provides long-term privacy for individual swaps. Users can also provide liquidity by anonymously creating Uniswap-v3-style concentrated liquidity positions. These positions reveal the amount of liquidity and the bounds in which it is concentrated, but are not otherwise linked to any identity, so that (with some care) users can privately approximate arbitrary trading functions without revealing their specific views about prices.
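As a rough illustration of the batch-swap flow, the sketch below sums a block's swaps, executes only the totals against a constant-product pool, and distributes the output pro rata. The constant-product curve, the function shape, and the explicit pro-rata sweep are simplifying assumptions: actual execution routes against concentrated liquidity positions, and individual inputs stay encrypted until the batch totals are decrypted.

```python
# Illustrative batch-swap clearing: all swaps in a block are aggregated,
# the totals are executed against a constant-product pool (an assumption --
# ZSwap routes against concentrated liquidity), and each swapper sweeps a
# pro-rata share of the batch output.
def batch_swap(reserve_1, reserve_2, swaps_1_to_2, swaps_2_to_1):
    total_1 = sum(swaps_1_to_2)   # flow 1 -> 2: the only revealed amount
    total_2 = sum(swaps_2_to_1)   # flow 2 -> 1: the only revealed amount
    k = reserve_1 * reserve_2
    # Execute the 1 -> 2 total.
    new_r1 = reserve_1 + total_1
    out_2 = reserve_2 - k / new_r1
    # Execute the 2 -> 1 total against the updated reserves.
    new_r2 = (reserve_2 - out_2) + total_2
    out_1 = new_r1 - k / new_r2
    new_r1 = new_r1 - out_1
    # Each swapper's sweep claims output proportional to its (hidden) input.
    shares_2 = [out_2 * a / total_1 for a in swaps_1_to_2] if total_1 else []
    shares_1 = [out_1 * a / total_2 for a in swaps_2_to_1] if total_2 else []
    return (new_r1, new_r2), shares_1, shares_2
```

With reserves of 1000/1000 and two swappers selling 60 and 40 units of asset 1, the pool pays out roughly 90.9 units of asset 2, split between the two swappers in a 60/40 ratio; an observer sees only the total of 100 units sold.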

# Concepts and Mechanisms

This section provides an overview of the concepts involved in Penumbra’s mechanism design.

# Batching Flows

Penumbra’s ledger records value as it moves between different economic roles – for instance, movement between unbonded stake and delegation tokens, movement between different assets as they are traded, etc. This creates a tension between the need to reveal the total amount of value in each role as part of the public chain state, and the desire to shield value amounts in individual transactions.

To address this tension, Penumbra provides a mechanism to aggregate value flows across a batch of transactions, revealing only the total amount and not the value contributed by each individual transaction. This mechanism is built using an integer-valued homomorphic encryption scheme that supports threshold decryption, so that the network’s validators can jointly control a decryption key.

Transactions that contribute value to a batch contain an encryption of the amount. To flush the batch, the validators sum the ciphertexts from all relevant transactions to compute an encryption of the batch total, then jointly decrypt it and commit it to the chain.
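The sum-then-decrypt flow can be sketched with any additively homomorphic scheme. The toy Paillier instance below is a stand-in assumption, not the scheme Penumbra uses: it uses tiny demo primes and a single decryption key, whereas the protocol requires a threshold key so no single party can decrypt.

```python
# Toy additively homomorphic batching: Paillier encryption with demo-sized
# primes (an illustrative assumption; the real scheme is threshold-decryptable
# and no single party holds the decryption key).
import math
import random

P, Q = 1789, 1861
N = P * Q                        # public modulus
N2 = N * N
LAM = (P - 1) * (Q - 1)          # private decryption exponent
MU = pow(LAM, -1, N)             # precomputed inverse for decryption

def encrypt(amount):
    """Encrypt a value-flow contribution; ciphertexts add homomorphically."""
    r = random.randrange(2, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(2, N)
    return (pow(N + 1, amount, N2) * pow(r, N, N2)) % N2

def decrypt(ciphertext):
    x = pow(ciphertext, LAM, N2)
    return (((x - 1) // N) * MU) % N

def batch_total(ciphertexts):
    """Multiplying ciphertexts sums the plaintexts: the batch flush step."""
    total = 1
    for c in ciphertexts:
        total = (total * c) % N2
    return total
```

Validators never decrypt the individual contributions; they multiply the ciphertexts from all relevant transactions and jointly decrypt only the product, revealing the batch total alone.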

This mechanism doesn’t require any coordination between the users whose transactions are batched, but it does require that the validators create and publish a threshold decryption key. To allow batching across block boundaries, Penumbra organizes blocks into epochs, and applies changes to the validator set only at epoch boundaries. Decryption keys live for the duration of the epoch, allowing value flows to be batched over any time interval from 1 block up to the length of an epoch. We propose epoch boundaries on the order of 1-3 days.

At the beginning of each epoch, the validator set performs distributed key generation to produce a decryption key jointly controlled by the validators (on an approximately stake-weighted basis) and includes the encryption key in the first block of the epoch.

Because this key is only available after the first block of each epoch, some transactions cannot occur in the first block itself. Assuming a block interval similar to the Cosmos Hub, this implies an ~8-second processing delay once per day, a reasonable tradeoff against the complexity of phased setup procedures.

Value transferred on Penumbra is sent to shielded payment addresses; these addresses are derived from spending keys through a sequence of intermediate keys that represent different levels of attenuated capability:

```mermaid
flowchart BT
    subgraph DTK[Detection Key]
    end;
    subgraph IVK[Incoming Viewing Key]
    end;
    subgraph OVK[Outgoing Viewing Key]
    end;
    subgraph FVK[Full Viewing Key]
    end;
    subgraph SK[Spending Key]
    end;

    index(Diversifier Index);
    div{ };

    SK --> FVK;
    FVK --> OVK;
    FVK --> IVK;

    index --> div;
    IVK ----> div;
```

From bottom to top:

• the spending key is the root capability, representing spending authority;
• the full viewing key represents the capability to view all transactions related to the spending authority;
• the outgoing viewing key represents the capability to view only outgoing transactions, and is used to recover information about previously sent transactions;
• the incoming viewing key represents the capability to view only incoming transactions, and is used to scan the block chain for incoming transactions.

Penumbra allows the same spending authority to present multiple, publicly unlinkable addresses, keyed by an 11-byte diversifier index. Each choice of diversifier index gives a distinct shielded payment address. Because these addresses share a common incoming viewing key, the cost of scanning the blockchain does not increase with the number of addresses in use.

Finally, Penumbra also allows outsourcing probabilistic transaction detection to third parties using fuzzy message detection. Each address has a detection key; a third party can use this key to detect transactions that might be relevant to that key. Like a Bloom filter, this detection has false positives but no false negatives, so detection will find all relevant transactions, as well as some amount of unrelated cover traffic. Unlike incoming viewing keys, detection keys are not shared between diversified addresses, allowing fine-grained control of delegation.

This diagram shows only the user-visible parts of the key hierarchy. Internally, each of these keys has different components, described in detail in the Addresses and Keys section of the Cryptographic Protocol chapter.

# Assets and Amounts

Penumbra’s shielded pool can record arbitrary assets. To be precise, we define:

• an amount to be an untyped quantity of some asset;
• an asset type to be an ADR001-style denomination trace uniquely identifying a cross-chain asset and its provenance, such as:
• denom (native chain A asset)
• transfer/channelToA/denom (chain B representation of chain A asset)
• transfer/channelToB/transfer/channelToA/denom (chain C representation of chain B representation of chain A asset)
• an asset ID to be a fixed-size hash of an asset type;
• a value to be a typed quantity, i.e., an amount and an asset id.

Penumbra deviates slightly from ADR001 in the definition of the asset ID. While ADR001 defines the IBC asset ID as the SHA-256 hash of the denomination trace, Penumbra hashes to a field element, so that asset IDs can be more easily used inside of a zk-SNARK circuit.
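A minimal sketch of this hash-to-field step is below. The modulus is the BLS12-377 scalar field order, and reducing a SHA-256 digest modulo that prime is a simplifying assumption: the text only requires that asset types hash to a field element usable inside a zk-SNARK circuit, not this particular construction.

```python
# Illustrative hash-to-field for asset IDs: hash the ADR001 denomination
# trace and reduce modulo a prime field order. Both the use of SHA-256 and
# the specific modulus (BLS12-377 scalar field) are assumptions for
# illustration, not the protocol's exact procedure.
import hashlib

# BLS12-377 scalar field modulus (assumed as an illustrative proof-system field).
R = 0x12AB655E9A2CA55660B44D1E5C37B00159AA76FED00000010A11800000000001

def asset_id(denom_trace: str) -> int:
    """Map a denomination trace to a field element in [0, R)."""
    digest = hashlib.sha256(denom_trace.encode()).digest()
    return int.from_bytes(digest, "big") % R
```

The three example denomination traces above map to three distinct field elements, so the same underlying asset carried over different IBC paths gets a different asset ID, as ADR001 intends.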

# Notes, Nullifiers, and Trees

Transparent blockchains operate as follows: all participants maintain a copy of the (consensus-determined) application state. Transactions modify the application state directly, and participants check that the state changes are allowed by the application rules before coming to consensus on them.

On a shielded blockchain like Penumbra, however, the state is fragmented across all users of the application, as each user has a view only of their “local” portion of the application state recording their funds. Transactions update a user’s state fragments privately, and use a zero-knowledge proof to prove to all other participants that the update was allowed by the application rules.

Penumbra’s transaction model is derived from the Zcash shielded transaction design, with modifications to support multiple asset types, and several extensions to support additional functionality. The Zcash model is in turn derived from Bitcoin’s unspent transaction output (UTXO) model, in which value is recorded in transaction outputs that record the conditions under which they can be spent.

In Penumbra, value is recorded in notes, which function similarly to UTXOs. Each note specifies (either directly or indirectly) a type of value, an amount of value of that type, a spending key that authorizes spending the note’s value, and a unique nullifier derived from the note’s contents.

However, unlike UTXOs, notes are not recorded as part of the public chain state. Instead, the chain contains a note commitment tree, an incremental Merkle tree containing (public) commitments to (private) notes. Creating a note involves creating a new note commitment, and proving that it commits to a valid note. Spending a note involves proving that the spent note was previously included in the note commitment tree, using the spending key to demonstrate spend authorization, and revealing the nullifier, which prevents the same note from being spent twice.
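The note/nullifier state machine can be sketched as below. The hash choices, note structure, and flat commitment list are illustrative assumptions (the chain uses an incremental Merkle tree, and both checks happen inside zero-knowledge proofs rather than in the clear).

```python
# Minimal sketch of the note-commitment-and-nullifier state machine.
# Hashes and note structure are illustrative, not Penumbra's actual scheme,
# and the real checks are done in zero knowledge.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

class ShieldedPool:
    def __init__(self):
        self.commitments = []    # public: commitments to private notes
        self.nullifiers = set()  # public: nullifiers of spent notes

    def create_note(self, asset_id: bytes, amount: int, seed: bytes) -> bytes:
        """Record a (public) commitment to a (private) note."""
        commitment = h(b"commit", asset_id, amount.to_bytes(8, "big"), seed)
        self.commitments.append(commitment)
        return commitment

    def spend_note(self, commitment: bytes, seed: bytes) -> None:
        """Check prior inclusion, then reveal the nullifier exactly once."""
        if commitment not in self.commitments:
            raise ValueError("note was never created")
        nullifier = h(b"nullify", commitment, seed)
        if nullifier in self.nullifiers:
            raise ValueError("double spend")
        self.nullifiers.add(nullifier)
```

Spending never removes the commitment; the nullifier set alone prevents the same note from being spent twice, which is why observers cannot tell which commitment a given spend consumed.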

# Transactions

Transactions describe an atomic collection of changes to the ledger state. Each transaction consists of a sequence of descriptions for various actions1. Each description adds or subtracts (typed) value from the transaction’s value balance, which must net to zero.

Penumbra adapts Sapling’s Spend, which spends a note and adds to the transaction’s value balance, and Output, which creates a new note and subtracts from the transaction’s value balance, and adds many new descriptions to support additional functionality:

#### Transfers

• Spend descriptions spend an existing note, adding its value to the transaction’s value balance;

• Output descriptions create a new note, subtracting its value from the transaction’s value balance;

• Transfer descriptions transfer value out of Penumbra by IBC, consuming value from the transaction’s value balance, and producing an ICS20 FungibleTokenPacketData for the counterparty chain;

#### Governance

• CreateProposal descriptions are used to propose measures for on-chain governance and supply a deposit, consuming bonded stake from the transaction’s value balance and producing a new note that holds the deposit in escrow;

• WithdrawProposal descriptions redeem an escrowed proposal deposit, returning it to the transaction’s value balance and immediately withdrawing the proposal.

• Vote descriptions perform private voting for on-chain governance and declare a vote. This description leaves the value balance unchanged.

#### ZSwap

• Swap descriptions perform the first phase of ZSwap, consuming tokens of one type from a transaction’s value balance, burning them, and producing a swap commitment for use in the second stage;

• Sweep descriptions perform the second phase of ZSwap, allowing a user who burned tokens of one type to mint tokens of the other type at the chain-specified clearing price, and adding the new tokens to a transaction’s value balance;

#### Market-making

• OpenPosition descriptions open concentrated liquidity positions, consuming value of the traded types from the transaction’s value balance and adding the specified position to the AMM state;

• ClosePosition descriptions close concentrated liquidity positions, removing the specified position from the AMM state and adding the value of the position, plus any accumulated fees or liquidity rewards, to the transaction’s value balance.

Each transaction also contains a fee specification, which is always transparently encoded. The value balance of all of a transaction’s actions, together with the fees, must net to zero.

1

Note that like Zcash Orchard, we use the term “action” to refer to one of a number of possible state updates; unlike Orchard, we do not attempt to conceal which types of state updates are performed, so our Action is an enum.
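The net-to-zero rule above can be sketched as a per-asset signed sum. The action names follow the descriptions listed earlier; which actions add versus subtract is the point of the sketch, not an exhaustive action set.

```python
# Sketch of per-transaction value balancing: each action contributes a
# signed, typed amount, and a valid transaction nets to zero per asset ID.
from collections import defaultdict

def value_balance(actions):
    """actions: iterable of (kind, asset_id, amount) tuples."""
    balance = defaultdict(int)
    for kind, asset_id, amount in actions:
        if kind == "spend":                          # adds value
            balance[asset_id] += amount
        elif kind in ("output", "fee", "transfer"):  # subtracts value
            balance[asset_id] -= amount
        else:
            raise ValueError(f"unknown action kind: {kind}")
    return dict(balance)

def is_balanced(actions):
    return all(v == 0 for v in value_balance(actions).values())
```

A transaction spending a 100-unit note into a 90-unit output plus a 10-unit fee balances; dropping the fee leaves 10 units unaccounted for and the transaction is invalid.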

# Governance

Penumbra supports on-chain governance with delegated voting. Validators’ votes are public and act as default votes for their entire delegation pool, while delegators’ votes are private, and override the default vote provided by their validator.

Votes are the same as on the Cosmos Hub: Yes, No, NoWithVeto, and Abstain. NoWithVeto is the same as No but also votes that the proposer should lose their deposit. The intended cultural norm is that No should be used to indicate disagreement with good-faith proposals and NoWithVeto should be used to deter spam proposals.

## Proposals

Penumbra users can propose votes by escrowing a minimum amount of PEN. They do this by creating a transaction with a CreateProposal description, which consumes some amount of PEN from the transaction’s balance, and creates a new escrow note with the same amount. The note is escrowed in the sense that it is recorded separately and is not included in the note commitment tree until voting completes.

Proposals can either be normal or emergency proposals. In either case, the voting period begins immediately, in the next block after the proposal has been committed to the chain. Normal proposals have a fixed-length voting period, while emergency proposals are accepted as soon as a 2/3 majority of the stake is reached.

Because validators provide default votes for their delegation pool, an emergency proposal can in principle be accepted immediately, without any input from delegators. This allows time-critical resolution of emergencies (e.g., deploying a zero-day hotfix); the 2/3 majority of the stake required is already sufficient to arbitrarily rewrite the chain state.

Proposals can also be withdrawn by their proposer prior to the end of the voting period. This is done by creating a transaction with a WithdrawProposal description, and allows the community to iterate on proposals as the (social) governance process occurs. For instance, a chain upgrade proposal can be withdrawn and re-proposed with a different source hash if a bug is discovered while upgrade voting is underway. Withdrawn proposals cannot be accepted, even if the vote would have passed, but they can be vetoed.1

## Voting

Stakeholder votes are of the form $(y, n, a, v)$, representing the weights for yes, no, abstain, and veto respectively. Most stakeholders would presumably set all but one weight to $0$. Stakeholders vote by proving ownership of some amount of bonded stake (their voting power) prior to the beginning of the voting period.

To do this, they create a transaction with a Vote description. This description identifies the validator $v$ and the proposal, proves spend authority over a note recording an amount $x$ of dPEN(v), and reveals the note’s nullifier. Finally, it proves vote consistency (the vote weights sum to the staked amount, $y + n + a + v = x$), produces a new note with the same amount $x$ of dPEN(v), and includes an encryption of the vote weights to the validators’ decryption key.

The proof statements in a Vote description establishing spend authority over the note are almost identical to those in a Spend description. However, there are two key differences. First, rather than proving that the note was included in a recent note commitment tree state, it always uses the root of the note commitment tree at the time that voting began, establishing that the note was not created after voting began. Second, rather than checking the note’s nullifier against the global nullifier set and marking it as spent, the nullifier is checked against a snapshot of the nullifier set at the time that voting began (establishing that it was unspent then), as well as against a per-proposal nullifier set (establishing that it has not already been used for voting). In other words, instead of marking that the note has been spent in general, we only mark it as having been spent in the context of voting on a specific proposal.

This change allows multiple proposals to be voted on concurrently, at the cost of linkability. While the same note can be used to vote on multiple proposals, those votes, as well as the subsequent spend of the note, will have the same nullifier and thus be linkable to each other. However, the Vote descriptions are shielded, so an observer only learns that two opaque votes were related to each other.

We suggest that wallets roll over the note the first time it is used for voting by creating a transaction with Vote, Spend, and Output descriptions. This mitigates linkability between Vote and Spend descriptions, and means that votes on any proposals created after the first vote are unlinkable from prior votes.

At the end of each epoch, validators collect the encrypted votes from each delegation pool, aggregate the encrypted votes into encrypted tallies and decrypt the tallies. These intermediate tallies are revealed, because it is not possible to batch value flows over time intervals longer than one epoch. In practice, this provides a similar dynamic as existing (transparent) on-chain governance schemes, where tallies are public while voting is ongoing.

At the end of the voting period, the per-epoch tallies are summed. For each validator $v$, the votes for each option are summed to determine the portion of the delegation pool that voted; the validator’s vote acts as the default vote for the rest of the delegation pool. Finally, these per-validator subtotals are multiplied by the voting power adjustment function $\theta_v(e)$ to obtain the final vote totals.
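The final tally can be sketched as follows, under assumed data shapes: decrypted per-option weight subtotals, public pool sizes and default votes, and a scalar `theta` standing in for the voting power adjustment.

```python
# Sketch of the end-of-voting tally. The input shape (per-validator decrypted
# weight totals, public pool size, public default vote, scalar adjustment
# factor theta) is an assumption for illustration.
def tally(validators):
    totals = {"yes": 0.0, "no": 0.0, "abstain": 0.0, "veto": 0.0}
    for v in validators:
        voted = sum(v["votes"].values())        # stake that voted directly
        remainder = v["pool_size"] - voted      # falls back to default vote
        subtotal = dict(v["votes"])
        subtotal[v["default"]] = subtotal.get(v["default"], 0.0) + remainder
        for option, weight in subtotal.items():
            totals[option] += weight * v["theta"]   # apply power adjustment
    return totals
```

For a validator with a pool of 100 whose delegators cast 30 units of "yes" votes, the remaining 70 units follow the validator's default vote.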

If the vote was not vetoed, the escrowed note from the CreateProposal description is included in the note commitment tree, so that it can be spent by the proposer. Otherwise, it is not, and the funds are burned.

1

If withdrawing a proposal halted on-chain voting immediately, the escrow mechanism would not be effective at deterring spam, since the proposer could yank their proposal at the last minute prior to losing their deposit. However, at the UX level, withdrawn proposals can be presented as though voting were closed, since validators’ default votes are probably sufficient for spam deterrence.

# Staking and Delegation

As described in the overview, integrating privacy with proof-of-stake poses significant challenges. Penumbra sidesteps these challenges using a novel mechanism that eliminates staking rewards entirely, treating unbonded and bonded stake as separate assets, with an epoch-varying exchange rate that prices in what would be a staking reward in other systems. This mechanism ensures that all delegations to a particular validator are fungible, and can be represented by a single delegation token representing a share of that validator’s delegation pool, in effect a first-class staking derivative.

This section describes the staking and delegation mechanism in detail:

# Staking Tokens

Penumbra’s staking token, denoted PEN, represents units of unbonded stake.

Rather than treat bonded and unbonded stake as being two states of the same token, Penumbra records stake bonded with a particular validator with a delegation token, denoted dPEN. Conversion between PEN and dPEN occurs with an epoch-varying exchange rate that prices in what would be a staking reward in other systems. This ensures that all delegations to a particular validator are fungible, and can be represented by a single token representing an ownership share of that validator’s delegation pool.

Stake bonded with different validators is not fungible, as different validators may have different commission rates and different risk of misbehavior. Hence dPEN is a shorthand for a class of assets (one per validator), rather than a single asset. dPEN bonded to a specific validator can be denoted dPEN(v) when it is necessary to be precise.

Each flavor of dPEN is its own first-class token, and like any other token can be transferred between addresses, traded, sent over IBC, etc. Penumbra itself does not attempt to pool risk across validators, but nothing prevents third parties from building stake pools composed of these assets.

The base reward rate for bonded stake is a parameter $r_e$ indexed by epoch. This parameter can be thought of as a “Layer 1 Base Operating Rate”, or “L1BOR”, in that it serves as a reference rate for the entire chain. Its value is set on a per-epoch basis by a formula involving the ratio of bonded and unbonded stake, increasing when there is relatively less bonded stake and decreasing when there is relatively more. This formula should be decided and adjusted by governance.

Each validator declares a commission percentage $c_{v,e}$, also indexed by epoch, which is subtracted from the base reward rate to get a validator-specific reward rate $r_{v,e} = (1 - c_{v,e})\,r_e$.

The base exchange rate between PEN and dPEN is given by the function $\psi(e) = \prod_{0 \le i < e} (1 + r_i)$, which measures the cumulative depreciation of stake PEN relative to the delegation token dPEN from genesis up to epoch $e$. However, because dPEN is not a single asset but a family of per-validator assets, this is only a base rate.

The actual exchange rate between stake PEN and validator $v$’s delegation token dPEN(v) accounts for commissions by substituting the validator-specific rate $r_{v,i}$ in place of the base rate to get $\psi_v(e) = \prod_{0 \le i < e} (1 + r_{v,i})$.

Delegating $x$ PEN to validator $v$ at epoch $e$ results in $x / \psi_v(e)$ dPEN. Undelegating $y$ dPEN(v) from validator $v$ at epoch $m$ results in $y\,\psi_v(m)$ PEN. Thus, delegating at epoch $e$ and undelegating at epoch $m$ results in a return of $\psi_v(m) / \psi_v(e)$, i.e., the staking reward compounded only over the period during which the stake was bonded.

Discounting newly bonded stake by the cumulative depreciation of unbonded stake since genesis means that all bonded stake can be treated as if it had been bonded since genesis, which allows newly unbonded stake to always be inflated by the cumulative appreciation since genesis. This mechanism avoids the need to track the age of any particular delegation to compute its rewards, and makes all shares of each validator’s delegation pool fungible.
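The exchange-rate mechanics can be sketched directly: `psi` is the cumulative product of `(1 + rate)` over past epochs, delegation divides by it, and undelegation multiplies by it.

```python
# Sketch of the epoch-indexed exchange rates used for (un)delegation.
def psi(rates):
    """Cumulative exchange rate over epochs with the given per-epoch rates."""
    acc = 1.0
    for r in rates:
        acc *= 1 + r
    return acc

def delegate(amount_pen, validator_rates_to_date):
    """x PEN delegated at epoch e yields x / psi_v(e) dPEN(v)."""
    return amount_pen / psi(validator_rates_to_date)

def undelegate(amount_dpen, validator_rates_to_date):
    """y dPEN(v) undelegated at epoch m yields y * psi_v(m) PEN."""
    return amount_dpen * psi(validator_rates_to_date)
```

Delegating 100 PEN at epoch 10 and undelegating at epoch 20 returns the reward compounded only over the ten bonded epochs, exactly as the return formula above describes.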

# Validator Rewards and Fees

Validators declare a commission percentage $c_{v,e}$, which determines the spread between the base reward rate $r_e$ and the reward rate for their delegators, $r_{v,e} = (1 - c_{v,e})\,r_e$, or equivalently $r_e - r_{v,e} = c_{v,e}\,r_e$.

Validator rewards are distributed in the first block of each epoch. In epoch $e$, a validator $v$ whose delegation pool has size $y_v$ dPEN(v) receives a commission of size $c_{v,e}\, r_e\, y_v\, \psi_v(e-1)$ PEN, issued to the validator’s address.

To see why this is the reward amount, suppose validator $v$ has a delegation pool of size $y_v$ dPEN(v). In epoch $e - 1$, the value of the pool is $y_v \psi_v(e-1)$ PEN. In epoch $e$, the base reward rate causes the value of the pool to increase to $y_v \psi_v(e-1)(1 + r_e)$.

Splitting $(1 + r_e)$ as $(1 + r_{v,e}) + c_{v,e}\, r_e$, this becomes $y_v \psi_v(e-1)(1 + r_{v,e}) + y_v \psi_v(e-1)\, c_{v,e}\, r_e$.

The value in the first term, $y_v \psi_v(e-1)(1 + r_{v,e})$, corresponds to the $(1 + r_{v,e})$ portion, and accrues to the delegators. Since $\psi_v(e) = \psi_v(e-1)(1 + r_{v,e})$, this is exactly $y_v \psi_v(e)$, the new PEN-denominated value of the delegation pool.

The value in the second term, $y_v \psi_v(e-1)\, c_{v,e}\, r_e$, corresponds to the $c_{v,e}\, r_e$ portion, and accrues to the validator as commission. Validators can self-delegate the resulting PEN or use it to fund their operating expenses.
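A quick numeric check of this split, with illustrative values for the rates, pool size, and prior exchange rate (all assumed):

```python
# Numeric check: (1 + r_e) splits into (1 + r_ve), accruing to delegators,
# plus c_ve * r_e, accruing to the validator. All figures are illustrative.
r_e = 0.0001              # base reward rate for the epoch (assumed)
c_ve = 0.1                # validator commission (assumed)
r_ve = (1 - c_ve) * r_e   # validator-specific reward rate

pool_dpen = 1_000_000     # delegation pool size in dPEN(v) (assumed)
psi_prev = 1.05           # psi_v(e-1), prior exchange rate (assumed)

value_before = pool_dpen * psi_prev
value_after = value_before * (1 + r_e)

delegator_share = value_before * (1 + r_ve)  # new pool value, y_v * psi_v(e)
commission = value_before * c_ve * r_e       # issued to the validator

# The two terms exactly account for the pool's appreciation.
assert abs(value_after - (delegator_share + commission)) < 1e-6
```

Here the epoch's appreciation of 105 PEN splits into 94.5 PEN retained by the delegation pool and 10.5 PEN of commission.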

Transaction fees are denominated in PEN and are burned, so that the value of the fees accrues equally to all stake holders.

## TODO

• allow transaction fees in dPEN with appropriate price adjustment, but only in transactions (e.g., undelegations, voting) that otherwise reveal the flavor of dPEN, so that there is no additional distinguisher?

# Voting Power

The size of each validator’s delegation pool (i.e., the amount of dPEN of that validator’s flavor) is public information, and determines the validator’s voting power. However, these sizes cannot be used directly, because they are based on validator-specific conversion rates and are therefore incommensurate.

Voting power is calculated using the adjustment function $\theta_v(e) = \psi_v(e) / \psi(e)$, so that a validator whose delegation pool has size $y_v$ dPEN(v) in epoch $e$ has voting power $y_v\, \theta_v(e)$.

The validator-specific reward rate $r_{v,e}$ adjusts the base reward rate $r_e$ to account for the validator’s commission. Since $r_{v,e} = (1 - c_{v,e})\, r_e$ and $\psi_v(e) = \prod_{0 \le i < e} (1 + r_{v,i})$, the adjustment function accounts for the compounded effect of the validator’s commission on the size of the delegation pool.
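A sketch of commensurable voting power, taking the adjustment to be the ratio of the validator's cumulative exchange rate to the base exchange rate (a reconstruction consistent with the surrounding text, not a quoted definition):

```python
# Sketch: make per-validator pool sizes commensurable by adjusting with the
# ratio of the validator's exchange rate to the base exchange rate.
def cumulative(rates):
    """Cumulative exchange rate: product of (1 + r) over past epochs."""
    acc = 1.0
    for r in rates:
        acc *= 1 + r
    return acc

def voting_power(pool_dpen, base_rates, validator_rates):
    """pool size (dPEN(v)) times theta_v(e) = psi_v(e) / psi(e)."""
    theta = cumulative(validator_rates) / cumulative(base_rates)
    return pool_dpen * theta
```

A zero-commission validator's rates match the base rates, so its adjustment is 1 and its pool counts at face value; any commission compounds into an adjustment below 1.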

# Delegation

The delegation process bonds stake to a validator, converting stake PEN to delegation tokens dPEN. Delegations may be performed in any block, but only take effect in the next epoch.

Delegations are accomplished by creating a transaction with a Delegate description. This specifies a validator $v$, consumes $x$ PEN from the transaction’s balance, produces a new shielded note with $x / \psi_v(e)$ dPEN(v) associated to that validator, and includes an encryption of the delegation amount $x$ to the validators’ shared decryption key. Here $e$ is the index of the next epoch, when the delegation will take effect.

In the last block of epoch $e - 1$, the validators sum the encrypted bonding amounts from all delegate descriptions for validator $v$ in the entire epoch to obtain an encryption of the total delegation amount, then decrypt it to obtain the total delegation amount without revealing any individual transaction’s delegation amount. These total amounts are used to update the size of each validator’s delegation pool for the next epoch.

Revealing only the total inflows to the delegation pool in each epoch helps avoid linkability. For instance, if the size of each individual transaction’s delegation were revealed, a delegation of size $x$ followed by an undelegation of size $y$ could be correlated if an observer notices that there are some epochs $e_1, e_2$ so that $y = x\, \psi_v(e_2) / \psi_v(e_1)$.

This risk is still present when only the total amount – the minimum disclosure required for consensus – is revealed, because there may be few (or no) other delegations to the same validator in the same epoch. Some care should be taken in client implementations and user behavior to mitigate the effects of this information disclosure, e.g., by splitting delegations into multiple transactions in different epochs involving randomized sub-portions of the stake. However, the best mitigation would simply be to have many users.

# Undelegation

The undelegation process unbonds stake from a validator, converting delegation tokens dPEN to stake PEN. Undelegations may be performed in any block, but only settle after the undelegation has exited the unbonding queue.

The unbonding queue is a FIFO queue allowing only a limited amount of stake to be unbonded in each epoch, according to an unbonding rate selected by governance. Undelegations are inserted into the unbonding queue in FIFO order. Unlike delegations, where only the total amount of newly bonded stake is revealed, undelegations reveal the precise amount of newly unbonded stake, allowing the unbonding queue to function.

Undelegations are accomplished by creating a transaction with an Undelegate description. This description has different behavior depending on whether or not the validator was slashed.

In the unslashed case, the undelegate description spends a note with value $y$ dPEN(v), reveals $y$, and produces $y\, \psi_v(e)$ PEN for the transaction’s balance, where $e$ is the index of the current epoch. However, the nullifiers revealed by undelegate descriptions are not immediately included in the nullifier set, and new notes created by a transaction containing an undelegate description are not immediately included in the note commitment tree. Instead, the transaction is placed into the unbonding queue to be applied later. In the first block of each epoch, transactions are applied if the corresponding validator remains unslashed, until the unbonding limit is reached.

If a validator is slashed, any undelegate transactions currently in the unbonding queue are discarded. Because the nullifiers for the notes those transactions spent were not included in the nullifier set, the notes remain spendable, allowing a user to create a new undelegation description.

Undelegations from a slashed validator are settled immediately. The undelegate description spends a note with value $y$ dPEN(v) and produces $(1 - p)\, y\, \psi_v(e_s)$ PEN, where $p$ is the slashing penalty and $e_s$ is the epoch at which the validator was slashed. The remaining value, $p\, y\, \psi_v(e_s)$, is burned.

Because pending undelegations from a slashed validator are discarded without applying their nullifiers, those notes can be spent again in a post-slashing undelegation description. This causes linkability between the discarded undelegations and the post-slashing undelegations, but this is not a concern because slashing is a rare and unplanned event which already imposes worse losses on delegators.
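The queue mechanics can be sketched as follows: a per-epoch cap, FIFO order, and discarding of pending undelegations whose validator was slashed. The data layout is an assumption for illustration.

```python
# Sketch of the unbonding queue: FIFO, capped per epoch, with pending
# undelegations from slashed validators discarded (their notes remain
# spendable, so they can be re-undelegated post-slashing).
from collections import deque

class UnbondingQueue:
    def __init__(self, per_epoch_limit):
        self.limit = per_epoch_limit
        self.queue = deque()   # (validator, amount) in FIFO order

    def push(self, validator, amount):
        self.queue.append((validator, amount))

    def process_epoch(self, slashed):
        """Run at the start of an epoch: apply queued undelegations up to
        the cap, dropping any whose validator is in the slashed set."""
        applied, unbonded = [], 0
        while self.queue:
            validator, amount = self.queue[0]
            if validator in slashed:
                self.queue.popleft()   # discarded; note stays spendable
                continue
            if unbonded + amount > self.limit:
                break                  # FIFO head blocks; wait an epoch
            self.queue.popleft()
            applied.append((validator, amount))
            unbonded += amount
        return applied
```

Note the FIFO head-of-line behavior: an undelegation that would exceed the epoch's cap waits, and everything queued behind it waits too.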

# Example Staking Dynamics

To illustrate the dynamics of this system, consider a toy scenario with three delegators, Alice, Bob, and Charlie, and two validators, Victoria and William. Tendermint consensus requires at least four validators and no one party controlling more than 1/3 of the stake, but this example uses only a few parties just to illustrate the dynamics.

For simplicity, the base reward rates and commission rates are fixed over all epochs at and , . The PEN and dPEN holdings of participant are denoted by , , etc., respectively.

Alice starts with dPEN of Victoria’s delegation pool, Bob starts with dPEN of William’s delegation pool, and Charlie starts with unbonded PEN.

• At genesis, Alice, Bob, and Charlie respectively have fractions , , and of the total stake, and fractions , , of the total voting power.

• At epoch , Alice, Bob, and Charlie’s holdings remain unchanged, but their unrealized notional values have changed.

• Victoria charges zero commission, so . Alice’s dPEN(v) is now worth PEN.
• William charges commission, so . Bob’s dPEN(w) is now worth , and William receives PEN.
• William can use the commission to cover expenses, or self-delegate. In this example, we assume that validators self-delegate their entire commission, to illustrate the staking dynamics.
• William self-delegates PEN, to get dPEN in the next epoch, epoch .
• At epoch :

• Alice’s dPEN(v) is now worth PEN.
• Bob’s dPEN(w) is now worth PEN.
• William’s self-delegation of accumulated commission has resulted in dPEN(w).
• Victoria’s delegation pool remains at size dPEN(v). William’s delegation pool has increased to dPEN(w). However, their respective adjustment factors are now and , so the voting powers of their delegation pools are respectively and .
• The slight loss of voting power for William’s delegation pool occurs because William self-delegates rewards with a one epoch delay, thus missing one epoch of compounding.
• Charlie’s unbonded PEN remains unchanged, but its value relative to Alice and Bob’s bonded stake has declined.
• William’s commission transfers stake from Bob, whose voting power has slightly declined relative to Alice’s.
• The distribution of stake between Alice, Bob, Charlie, and William is now , , , respectively. The distribution of voting power is , , , respectively.
• Charlie decides to bond his stake, split evenly between Victoria and William, to get dPEN(v) and dPEN(w).
• At epoch :

• Charlie now has dPEN(v) and dPEN(w), worth PEN.
• For the same amount of unbonded stake, Charlie gets more dPEN(w) than dPEN(v), because the exchange rate prices in the cumulative effect of commission since genesis, but Charlie isn’t charged for commission during the time he didn’t delegate to William.
• William’s commission for this epoch is now PEN, up from PEN in the previous epoch.
• The distribution of stake between Alice, Bob, Charlie, and William is now , , , respectively. Because all stake is now bonded, except William’s commission for this epoch, which is insignificant, the distribution of voting power is identical to the distribution of stake.
• At epoch :

• Alice’s dPEN(v) is now worth PEN.
• Bob’s dPEN(w) is now worth PEN.
• Charlie’s dPEN(v) is now worth PEN, and his dPEN(w) is now worth PEN.
• William’s self-delegation of accumulated commission has resulted in dPEN(w), worth PEN.
• The distribution of stake and voting power between Alice, Bob, Charlie, and William is now , , , respectively.

This scenario was generated with a model in this Google Sheet.
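To make the mechanics concrete despite the stripped numbers, here is an illustrative model of the exchange-rate compounding that drives the scenario (every parameter value is invented, not taken from the spreadsheet):

```python
# Illustrative re-creation of the exchange-rate mechanics behind the toy
# scenario. Each validator's rate compounds the base reward rate r,
# discounted by its commission c; a delegator's dPEN is valued by
# multiplying by the current exchange rate.

def exchange_rates(epochs: int, r: float, c: float) -> list:
    psi = [1.0]                          # psi(0) = 1 at genesis
    for _ in range(epochs):
        psi.append(psi[-1] * (1.0 + r * (1.0 - c)))
    return psi

r = 0.0001                               # base reward rate per epoch (invented)
psi_v = exchange_rates(3, r, c=0.0)      # "Victoria": zero commission
psi_w = exchange_rates(3, r, c=0.2)      # "William": 20% commission (invented)

alice_dpen = bob_dpen = 10_000.0
print(round(alice_dpen * psi_v[2], 4))   # Alice's dPEN(v) valued at epoch 2
print(round(bob_dpen * psi_w[2], 4))     # Bob's dPEN(w) valued at epoch 2
```

Bob's delegation compounds more slowly than Alice's because the commission is skimmed off before compounding, matching the dynamics in the bullets above.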

# Transfers into Penumbra

IBC transfer mechanics are specified in ICS20. The FungibleTokenPacketData packet describes the transfer:

FungibleTokenPacketData {
    denomination: string,
    amount: uint256,
    sender: string,
    receiver: string,
}


The sender and receiver fields are used to specify the sending account on the source chain and the receiving account on the destination chain. However, for inbound transfers, the destination chain is Penumbra, which has no accounts. Instead, token transfers into Penumbra create an OutputDescription describing a new shielded note with the given amount and denomination, and insert an encoding of the description itself into the receiver field.
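A sketch of what such an inbound packet might look like (encoding the output description as base64-of-JSON is a placeholder assumption, not Penumbra's wire format, and the stub fields are hypothetical):

```python
import base64
import json

# Sketch of an inbound ICS20 transfer targeting Penumbra: since Penumbra
# has no accounts, the receiver field carries an encoding of the output
# description that should be created on the Penumbra side.

output_description_stub = {"note_commitment": "<commitment>", "ephemeral_key": "<epk>"}
packet = {
    "denomination": "uatom",
    "amount": 1_000_000,
    "sender": "cosmos1exampleaddress",   # source-chain account (made up)
    "receiver": base64.b64encode(json.dumps(output_description_stub).encode()).decode(),
}

# The destination chain decodes the receiver field back into the description:
print(json.loads(base64.b64decode(packet["receiver"])) == output_description_stub)  # → True
```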

# ZSwap

Penumbra provides private, sealed-bid batch swaps using ZSwap. ZSwap allows users to privately swap between any pair of assets. Individual swaps do not reveal trade amounts. Instead, all swaps in each block are executed in a single batch. Only the total amount in each batch is revealed, and only after the batch has been finalized. This prevents front-running and provides better execution, but also provides long-term privacy for individual swaps. Users can also provide liquidity by anonymously creating Uniswap-v3-style concentrated liquidity positions. These positions reveal the amount of liquidity and the bounds in which it is concentrated, but are not otherwise linked to any identity, so that (with some care) users can privately approximate arbitrary trading functions without revealing their specific views about prices.

## Frequent batch auctions as a market design response

Budish, Cramton, and Shim (2015) analyze trading in traditional financial markets using the predominant continuous-time limit order book market design, and find that high-frequency trading arises as a response to mechanical arbitrage opportunities created by flawed market design:

These findings suggest that while there is an arms race in speed, the arms race does not actually affect the size of the arbitrage prize; rather, it just continually raises the bar for how fast one has to be to capture a piece of the prize… Overall, our analysis suggests that the mechanical arbitrage opportunities and resulting arms race should be thought of as a constant of the market design, rather than as an inefficiency that is competed away over time.

— Eric Budish, Peter Cramton, John Shim, The High-Frequency Trading Arms Race: Frequent Batch Auctions as a Market Design Response

Because these mechanical arbitrage opportunities arise from the market design even in the presence of symmetrically observed public information, they do not improve prices or produce value, but create arbitrage rents that increase the cost of liquidity provision.1 Instead, the authors suggest changing from a continuous-time model to a discrete-time model and performing frequent batch auctions, executing all orders that arrive in the same discrete time step in a single batch with a uniform price.

In the blockchain context, while projects like Uniswap have demonstrated the power and success of CFMMs for decentralized liquidity provision, they have also highlighted the mechanical arbitrage opportunities created by the mismatch between continuous-time market designs and the state update model of the underlying blockchain, which operates in discrete batches (blocks). Each prospective state update is broadcast to all participants to be queued in the mempool, but only committed as part of the next block, and while it is queued for inclusion in the consensus state, other participants can bid to manipulate its ordering relative to other state updates (for instance, front-running a trade).

This manipulation is enabled by two features:

• Although trades are always committed in a batch (a block), they are performed individually, making them dependent on miners’ choices of the ordering of trades within a block;

• Because trades disclose the trade amount in advance of execution, all other participants have the information necessary to manipulate them.

ZSwap addresses the first problem by executing all swaps in each block in a single batch, first aggregating the amounts in each swap and then executing it against the CFMM as a single trade.

ZSwap addresses the second problem by having users encrypt their swap amounts using a homomorphic threshold decryption key controlled by the validators, who aggregate the encrypted amounts and decrypt only the batch trade. This prevents front-running prior to block inclusion, and provides privacy for individual trades (up to the size of the batch) afterwards.
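The homomorphic-aggregation idea can be illustrated with a toy additively homomorphic scheme (textbook Paillier with tiny, insecure parameters; Penumbra's actual construction is a threshold scheme over different primitives, so this shows only the ciphertext-sum idea):

```python
import math
import random

# Each user encrypts a swap amount; the validators multiply ciphertexts,
# which adds the plaintexts; only the batch total is ever decrypted.

p, q = 2003, 1999                       # demo primes (far too small for security)
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # Carmichael lambda(n)
mu = pow(lam, -1, n)                    # valid because we take g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

swap_amounts = [120, 75, 300]           # individual users' private inputs
batch_ct = 1
for ct in (encrypt(m) for m in swap_amounts):
    batch_ct = (batch_ct * ct) % n2     # ciphertext product = plaintext sum
print(decrypt(batch_ct))                # → 495: only the total is revealed
```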

Users do not experience additional trading latency from the batch auction design, because the batch auctions occur in every block, which is already the minimum latency for finalized state updates. A longer batch latency could help privacy for market-takers by increasing the number of swaps in each batch, but would impair other trading and impose a worse user experience.

## Private, sealed-bid batch auctions

A key challenge in the design of any private swap mechanism is that zero-knowledge proofs only allow privacy for user-specific state, not for global state, because they don’t let you prove statements about things that you don’t know. While users can prove that their user-specific state was updated correctly without revealing it, they cannot do so for other users’ state.

Instead of solving this problem, ZSwap sidesteps the need for users to do so. At a high level, swaps work as follows: users privately burn funds of one kind in a coordinated way that allows the chain to compute a per-block clearing price, and mint or burn liquidity pool reserves. Later, users privately mint funds of the other kind, proving that they previously burned funds and that the minted amount is consistent with the computed price and the burned amount. No interaction or transfer of funds between users or the liquidity pool reserves is required. Users do not transact with each other. Instead, the chain permits them to transmute one asset type to another, provably updating their private state without interacting with any other users’ private state.

This mechanism is described in more detail in the Sealed-Bid Batch Auctions section.

## Concentrated Liquidity

ZSwap uses the concentrated liquidity mechanism introduced in Uniswap v3 to make liquidity provision more capital-efficient. Uniswap v3’s insight is that existing constant-product market makers like Uniswap v2 allocate liquidity inefficiently, spreading it uniformly over the entire range of possible prices for a trading pair. Instead, allowing liquidity providers (LPs) to restrict their liquidity to a price range of their choosing provides a mechanism for market allocation of liquidity, concentrating it into the range of prices at which the assets in the pair actually trade.

Liquidity providers create positions that tie a quantity of liquidity to a specific price range. Within that price range, the position acts as a constant-product market maker with larger “virtual” reserves. At each price, the pool aggregates liquidity from all positions that contain that price, and tracks which positions remain (or become) active as the price moves. By creating multiple positions, LPs can approximate arbitrary trading functions.
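A minimal sketch of the per-price aggregation described above (ticks, fees, and the virtual-reserve formulas are omitted; names are illustrative):

```python
# Each position contributes its liquidity only within its price range;
# the pool's depth at a given price is the sum over positions whose
# range contains that price.

positions = [
    {"lower": 0.9, "upper": 1.1, "liquidity": 500.0},
    {"lower": 1.0, "upper": 1.5, "liquidity": 300.0},
    {"lower": 1.4, "upper": 2.0, "liquidity": 200.0},
]

def active_liquidity(price: float) -> float:
    return sum(p["liquidity"] for p in positions
               if p["lower"] <= price < p["upper"])

print(active_liquidity(1.05))  # first two ranges overlap here → 800.0
print(active_liquidity(1.45))  # last two ranges overlap here → 500.0
```

Creating several such positions lets an LP build a piecewise approximation of an arbitrary trading function, which is the approximation property the paragraph above refers to.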

In Uniswap v3, all positions are public and tied to the identity of the LP who created them. In ZSwap, positions are also public, but they are opened and closed anonymously. This means that while the aggregate of all LPs’ trading functions is public, the trading function of each individual LP is not, so LPs can approximate their desired trading functions without revealing their specific views about prices, as long as they take care to avoid linking their positions (e.g., by timing or amount).

Handling of concentrated liquidity is described in more detail in the Concentrated Liquidity section.

## Liquidity Mining and Fees

ZSwap supports liquidity mining, described in more detail in the Liquidity Mining section.

Because ZSwap positions are created anonymously, fees and liquidity mining rewards cannot be paid out to the position’s owner, as in Uniswap v3. Instead, fees and rewards accrue to the position, and are claimed when the position is closed. This process is described in the Closing Positions section.

1 On HFT, N.B. their footnote 5:

A point of clarification: our claim is not that markets are less liquid today than before the rise of electronic trading and HFT; the empirical record is clear that trading costs are lower today than in the pre-HFT era, though most of the benefits appear to have been realized in the late 1990s and early 2000s… Rather, our claim is that markets are less liquid today than they would be under an alternative market design that eliminated sniping.

# Sealed-Bid Batch Auctions

ZSwap’s sealed-bid batch auctions conceptually decompose into two parts: the market-making function itself, whose design is inspired by Uniswap v3, and the batching procedure, which allows multiple users’ swaps to be batched into a single trade executed against the trading function. This section focuses on the batching procedure, leaving the mechanics of the trading function for later.

A key challenge in the design of any private swap mechanism is that zero-knowledge proofs only allow privacy for user-specific state, not for global state, because they don’t let you prove statements about things that you don’t know. While users can prove that their user-specific state was updated correctly without revealing it, they cannot do so for other users’ state.

Instead of solving this problem, ZSwap sidesteps the need for users to do so. Rather than have users transact with each other, the chain permits them to transmute one asset type to another, provably updating their private state without interacting with any other users’ private state. This works as follows.

1. Users create transactions with Swap descriptions that privately burn their input assets and encrypt the amounts to the validators. This description identifies the trading pair , consumes of types from the transaction balance, and contains an encryption of the trade inputs and a swap commitment (analogous to a note commitment) that binds to and a nullifier. Rational traders will choose either or , i.e., trade one asset type for the other type, but the description provides two inputs so that different swap directions cannot be distinguished.

2. Validators sum the encrypted amounts of all swaps in the batch to obtain an encryption of the combined inputs , then decrypt to obtain the batch input without revealing any individual transaction’s input . Then they execute against the trading pool, updating the pool state and obtaining the effective (inclusive of fees) clearing prices and .

3. In a future block, users who created Swap descriptions obtain assets of the new types by creating a Sweep description. This description reveals the block height containing the Swap description, the trading pair , and the nullifier of a swap commitment. Then, it proves inclusion of a swap commitment with nullifier and trade inputs in the note commitment tree for that block, and proves that and . Finally, it adds

of types to the transaction’s balance (e.g., to be consumed by an Output description that creates a new note).
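The batch-and-claim flow can be sketched in a simplified, single-direction form (this is an illustrative model, not the spec's two-sided batch execution, and the function names are invented):

```python
# All swap inputs of asset 1 in a block are summed, executed as one trade
# against a constant-product pool, and each user later claims output pro
# rata at the common clearing price.

def execute_batch(reserve_1: float, reserve_2: float, inputs: list):
    delta_1 = sum(inputs)                                   # batch input of asset 1
    lambda_2 = reserve_2 * delta_1 / (reserve_1 + delta_1)  # batch output of asset 2
    clearing_price = lambda_2 / delta_1                     # asset 2 per asset 1
    claims = [clearing_price * d for d in inputs]           # per-user sweep amounts
    return (reserve_1 + delta_1, reserve_2 - lambda_2), claims

(new_r1, new_r2), claims = execute_batch(10_000.0, 10_000.0, [100.0, 300.0])
# Every user receives the same clearing price, so intra-block ordering
# (and therefore front-running within the batch) is moot.
print(round(sum(claims), 6), round(new_r2, 6))
```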

Although sweep descriptions do not reveal the amounts, or which swap’s outputs they claim, they do reveal the block and trading pair, so their anonymity set is considerably smaller than an ordinary shielded value transfer. For this reason, client software should create and submit a transaction with a sweep description immediately after observing that its transaction with a swap description was included in a block, rather than waiting for some future use of the new assets. This ensures that future shielded transactions involving the new assets are not trivially linkable to the swap.

This design reveals only the net flow across a trading pair in each batch, not the amounts of any individual swap. However, this provides no protection if the batch contains a single swap, and limited protection when there are only a few other swaps. This is likely to be an especially severe problem until the protocol has a significant base of active users, so it is worth examining the impact of amount disclosure and potential mitigations.

Assuming that all amounts are disclosed, an attacker could attempt to deanonymize parts of the transaction graph by tracing amounts, using strategies similar to those in An Empirical Analysis of Anonymity in Zcash. That research attempted to deanonymize transactions by analysing movement between Zcash’s transparent and shielded pools, with some notable successes (e.g., identifying transactions associated to the sale of stolen NSA documents). Unlike Zcash, where opt-in privacy means the bulk of the transaction graph is exposed, Penumbra does not have a transparent pool, and the bulk of the transaction graph is hidden, but there are several potential places to try to correlate amounts:

• IBC transfers into Penumbra are analogous to t2z transactions and disclose types and amounts and accounts on the source chain;
• IBC transfers out of Penumbra are analogous to z2t transactions and disclose types and amounts and accounts on the destination chain;
• Each unbonding discloses the precise amount of newly unbonded stake and the validator;
• Each epoch discloses the net amount of newly bonded stake for each validator;
• Liquidity pool deposits disclose the precise type and amount of newly deposited reserves;
• Liquidity pool withdrawals disclose the precise type and amount of newly withdrawn reserves.

The existence of the swap mechanism potentially makes correlation by amount more difficult, by expanding the search space from all amounts of one type to all combinations of all amounts of any tradeable type and all historic clearing prices. However, assuming rational trades may cut this search space considerably.1

## TODO

• consider the effect of multi-asset support on correlations more deeply.
1 Thanks to Guillermo Angeris for this observation.

# Primitives

Penumbra uses the following cryptographic primitives, described in the following sections:

• The Proof System section describes the choice of proving curve (BLS12-377) and proof system (Groth16, and potentially PLONK in the future);

• The decaf377 section describes decaf377, a parameterization of the Decaf construction defined over the BLS12-377 scalar field, providing a prime-order group that can be used inside or outside of a circuit;

• The Poseidon for BLS12-377 section describes parameter selection for an instantiation of Poseidon, a SNARK-friendly sponge construction, over the BLS12-377 scalar field;

• The Fuzzy Message Detection section describes a construction that allows users to outsource a probabilistic detection capability, allowing a third party to scan and filter the chain on their behalf, without revealing precisely which transactions are theirs.

• The Homomorphic Threshold Decryption section describes the construction used to batch flows of value across transactions.

• The Randomizable Signatures section describes decaf377-rdsa, a variant of the Zcash RedDSA construction instantiated over decaf377, used for binding and spend authorization signatures.

• The Key Agreement section describes decaf377-ka, an instantiation of Diffie-Hellman key agreement over decaf377.

# Proving Considerations

Penumbra needs SNARK proofs. Because the choice of proving system and proving curve can’t really be cleanly separated from the rest of the system choices (e.g., the native field of the proving system informs what embedded curve is available, and how circuit programming is done), large parts of the rest of the system design block on making a choice of proving system.

## Goals

1. Near-term implementation availability. We’d like to ship a useful product first, and iterate and expand it later.

2. High performance for fixed functionality. Penumbra intends to support fixed functionality initially; programmability is a good future goal but isn’t a near-term objective. The fixed functionality should have as high performance as possible.

3. Longer-term flexibility. The choice should ideally not preclude many future choices for later functionality. More precisely, it should not impose high switching costs on future choices.

4. Recursion capability. Penumbra doesn’t currently make use of recursion, but there are a lot of interesting applications it could be used for.

Setup ceremonies are worth avoiding for operational reasons, but not for security reasons: a decentralized setup procedure is sufficient for security.

## Options

Proof systems:

• Groth16:
• Pros: high performance, very small proofs, mature system
• Cons: requires a setup for each proof statement
• PLONK:
• Pros: universal setup, still fairly compact proofs, seems to be a point of convergence with useful extensions (plookup, SHPLONK, etc)
• Cons: bigger proofs, worse constants than Groth16
• Halo 2:
• Pros: no setup, arbitrary depth recursion
• Cons: bigger proof sizes, primary implementation for the Pallas/Vesta curves which don’t support pairings

Curve choices:

• BLS12-381:

• Pros: very mature, used by Sapling already
• Cons: no easy recursion
• BLS12-377:

• Pros: constructed as part of Zexe to support depth 1 recursion using a bigger parent curve, deployed in Celo, to be deployed in Zexe
• Cons: ?
• Pallas/Vesta:

• Pros: none other than support for Halo 2’s arbitrary recursion
• Cons: no pairings mean they cannot be used for any pairing-based SNARK

## Considerations

Although the choice of proof system (Groth16, Plonk, Halo, Pickles, …) is not completely separable from the choice of proving curve (e.g., pairing-based SNARKs require pairing-friendly curves), to the extent that it is, the choice of the proof system is relatively less important than the choice of proving curve, because it is easier to encapsulate.

The choice of proving curve determines the scalar field of the arithmetic circuit, which determines which curves are efficient to implement in the circuit, which determines which cryptographic constructions can be performed in the circuit, which determines what kind of key material the system uses, which propagates all the way upwards to user-visible details like the address format. While swapping out a proof system using the same proving curve can be encapsulated within an update to a client library, swapping out the proving curve is extremely disruptive and essentially requires all users to generate new addresses and migrate funds.

This means that, in terms of proof system flexibility, the Pallas/Vesta curves are relatively disadvantaged compared to pairing-friendly curves like BLS12-381 or BLS12-377, because they cannot be used with any pairing-based SNARK, or any other pairing-based construction. Realistically, choosing them is committing to using Halo 2.

Choosing BLS12-377 instead of BLS12-381 opens the possibility to do depth-1 recursion later, without meaningfully restricting the near-term proving choices. For this reason, BLS12-377 seems like the best choice of proving curve.

Penumbra’s approach is to first create a useful set of fixed functionality, and generalize to custom, programmable functionality only later. Compared to Sapling, there is more functionality (not just Spend and Output but Delegate, Undelegate, Vote, …), meaning that there are more proof statements. Using Groth16 means that each of these statements needs to have its own proving and verification key, generated through a decentralized setup.

So the advantage of a universal setup (as in PLONK) over per-statement setup (as in Groth16) would be:

1. The setup can be used for additional fixed functionality later;
2. Client software does not need to maintain distinct proving/verification keys for each statement.

With regard to (2), maintaining distinct proving/verification keys per statement is a definite downside of per-statement setup, but the impact is a little unclear. As a point of reference, the Sapling spend and output parameters are 48MB and 3.5MB respectively. The size of the spend circuit could be improved using a snark-friendly hash function.

With regard to (1), if functionality were being developed in many independent pieces, doing many setups would impose a large operational cost. But doing a decentralized setup for a dozen proof statements simultaneously does not seem substantially worse than doing a decentralized setup for a single proof statement. So the operational concern is related to the frequency of groups of new statements, not the number of statements in a group. Adding a later group of functionality is easy if the first group used a universal setup. But if it didn’t, the choice of per-statement setup initially doesn’t prevent the use of a universal setup later, as long as the new proof system can be implemented using the same curve.

Because Penumbra plans to have an initial set of fixed functionality, and performance is a concern, Groth16 seems like a good choice, and leaves the door open for a future universal SNARK. Using BLS12-377 opens the door to future recursion, albeit only of depth 1.

# The decaf377 group

Penumbra, like many other zero-knowledge protocols, requires a cryptographic group that can be used inside of an arithmetic circuit. This is accomplished by defining an “embedded” elliptic curve whose base field is the scalar field of the proving curve used by the proof system.

The Zexe paper, which defined BLS12-377, also defined (but did not name) a cofactor-4 Edwards curve defined over the BLS12-377 scalar field for exactly this purpose. However, non-prime-order groups are a leaky abstraction, forcing all downstream constructions to pay attention to correct handling of the cofactor. Although it is usually possible to do so safely, it requires additional care, and the optimal technique for handling the cofactor is different inside and outside of a circuit.

Instead, applying the Decaf construction to this curve gives decaf377, a clean abstraction that provides a prime-order group complete with hash-to-group functionality and whose encoding and decoding functions integrate validation. Although it imposes a modest additional cost in the circuit context, as discussed in Costs and Alternatives, the construction works the same way inside and outside of a circuit and imposes no costs for lightweight, software-only applications, making it a good choice for general-purpose applications.

## Implementation

A work-in-progress implementation of decaf377 can be found here.

# Costs and Alternatives

Arithmetic circuits have a different cost model than software. In the software cost model, software executes machine instructions, but in the circuit cost model, relations are certified by constraints. Unfortunately, while Decaf is a clearly superior choice in the software context, in the circuit context it imposes some additional costs, which must be weighed against its benefits.

At a high level, Decaf implements a prime-order group using a non-prime-order curve by constructing a group quotient. Internally, group elements are represented by curve points, with a custom equality check so that equivalent representatives are considered equal, an encoding function that encodes equivalent representatives as identical bitstrings, and a decoding function that only accepts canonical encodings of valid representatives.

To do this, the construction defines a canonical encoding on a Jacobi quartic curve mod its 2-torsion (a subgroup of size 4) by making two independent sign choices. Then, it uses an isogeny to transport this encoding from the Jacobi quartic to a target curve that will be used to actually implement the group operations. This target curve can be an Edwards curve or a Montgomery curve. The isogenies are only used for deriving the construction. In implementations, all of these steps are collapsed into a single set of formulas that perform encoding and decoding on the target curve.

In other words, one way to think about the Decaf construction is as some machinery that transforms two sign choices into selection of a canonical representative. Ristretto adds extra machinery to handle cofactor 8 by making an additional sign choice.

In the software cost model, where software executes machine instructions, this construction is essentially free, because the costs of both the Decaf and conventional Edwards encodings are dominated by the cost of computing an inverse or an inverse square root, and the cost of the sign checks is insignificant.

However, in the circuit cost model, where relations are certified by various constraints, this is no longer the case. On the one hand, certifying a square root or an inverse just requires checking that $y^2 = x$ or that $xy = 1$, which is much cheaper than actually computing $y = \sqrt{x}$ or $y = x^{-1}$. On the other hand, performing a sign check involves bit-constraining a field element, requiring hundreds of constraints.
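This asymmetry can be seen concretely in software (computing a modular square root takes a full exponentiation, using the $p \equiv 3 \pmod 4$ shortcut here, while certifying a claimed root is a single multiply-and-compare, the analogue of one multiplicative constraint):

```python
p = 10007                            # a demo prime with p % 4 == 3

def compute_sqrt(x: int) -> int:
    # Expensive: a full modular exponentiation, O(log p) multiplications.
    return pow(x, (p + 1) // 4, p)

def certify_sqrt(x: int, y: int) -> bool:
    # Cheap: one constraint-style relation, y^2 = x.
    return (y * y) % p == x % p

x = (1234 * 1234) % p                # guaranteed quadratic residue
y = compute_sqrt(x)
print(certify_sqrt(x, y))            # → True
```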

## Sign checks

The definition of which finite field elements are considered nonnegative is essentially arbitrary. The Decaf paper suggests three possibilities:

• using the least significant bit, defining $x$ to be nonnegative if the least absolute residue for $x$ is even;

• using the most significant bit, defining $x$ to be nonnegative if the least absolute residue for $x$ is in the range $0 \le x \le (p-1)/2$;

• for fields where $p \equiv 3 \pmod 4$, using the Legendre symbol, which distinguishes between square and nonsquare elements.

Using the Legendre symbol is very appealing in the circuit context, since it has an algebraic definition and, at least in the case of square elements, very efficient certification. For instance, if square elements are chosen to be nonnegative, then certifying that $x$ is nonnegative requires only one constraint, $x = y^2$. However, the reason for the restriction to $p \equiv 3 \pmod 4$ fields is that $x$ and $-x$ should have different signs, which can only be the case if $-1$ is nonsquare. Unfortunately, many SNARK-friendly curves, including BLS12-377, are specifically chosen so that $2^n \mid p - 1$ for as large a power as possible (e.g., $n = 47$ in the case of BLS12-377).

This leaves us with either the LSB or MSB choices. The least significant bit is potentially simpler for implementations, since it is actually the low bit of the encoding of $x$, while the most significant bit isn’t, because it measures from $(p-1)/2$, not a bit position $2^k$, so it seems to require a comparison or range check to evaluate. However, these choices are basically equivalent, in the following sense:

###### Lemma 1.

The most significant bit of $x$ is $1$ if and only if the least significant bit of $2x$ is $1$.

###### Proof.

The MSB of $x$ is $0$ if and only if $x \le (p-1)/2$, but this means that $2x$, which is even, is the least absolute residue, so the LSB of $2x$ is also $0$. On the other hand, the MSB of $x$ is $1$ if and only if $x > (p-1)/2$, i.e., if $x \ge (p+1)/2$, i.e., if $2x \ge p+1$. This means that the least absolute residue of $2x$ is $2x - p$; since $2x$ is even and $p$ is odd, this is odd and has LSB $1$.

This means that transforming an LSB check to an MSB check or vice versa requires multiplication by $2$ or by $1/2$, which costs at most one constraint.
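A quick numeric check of this MSB/LSB correspondence over a small prime field (illustrative only):

```python
# For every nonzero x mod p, the MSB convention on x (is x > (p-1)/2 ?)
# agrees with the LSB convention applied to 2x (is the least residue of
# 2x odd?).

p = 101

def msb(x: int) -> int:
    return 1 if x > (p - 1) // 2 else 0

def lsb_of_double(x: int) -> int:
    return (2 * x % p) & 1

print(all(msb(x) == lsb_of_double(x) for x in range(1, p)))  # → True
```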

Checking the MSB requires checking whether a value is in the range . Using Daira Hopwood’s optimized range constraints, the range check costs 2. However, the input to the range check is a bit-constrained unpacking of a field element, not a field element itself. This unpacking costs .

Checking the LSB is no less expensive, because although the check only examines one bit, the circuit must certify that the bit-encoding is canonical. This requires checking that the value is in the range , which also costs , and as before, the unpacking costs .

In other words, the cost of checking the sign of a field element is dominated by the bit-unpacking and range check, and is correspondingly cheaper when the field element is already bit-encoded for other reasons. These checks are the dominant cost for encoding and decoding, which both require two sign checks. Decoding from bits costs c. 400C, decoding from a field element costs c. 750C, and encoding costs c. 750C regardless of whether the output is encoded as bits or as a field element.

## Alternative approaches to handling cofactors

Decaf constructs a prime-order group whose encoding and decoding methods perform validation. A more conventional alternative approach is to use the underlying elliptic curve directly, restrict to its prime-order subgroup, and do subgroup validation separately from encoding and decoding. If this validation is done correctly, it provides a prime-order group. However, because validation is an additional step, rather than an integrated part of the encoding and decoding methods, this approach is necessarily more brittle, because each implementation must be sure to do both steps.

In the software cost model, there is no reason to use subgroup validation, because it is both more expensive and more brittle than Decaf or Ristretto. However, in the circuit cost model, there are cheaper alternatives, previously analyzed by Daira Hopwood in the context of Ristretto for JubJub (1, 2).

###### Multiplication by the group order.

The first validation method is to do a scalar multiplication and check that . Because the prime order is fixed, this scalar multiplication can be performed more efficiently using a hardcoded sequence of additions and doublings.

###### Cofactor preimage.

The second validation method provides a preimage in affine coordinates . Because the image of is the prime-order subgroup, checking that satisfies the curve equation and that checks that is in the prime-order subgroup.

In the software context, computing and computing cost about the same, although both are an order of magnitude more expensive than decoding. But in the circuit context, the prover can compute outside of the circuit and use only a few constraints to check the curve equation and two doublings. These costs round to zero compared to sign checks, so the validation is almost free.

The standard “compressed Edwards y” format encodes a point by the -coordinate and a sign bit indicating whether is nonnegative. In software, the cost of encoding and decoding are about the same, and dominated by taking an inverse square root. In circuits, the costs of encoding and decoding are also about the same, but they are instead dominated by a sign check that matches the sign of the recovered -coordinate with the supplied sign bit. This costs c. as above.

## Comparison and discussion

This table considers only approximate costs.

| Operation | decaf377 | Compressed Ed + Preimage |
|-----------|----------|--------------------------|
| Decode (from bits) | 400C | 400C |
| Decode (from field element) | 750C | 325C |
| Encode (to bits) | 750C | 750C |
| Encode (to field element) | 750C | 325C |

When decoding from or encoding to field elements, the marginal cost of Decaf compared to the compressed Edwards + cofactor preimage is an extra bit-unpacking and range check. While this effectively doubles the number of constraints, the marginal cost of c. is still small relative to other operations like a scalar multiplication, which at 6 constraints per bit is approximately .

When decoding from or encoding to bits, the marginal cost of Decaf disappears. When the input is already bit-constrained, Decaf’s first sign check can reuse the bit constraints, saving c. , but the compressed Edwards encoding must range-check the bits (which Decaf already does), costing c. extra. Similarly, in encoding, Decaf’s second sign check produces bit-constrained variables for free, while the compressed Edwards encoding spends c. bit-constraining and range-checking them.

However, in the software context, the prime-order validation check costs approximately 10x more than the cost of either encoding. Many applications require use of the embedded group both inside and outside of the circuit, and uses outside of the circuit may have additional resource constraints (for instance, a hardware token creating a signature authorizing delegated proving, etc.).

Performing validation as an additional, optional step also poses additional risks. While a specification may require it to be performed, implementations that skip the check will appear to work fine, and the history of invalid-point attacks (where implementations should, but don’t, check that point coordinates satisfy the curve equation) suggests that structuring validation as an integral part of encoding and decoding is a safer design. This may not be a concern for a specific application with a single, high-quality implementation that doesn’t make mistakes, but it’s less desirable for a general-purpose construction.

In summary, Decaf provides a construction that works the same way inside and outside of a circuit and integrates validation with the encoding, imposing only a modest cost for use in circuits and no costs for lightweight, software-only applications, making it a good choice for general-purpose constructions.

1

I have no idea whether this is common knowledge; I learned of this fact from its use in Mike Hamburg’s Ed448-Goldilocks implementation.

2

The value 73 is computed as:

from itertools import groupby

def cost(k):
    # cost contributed by one run of bits: k - 1 constraints, capped at 2
    return min(k - 1, 2)

def constraints(bound):
    # sum the costs over runs of 1-bits in the little-endian bit
    # decomposition of the bound, excluding the top bit
    costs = [cost(len(list(g)) + 1) for (c, g) in groupby(bound.bits()[:-1]) if c == 1]
    return sum(costs)

constraints(ZZ((q-1)/2))  # Sage; q is the field modulus



# Inverse Square Roots

As in the [internet-draft], the decaf377 functions are defined in terms of the following function, which computes the square root of a ratio of field elements, with the special behavior that if the input is nonsquare, it returns the square root of a related field element, to allow reuse of the computation in the hash-to-group setting.

• TODO: actually specify this procedure.

Fix as 2841681278031794617739547238867782961338435681360110683443920362658525667816. Then is a nonsquare -th root of unity.

• TODO: possibly change this value to be convenient for implementations, rather than something that’s convenient for SAGE;

Define sqrt_ratio_zeta(u,v) as:

• (True, ) if and are nonzero, and is square;
• (True, ) if is zero;
• (False, ) if is zero;
• (False, ) if and are nonzero, and is nonsquare.

Since is nonsquare, if is nonsquare, is square. Note that unlike the similar function in the ristretto255/decaf448 [internet-draft], this function does not make any claims about the sign of its output.
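The case analysis above can be modeled over a toy prime field. This is an illustrative sketch only: the prime `p`, the nonsquare `ZETA`, and the `p ≡ 3 (mod 4)` square-root shortcut are stand-ins, not the decaf377 parameters.

```python
# Toy model of sqrt_ratio_zeta over F_p with p ≡ 3 (mod 4); the real function
# operates in the BLS12-377 scalar field with the zeta constant fixed above.
p = 19      # toy prime, p ≡ 3 (mod 4), so sqrt(x) = x^((p+1)/4)
ZETA = 2    # a fixed nonsquare mod 19, standing in for zeta

def is_square(x):
    return x == 0 or pow(x, (p - 1) // 2, p) == 1

def sqrt(x):
    return pow(x, (p + 1) // 4, p)

def sqrt_ratio_zeta(u, v):
    if v == 0:
        return (False, 0)               # v is zero
    if u == 0:
        return (True, 0)                # u is zero
    r = u * pow(v, p - 2, p) % p        # r = u/v
    if is_square(r):
        return (True, sqrt(r))          # square root of u/v
    return (False, sqrt(ZETA * r % p))  # square root of zeta * u/v
```

The `(bool, value)` return shape mirrors the four cases in the list above; as noted, no claim is made about the sign of the output.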

• TODO: describe efficient implementation using 2020/1407

# Decoding

Decoding to a point works as follows:

1. Decode s_bytes to a field element , rejecting if the encoding is non-canonical. (If the input is already a field element in the circuit case, skip this step).

2. Check that is nonnegative, or reject (sign check 1).

3. .

4. .

5. (was_square, v) = sqrt_ratio_zeta(1, u_2 * u_1^2), rejecting if was_square is false.

6. if is negative (sign check 2).

7. .

The resulting coordinates are the affine Edwards coordinates of an internal representative of the group element.

• TODO: simplify formulas using the numerator instead of 1

# Encoding

Given a representative in extended coordinates , encoding works as follows.

1. .

2. (_ignored, v) = sqrt_ratio_zeta(1, u_1 * (-1 - d) * x^2).

3. (sign check 1).

4. .

5. .

6. Set s_bytes to be the canonical little-endian encoding of .

# Group Hash

Elligator can be applied to map a field element to a curve point. The map can be applied once to derive a curve point suitable for use with computational Diffie-Hellman (CDH) challenges, and twice to derive a curve point indistinguishable from random.

In the following section, and are the curve parameters and is a constant defined in the Inverse Square Roots section.

The Elligator map is applied as follows to a field element :

1. .

2. .

3. .

4. .

5. If a square root for exists, then the Jacobi quartic representation of the resulting point is . Else .

The resulting point can be converted from its Jacobi quartic representation to affine Edwards coordinates via:

For single-width hash-to-group (map_to_group_cdh), we apply the above map once. For double-width (map_to_group_uniform) we apply the map to two field elements and add the resulting curve points.

• TODOs: specify optimized version, need to match the choice of quadratic nonresidue with used in invsqrt.

# Test Vectors

## Small generator multiples

This table has hex-encodings of small multiples of the generator, from 0 (the identity) through 15:

| Multiple of generator | Hex encoding |
|-----------------------|--------------|
| 0 | 0000000000000000000000000000000000000000000000000000000000000000 |
| 1 | 0800000000000000000000000000000000000000000000000000000000000000 |
| 2 | b2ecf9b9082d6306538be73b0d6ee741141f3222152da78685d6596efc8c1506 |
| 3 | 2ebd42dd3a2307083c834e79fb9e787e352dd33e0d719f86ae4adb02fe382409 |
| 4 | 6acd327d70f9588fac373d165f4d9d5300510274dffdfdf2bf0955acd78da50d |
| 5 | 460f913e516441c286d95dd30b0a2d2bf14264f325528b06455d7cb93ba13a0b |
| 6 | ec8798bcbb3bf29329549d769f89cf7993e15e2c68ec7aa2a956edf5ec62ae07 |
| 7 | 48b01e513dd37d94c3b48940dc133b92ccba7f546e99d3fc2e602d284f609f00 |
| 8 | a4e85dddd19c80ecf5ef10b9d27b6626ac1a4f90bd10d263c717ecce4da6570a |
| 9 | 1a8fea8cbfbc91236d8c7924e3e7e617f9dd544b710ee83827737fe8dc63ae00 |
| 10 | 0a0f86eaac0c1af30eb138467c49381edb2808904c81a4b81d2b02a2d7816006 |
| 11 | 588125a8f4e2bab8d16affc4ca60c5f64b50d38d2bb053148021631f72e99b06 |
| 12 | f43f4cefbe7326eaab1584722b1b4860de554b23a14490a03f3fd63a089add0b |
| 13 | 76c739a33ffd15cf6554a8e705dc573f26490b64de0c5bd4e4ac75ed5af8e60b |
| 14 | 200136952d18d3f6c70347032ba3fef4f60c240d706be2950b4f42f1a7087705 |
| 15 | bcb0f922df1c7aa9579394020187a2e19e2d8073452c6ab9b0c4b052aa50f505 |

## Hash-to-group

This table has input field elements along with the s-coordinate of the output point in the subsequent row, i.e. the expected output of line 1 is on line 2:

Element
2873166235834220037104482467644394559952202754715866736878534498814378075613
7664634080946480262422274939177258683377350652451958930279692300451854076695
707087697291448463178823336344479808196630248514167087002061771344499604401
4040687156656275865790182426684295234932961916167736272791705576788972921292
6012393175004325154204026250961812614679561282637871416475605431319079196219
7255180635786717958849398836099816771666363291918359850790043721721417277258
6609366864829739556945402594963920739176902000316365292959221199804402230199
6875465950337820928985371259904709015074922314668494500948688901607284806973
8104727095953359991885740581210338996163641783979590819054178741553883359577

# Fuzzy Message Detection

By design, privacy-preserving blockchains like Penumbra don’t reveal metadata about the sender or receiver of a transaction. However, this means that users must scan the entire chain to determine which transactions relate to their addresses. This imposes large bandwidth and latency costs on users who do not maintain online replicas of the chain state, as they must “catch up” each time they come online by scanning all transactions that have occurred since their last activity.

Alternatively, users could delegate scanning to a third party, who would monitor updates to the chain state on their behalf and forward only a subset of those updates to the user. This is possible using viewing keys, as in Zcash, but viewing keys represent the capability to view all activity related to a particular address, so they can only be delegated to trusted third parties.

Instead, it would be useful to be able to delegate only a probabilistic detection capability. Analogous to a Bloom filter, this would allow a detector to identify all transactions related to a particular address (no false negatives), while also identifying unrelated transactions with some false positive probability. Unlike viewing capability, detection capability would not include the ability to view the details of a transaction, only a probabilistic association with a particular address. This is the problem of fuzzy message detection (FMD), analyzed by Beck, Len, Miers, and Green in their paper Fuzzy Message Detection, which proposes a cryptographic definition of fuzzy message detection and three potential constructions.

This section explores how Penumbra could make use of fuzzy message detection:

• In Sender and Receiver FMD, we propose a generalization of the original definition where the false positive probability is set by the sender instead of the receiver, and discuss why this is useful.

• In Constructing S-FMD, we realize the new definition using a variant of one of the original FMD constructions, and extend it in two ways:

1. to support arbitrarily precise detection with compact, constant-size keys;
2. to support diversified detection, allowing multiple, publicly unlinkable addresses to be scanned by a single detection key.

Unfortunately, these extensions are not mutually compatible, so we only use the first one, and record the second for posterity.

## Acknowledgements

Thanks to George Tankersley and Sarah Jamie Lewis for discussions on this topic (and each independently suggesting the modifications to realize S-FMD), to Gabrielle Beck for discussions about the paper and ideas about statistical attacks, and to Guillermo Angeris for pointers on analyzing information disclosure.

The goal of detection capability is to be able to filter the global stream of state updates into a local stream of state updates that includes all updates related to a particular address, without identifying precisely which updates those are. The rate of updates on this filtered stream should be bounded below, to ensure that there is a large enough anonymity set, and bounded above, so that users processing the stream have a constant and manageable amount of work to process it and catch up with the current chain state. This means that the detection precision must be adaptive to the global message rates: if the false positive rate is too low, the filtered stream will have too few messages, and if it is too high, it will have too many messages.

However, this isn’t possible using the paper’s original definition of fuzzy message detection, because the false positive probability is chosen by the receiver, who has no way to know in advance what probability will produce a correctly sized stream of messages. One way to address this problem is to rename the original FMD definition as Receiver FMD (R-FMD), and tweak it to obtain Sender FMD (S-FMD), in which the sender chooses the detection probability.

The paper’s original definition of fuzzy message detection is as a tuple of algorithms (KeyGen, CreateClue, Extract, Examine).1 The receiver uses KeyGen to generate a root key and a clue key. A sender uses the receiver’s clue key as input to CreateClue to produce a clue. The Extract algorithm takes the root key and a false positive rate (chosen from some set of supported rates), and produces a detection key. The Examine algorithm uses a detection key to examine a clue and produce a detection result.

This scheme should satisfy certain properties, formalizations of which can be found in the paper:

###### Correctness.

Valid matches must always be detected by Examine; i.e., there are no false negatives.

###### Fuzziness.

Invalid matches should produce false positives with probability approximately , as long as the clues and detection keys were honestly generated.

###### Detection Ambiguity.

An adversarial detector must be unable to distinguish between a true positive and a false positive, as long as the clues and detection keys were honestly generated.

In this original definition, the receiver has control over how much detection precision they delegate to a third party, because they choose the false positive probability when they extract a detection key from their root key. This fits with the idea of attenuating credentials, and intuitively, it seems correct that the receiver should control how much information they reveal to their detector. But the information revealed to their detector is determined by both the false positive probability and the amount of other messages that can function as cover traffic. Without knowing the extent of other activity on the system, the receiver has no way to make a principled choice of the detection precision to delegate.

## Sender FMD

To address this problem, we generalize the original definition (now Receiver FMD) to Sender FMD, in which the false positive probability is chosen by the sender.

S-FMD consists of a tuple of algorithms (KeyGen, CreateClue, Examine). Like R-FMD, CreateClue creates a clue and Examine takes a detection key and a clue and produces a detection result. As discussed in the next section, S-FMD can be realized using tweaks to either of the R-FMD constructions in the original paper.

Unlike R-FMD, the false positive rate is set by the sender, so CreateClue takes both the false positive rate and the receiver’s clue key. Because the false positive rate is set by the sender, there is no separation of capability between the root key and a detection key, so KeyGen outputs a clue key and a detection key, and Extract disappears.

In R-FMD, flag ciphertexts are universal with respect to the false positive rate, which is applied to the detection key; in S-FMD, the false positive rate is applied to the flag ciphertext and the detection key is universal.

Unlike R-FMD, S-FMD allows detection precision to be adaptive, by having senders use a (consensus-determined) false positive parameter. This parameter should vary as the global message rates vary, so that filtered message streams have a bounded rate, and it should be the same for all users, so that messages cannot be distinguished by their false positive rate.

1

We change terminology from the FMD paper; the paper calls detection and clue keys the secret and public keys respectively, but we avoid this in favor of capability-based terminology that names keys according to the precise capability they allow. The “clue” terminology is adopted from the Oblivious Message Retrieval paper of Zeyu Liu and Eran Tromer; we CreateClue and Examine clues rather than Flag and Test flag ciphertexts.

# Constructing S-FMD

The original FMD paper provides three constructions of R-FMD. The first two realize functionality for restricted false-positive probabilities of the form ; the third supports arbitrary fractional probabilities using a much more complex and expensive construction.

The first two schemes, R-FMD1 and R-FMD2, are constructed similarly to a Bloom filter: the CreateClue procedure encrypts a number of 1 bits, and the Examine procedure uses information in a detection key to check whether some subset of them are 1, returning true if so and false otherwise. The false positive probability is controlled by extracting only a subset of the information in the root key into the detection key, so that it can only check a subset of the bits encoded in the clue. We focus on R-FMD2, which provides more compact ciphertexts and chosen-ciphertext security.
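The false-positive behavior of this Bloom-filter-style design can be illustrated with a toy, entirely non-cryptographic simulation: when examining an unrelated clue, each decrypted bit is uniformly random, so checking k bits yields a false positive with probability 2^-k. The parameters below are illustrative, not part of the construction.

```python
import random

# Toy model: examining an unrelated clue yields k uniformly random
# "plaintext bits", and Examine reports a match only if all of them are 1.
def examine_unrelated(k, rng):
    return all(rng.random() < 0.5 for _ in range(k))

rng = random.Random(1)   # seeded for reproducibility
k = 3
trials = 20000
fp_rate = sum(examine_unrelated(k, rng) for _ in range(trials)) / trials
# fp_rate should be close to 2**-k = 0.125
```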

In this section, we:

• recall the construction of R-FMD2, changing notation from the paper;
• extend S-FMD2 to use deterministic derivation, allowing 32-byte flag and detection keys to support arbitrary-precision false positive probabilities;
• extend S-FMD2 to support diversified detection, allowing multiple, unlinkable flag keys to be detected by a single detection key (though, as this extension conflicts with the preceding one, we do not use it);
• summarize the resulting construction and describe how it can be integrated with a Sapling- or Orchard-style key hierarchy.

## The R-FMD2 construction

First, we recall the paper’s construction of R-FMD2, changing from multiplicative to additive notation and adjusting variable names to be consistent with future extensions.

The construction supports restricted false positive values for , a global parameter determining the minimum false positive rate. is a group of prime order and and are hash functions.

###### R-FMD2/KeyGen

Choose a generator . For , choose and compute . Return the root key and the clue key .

###### R-FMD2/Extract

On input and root key , parse and return as the detection key.

###### R-FMD2/Flag

On input , first parse , then proceed as follows:

1. Choose and compute .
2. Choose and compute .
3. For each , compute
1. a key bit ;
2. a ciphertext bit .
4. Compute .
5. Compute .

Return the clue .

###### R-FMD2/Examine

On input , , first parse , , then proceed as follows:

1. Compute .
2. Recompute as .
3. For each , compute
1. a key bit ;
2. a plaintext bit .

If all plaintext bits , return (match); otherwise, return .

To see that the Examine procedure detects true positives, first note that since and ,

so the detector will recompute the used by the sender; then, since , the detector’s is the same as the sender’s .

On the other hand, if the clue was created with a different clue key, the resulting will be random bits, giving the desired false positive probability.

One way to understand the components of the construction is as follows: the message consists of bits of a Bloom filter, each encrypted using hashed ElGamal with a common nonce and nonce commitment . The points are included in the ElGamal hash, ensuring that any change to either will result in a random bit on decryption. The pair act as a public key and signature for a one-time signature on the ciphertext as follows: the sender constructs the point as the output of a chameleon hash with basis

on input , then uses the hash trapdoor (i.e., their knowledge of the discrete log relation ) to compute a collision involving , a hash of the rest of the ciphertext. The collision acts as a one-time signature with public key .

The fact that the generator was selected at random in each key generation is not used by the construction and doesn’t seem to play a role in the security analysis; in fact, the security proof in Appendix F has the security game operate with a common generator. In what follows, we instead assume that is a global parameter.

## From R-FMD2 to S-FMD2

Changing this construction to the S-FMD model is fairly straightforward: rather than having the sender encrypt bits of the Bloom filter and only allow the detector to decrypt of them, we have the sender only encrypt bits of the Bloom filter and allow the detector to decrypt all potential bits. As noted in the previous section, this means that in the S-FMD context, there is no separation of capability between the root key and the detection key. For this reason, we skip the Extract algorithm and (conceptually) merge the root key into the detection key.

The S-FMD2 construction then works as follows:

###### S-FMD2/KeyGen

For , choose and compute . Return the detection key and the clue key .

###### S-FMD2/CreateClue

On input clue key , first parse , then proceed as follows:

1. Choose and compute .
2. Choose and compute .
3. For each , compute
1. a key bit ;
2. a ciphertext bit .
4. Compute .
5. Compute .

Return the clue . (We include explicitly rather than having it specified implicitly by , to reduce the risk of implementation confusion.)

###### S-FMD2/Examine

On input detection key and clue , first parse and , then proceed as follows:

1. Compute .
2. Recompute as .
3. For each , compute
1. a key bit ;
2. a plaintext bit .

If all plaintext bits , return (match); otherwise, return .

The value of is included in the ciphertext hash to ensure that the encoding of the ciphertext bits is non-malleable. Otherwise, the construction is identical.

## Compact clue and detection keys

One obstacle to FMD integration is the size of the clue keys. Clue keys are required to create clues, so senders who wish to create clues need to obtain the receiver’s clue key. Ideally, the clue keys would be bundled with other address data, so that clues can be included with all messages, rather than being an opt-in mechanism with a much more limited anonymity set.

However, the size of clue keys makes this difficult. A clue key supporting false positive probabilities down to requires group elements; for instance, supporting probabilities down to with a 256-bit group requires 448-byte clue keys. It would be much more convenient to have smaller keys.

One way to do this is to use a deterministic key derivation mechanism similar to BIP32 to derive a sequence of child keypairs from a single parent keypair, and use that sequence as the components of a clue / detection keypair. To do this, we define a hash function . Then given a parent keypair , child keys can be derived as

with . Unlike BIP32, there is only one sequence of keypairs, so there is no need for a chain code to namespace different levels of derivation, although of course the hash function should be domain-separated.
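A sketch of this style of public derivation, assuming additive child keys of the form x_i = x + H(X, i) (an assumption consistent with the recovery property discussed below, but the exact derivation formula is not restated here). The "group" is the integers mod a prime, so scalar multiplication is ordinary modular multiplication.

```python
import hashlib

# Toy sketch of BIP32-style public derivation:
#   x_i = x + H(X, i),    X_i = X + H(X, i) * B,   so that X_i = x_i * B.
# A real instantiation would use the decaf377 group and a domain-separated hash.
q = (1 << 31) - 1   # toy prime group order
B = 7               # toy generator

def H(X, i):
    data = X.to_bytes(8, "little") + i.to_bytes(4, "little")
    return int.from_bytes(hashlib.sha256(data).digest(), "little") % q

def child_secret(x, X, i):
    return (x + H(X, i)) % q

def child_public(X, i):
    return (X + H(X, i) * B) % q

x = 123456
X = x * B % q
# the detector derives x_i privately; senders derive X_i from X alone
assert child_secret(x, X, 5) * B % q == child_public(X, 5)
```

Note that anyone holding X and a single child secret x_i can compute x = x_i - H(X, i), which is exactly why this derivation cannot separate capabilities between component keys.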

Applying this to S-FMD2, the scalar becomes the detection key, and the point becomes the clue key. The former detection key is renamed the expanded detection key , and the former clue key is renamed the expanded clue key ; these are derived from the detection and clue keys respectively.

In addition, it is no longer necessary to select the minimum false positive probability as a global parameter prior to key generation, as the compact keys can be extended to arbitrary length (and thus arbitrary false positive probabilities).

Deterministic derivation is only possible in the S-FMD setting, not the R-FMD setting, because S-FMD does not require any separation of capability between the component keys used for each bit of the Bloom filter. Public deterministic derivation does not allow separation of capability: given and any child secret , it’s possible to recover the parent secret and therefore the entire subtree of secret keys. Thus, it cannot be used for R-FMD, where the detection key is created by disclosing only some of the bits of the root key.

## Diversified detection: from S-FMD2 to S-FMD2-d

The Sapling design provides viewing keys that support diversified addresses: a collection of publicly unlinkable addresses whose activity can be scanned by a common viewing key. For integration with Sapling, as well as other applications, it would be useful to support diversified detection: a collection of publicly unlinkable diversified clue keys that share a common detection key. This collection is indexed by data called a diversifier; each choice of diversifier gives a different diversified clue key corresponding to the same detection key.

In the Sapling design, diversified addresses are implemented by selecting the basepoint of the key agreement protocol as the output of a group-valued hash function whose input is the diversifier. Because the key agreement mechanism only makes use of the secret key (the incoming viewing key) and the counterparty’s ephemeral public key, decryption is independent of the basepoint, and hence the choice of diversifier, allowing a common viewing key to decrypt messages sent to any member of its family of diversified transmission (public) keys.

A similar approach can be applied to S-FMD2, but some modifications to the tag mechanism are required to avoid use of the basepoint in the Test procedure. Unfortunately, this extension is not compatible with compact clue and detection keys, so although it is presented here for posterity, it is not used in Penumbra.

Let be the set of diversifiers, and let be a group-valued hash function. At a high level, our goal is to replace use of a common basepoint with a diversified basepoint for some .

Our goal is to have diversified clue keys, so we first replace by in KeyGen, so that and the components of the clue key are parameterized by the diversifier . The sender will compute the key bits as

and the receiver computes them as

If instead of , then and the middle component will match.

But then the sender needs to construct as in order to have a known discrete log relation between and . This is a problem, because is recomputed by the receiver, but if the receiver must recompute it as , detection is bound to the choice of diversifier, and we need detection to be independent of the choice of diversifier.

The root of this problem is that the chameleon hash uses basis , so the receiver’s computation involves the basepoint . To avoid it, we can use a basis , where . is used in place of , but is computed as , i.e., as the chameleon hash . The sender’s collision then becomes , allowing the detector to recompute as

The point must be included in the clue, increasing its size, but detection no longer involves use of a basepoint (diversified or otherwise).

Concretely, this mechanism works as follows, splitting out generation of clue keys (diversifier-dependent) from generation of detection keys:

###### S-FMD2-d/KeyGen

For , choose , and return the detection key .

###### S-FMD2-d/Diversify

On input detection key and diversifier , first parse . Then compute the diversified base , and use it to compute . Return the diversified clue key .

###### S-FMD2-d/CreateClue

On input diversified clue key and diversifier , first parse , then proceed as follows:

1. Compute the diversified base .
2. Choose and compute .
3. Choose and compute .
4. Choose and compute .
5. For each , compute
1. a key bit ;
2. a ciphertext bit .
6. Compute .
7. Compute .

Return the clue .

###### S-FMD2-d/Examine

On input detection key and clue , first parse and , then proceed as follows:

1. Compute .
2. Recompute as .
3. For each , compute
1. a key bit ;
2. a plaintext bit .

If all plaintext bits , return (match); otherwise, return .

Unfortunately, this extension does not seem to be possible to combine with the compact clue and detection keys extension described above. The detection key components must be derived from the parent secret and (the hash of) public data, but diversified detection requires that different public data (clue keys) have the same detection key components.

Of the two extensions, compact clue keys are much more important, because they allow clue keys to be included in an address, and this allows message detection to be a default behavior of the system, rather than an opt-in behavior of a select few users.

# Parameter Considerations

• How should the false positive rate be determined? In some epoch, let be the false positive rate, be the total number of messages, be the number of true positives for some detection key, and be the number of detections for that detection key. Then , and ideally should be chosen so that:

1. is bounded above;
2. When is within the range of “normal use”, is close enough to that it’s difficult for a detector to distinguish (what does this mean exactly?);
• The notion of detection ambiguity only requires that true and false positives be ambiguous in isolation. In practice, however, a detector has additional context: the total number of messages, the number of detected messages, and the false positive probability. What’s the right notion in this context?

• What happens when an adversary manipulates (diluting the global message stream) or (by sending extra messages to a target address)? There is some analogy here to flashlight attacks, although with the critical difference that flashlight attacks on decoy systems degrade privacy of the transactions themselves, whereas here the scope is limited to transaction detection.

# Homomorphic Threshold Decryption

A sketch of one construction is as follows.

Write a value in radix with signed coefficients, i.e., as with . Encode each coefficient as and use ElGamal encryption to form the ciphertext

Each ElGamal ciphertext consists of two group elements; if group elements can be encoded in 32 bytes, this gives a 384-byte ciphertext. To decrypt , use ElGamal decryption to obtain the group elements , and then use a lookup table to recover from , or fail if the value is unknown.

This can in principle be done inside of a zkSNARK circuit if the underlying group is an embedded elliptic curve, together with certification that the ciphertext was correctly formed with in-range coefficients.

Addition and subtraction of ciphertexts are done componentwise, using the homomorphic property of ElGamal encryptions, and the fact that .

Adding ciphertexts of values whose coefficients were bounded as results in a sum whose coefficients are bounded as . Choosing and accounting for sign means that a lookup table of size is sufficient to decrypt sums of up to 128 well-formed ciphertexts. Sums of more than 128 ciphertexts can be handled by summing blocks of 128, decrypting the block sum, and summing the plaintexts.
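A sketch of the homomorphic ElGamal step for a single coefficient, using a toy additive group (integers mod a prime) in place of the embedded elliptic curve; the names and parameters here are illustrative, and the real scheme also needs threshold decryption and proofs of well-formedness.

```python
import random

q = 100_003   # toy prime group order; the real scheme uses an embedded curve
B = 3         # toy generator of the additive group Z_q

def keygen(rng):
    x = rng.randrange(1, q)
    return x, x * B % q                        # (decryption key, public key)

def enc(m, P, rng):
    r = rng.randrange(1, q)
    return (r * B % q, (m * B + r * P) % q)    # ElGamal: (r*B, m*B + r*P)

def add(c1, c2):
    # homomorphic addition: componentwise group addition of ciphertexts
    return ((c1[0] + c2[0]) % q, (c1[1] + c2[1]) % q)

def dec(c, x, table):
    mB = (c[1] - x * c[0]) % q                 # recover the group element m*B
    return table[mB]                           # small-range lookup recovers m

rng = random.Random(0)
x, P = keygen(rng)
table = {m * B % q: m for m in range(256)}     # lookup table for small sums
c = add(enc(5, P, rng), enc(7, P, rng))
# dec(c, x, table) == 12
```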

Unfortunately, this method reveals slightly more information about a sum of encrypted summands than would be ideal. Ideally, it would reveal only the sum of the encrypted summands, but in fact it reveals the sum of each radix- chunk, without carrying between them. Carrying collapses information about the summands, but that information is revealed by this scheme. This seems unlikely to be a problem in practice, but it is worth quantifying.

## TODO

• the bounds above are a ballpark estimation; refine them and make them precise
• work out integration with ABCI++ protocol phases

# Randomizable Signatures

Penumbra’s signatures are provided by decaf377-rdsa, a variant of the Zcash RedDSA construction instantiated using the decaf377 group. These are Schnorr signatures, with two additional properties relevant to Penumbra:

1. They support randomization of signing and verification keys. Spending a note requires use of the signing key that controls its spend authorization, but if the same spend verification key were included in multiple transactions, they would be linkable. Instead, both the signing and verification keys are kept secret1, and each spend description includes a randomization of the verification key, together with a proof that the randomized verification key was derived from the correct spend verification key.

2. They support addition and subtraction of signing and verification keys. This property is used for binding signatures, which bind zero-knowledge proofs to the transaction they were intended for and enforce conservation of value.

## decaf377-rdsa

Let $\mathbb{G}$ be the decaf377 group of prime order $r$. Keys and signatures are parameterized by a domain $D$. Each domain $D$ has an associated generator $B_D$. Currently, there are two defined domains: $\mathsf{SpendAuth}$ and $\mathsf{Binding}$. The hash function $H_D$ is instantiated by using blake2b with the personalization string decaf377-rdsa---, treating the 64-byte output as the little-endian encoding of an integer, and reducing that integer modulo $r$.

A signing key is a scalar $a \in \mathbb{F}_r$. The corresponding verification key is the group element $A = [a]B_D$.

###### Sign

On input message m with signing key $a$, verification key $A$, and domain $D$:

1. Generate 80 random bytes.
2. Compute the nonce as $r = H_D(\mathtt{rand\_bytes} \,\|\, \mathtt{A\_bytes} \,\|\, m)$.
3. Commit to the nonce as $R = [r]B_D$.
4. Compute the challenge as $c = H_D(\mathtt{R\_bytes} \,\|\, \mathtt{A\_bytes} \,\|\, m)$.
5. Compute the response as $s = r + ca$.
6. Output the signature R_bytes || s_bytes.
###### Verify

On input message m, verification key A_bytes, and signature sig_bytes:

1. Parse $A$ from A_bytes, or fail if A_bytes is not a valid (hence canonical) decaf377 encoding.
2. Parse sig_bytes as R_bytes || s_bytes and the components as $R \in \mathbb{G}$ and $s \in \mathbb{F}_r$, or fail if they are not valid (hence canonical) encodings.
3. Recompute the challenge as $c = H_D(\mathtt{R\_bytes} \,\|\, \mathtt{A\_bytes} \,\|\, m)$.
4. Check the verification equation $[s]B_D = R + [c]A$, rejecting the signature if it is not satisfied.
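
The Sign/Verify steps above can be sketched as follows. This is a toy illustration over a multiplicative group mod $p$ instead of decaf377, with SHA-256 standing in for the personalized hash $H_D$; all parameters are made up for the example, and this is not a real decaf377-rdsa implementation.

```python
import hashlib
import secrets

p = 2**127 - 1   # toy prime (made-up parameter)
g = 3            # toy generator; the group order is p - 1
order = p - 1

def H(*parts: bytes) -> int:
    """Stand-in for H_D: hash, interpret little-endian, reduce mod group order."""
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "little") % order

def enc(x: int) -> bytes:
    return x.to_bytes(16, "little")

def sign(a: int, A: int, m: bytes):
    T = secrets.token_bytes(80)          # 1. generate 80 random bytes
    r = H(T, enc(A), m)                  # 2. nonce
    R = pow(g, r, p)                     # 3. nonce commitment
    c = H(enc(R), enc(A), m)             # 4. challenge
    s = (r + c * a) % order              # 5. response
    return (R, s)

def verify(A: int, m: bytes, sig) -> bool:
    R, s = sig
    c = H(enc(R), enc(A), m)             # recompute the challenge
    return pow(g, s, p) == R * pow(A, c, p) % p   # [s]B_D == R + [c]A

a = secrets.randbelow(order - 1) + 1     # signing key
A = pow(g, a, p)                         # verification key
```

The verification equation holds because $g^s = g^{r + ca} = R \cdot A^c$ in the toy group, mirroring $[s]B_D = R + [c]A$.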

## SpendAuth signatures

The first signature domain used in Penumbra is $\mathsf{SpendAuth}$, for spend authorization signatures. The basepoint $B_{\mathsf{SpendAuth}}$ is the conventional decaf377 basepoint 0x0800000....

Spend authorization signatures support randomization:

###### Randomize.SigningKey

Given a randomizer $r \in \mathbb{F}_r$, the randomized signing key is $a' = a + r$.

###### Randomize.VerificationKey

Given a randomizer $r \in \mathbb{F}_r$, the randomized verification key is $A' = A + [r]B_{\mathsf{SpendAuth}}$.

Randomizing a signing key and then deriving the verification key associated to the randomized signing key gives the same result as randomizing the original verification key (with the same randomizer).
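
The commuting property can be checked concretely. Again this uses a toy multiplicative group mod $p$ standing in for decaf377, with made-up parameters, purely for illustration.

```python
import secrets

p = 2**127 - 1   # toy prime (made-up parameter)
g = 3            # toy generator; group order is p - 1
order = p - 1

a = secrets.randbelow(order - 1) + 1   # signing key
A = pow(g, a, p)                       # verification key A = [a]B
r = secrets.randbelow(order - 1) + 1   # randomizer

a_rand = (a + r) % order               # randomize the signing key
A_from_sk = pow(g, a_rand, p)          # derive vk from the randomized sk
A_rand = A * pow(g, r, p) % p          # randomize the vk directly: A + [r]B

assert A_from_sk == A_rand
```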

## Binding signatures

The second signature domain used in Penumbra is $\mathsf{Binding}$, for binding signatures. The basepoint $B_{\mathsf{Binding}}$ is the result of converting blake2b(b"decaf377-rdsa-binding") to an $\mathbb{F}_q$ element and applying decaf377’s CDH map-to-group method.

Since the verification key corresponding to the signing key $a$ is $A = [a]B_D$, and $[a_1 + a_2]B_D = [a_1]B_D + [a_2]B_D$, adding and subtracting signing and verification keys commutes with derivation of the verification key, as desired.

WARNING: the decaf377 CDH map is unstable, so it may change in the future and change binding signatures with it.

1

This situation is a good example of why it’s better to avoid the terms “public key” and “private key”, and prefer more precise terminology that names keys according to the cryptographic capability they represent, rather than an attribute of how they’re commonly used. In this example, the verification key should not be public, since it could link different transactions.

# Cryptographic Protocol

This chapter describes the cryptographic design of the Penumbra protocol, covered in more detail in the following sections:

• The Addresses and Keys section describes the Penumbra key hierarchy and diversified addresses.

• The Notes section describes Penumbra’s private notes and their contents.

## Notation

We use the following notation in this chapter:

• $\mathbb{G}$ denotes the decaf377 group;
• $\mathbb{F}_q$ denotes the BLS12-377 scalar field of order $q$;
• $\mathbb{F}_r$ denotes the decaf377 scalar field of order $r$.

The key hierarchy is a modification of the design in Zcash Sapling. The main differences are that it is designed for use with BLS12-377 rather than BLS12-381, that it uses Poseidon as a hash and PRF and decaf377 as the embedded group, and that it includes support for fuzzy message detection.

All key material within a particular spend authority is ultimately derived from a single root secret. The internal key components and their derivations are described in the following sections:

• Spending Keys describes derivation of the spending key from the root key material;
• Viewing Keys describes derivation of the full, incoming, and outgoing viewing keys;
• Addresses and Detection Keys describes derivation of multiple, publicly unlinkable addresses for a single spending authority, each with their own detection key.

The diagram in the Overview section describes the key hierarchy from an external, functional point of view. Here, we zoom in to the internal key components, whose relations are depicted in the following diagram. Each internal key component is represented with a box; arrows depict key derivation steps, and diamond boxes represent key derivation steps that combine multiple components.

flowchart BT
subgraph Addr[Address]
direction TB;
d2[d];
pk_d;
ck_d;
end;
subgraph DTK[Detection Key]
dtk_d;
end;
subgraph IVK[Incoming\nViewing Key]
ivk;
dk;
end;
subgraph OVK[Outgoing\nViewing Key]
ovk;
end;
subgraph FVK[Full Viewing Key]
ak;
nk;
end;
subgraph SK[Spending Key]
direction LR;
seed;
end;

seed --> ak & nk;

ak & nk --> fvk_blake{ };
ak & nk --> fvk_poseidon{ };
fvk_blake --> ovk & dk;
fvk_poseidon --> ivk;

index(Diversifier Index);
d1[d];

d1 --- d2;

index --> aes_ff1{ };
dk ---> aes_ff1{ };
aes_ff1 --> d1;

d1 & ivk --> scmul_ivk{ };
scmul_ivk --> pk_d;

d1 & ivk --> dtk_blake{ };
dtk_blake --> dtk_d;
dtk_d --> ck_d;


# Spending Keys

The root key material for a particular spend authority is a 32-byte random seed. The seed value is used to derive

• ask, the spend authorization key, and
• nk, the nullifier key,

as follows. Define prf_expand(label, key, input) as BLAKE2b-512 with personalization label, key key, and input input. Define from_le_bytes(bytes) as the function that interprets its input bytes as an integer in little-endian order. Then

ask = from_le_bytes(prf_expand("Penumbra_ExpndSd", seed, 0)) mod r
nk  = from_le_bytes(prf_expand("Penumbra_ExpndSd", seed, 1)) mod q


The spending key consists of seed and ask. When using a hardware wallet or similar custody mechanism, the spending key remains on the device.

The spend authorization key ask is used as a decaf377-rdsa signing key.1 The corresponding verification key is the spend verification key $ak = [ask]B_{\mathsf{SpendAuth}}$. The spend verification key ak and the nullifier key nk are used to create the full viewing key described in the next section.
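
The derivation above can be sketched directly with Python’s hashlib, which supports BLAKE2b keying and personalization. The seed here is an all-zero stand-in, the domain inputs 0 and 1 are assumed to be encoded as single bytes, and the final reductions mod $r$ and mod $q$ are elided, since the field moduli are protocol parameters not reproduced here.

```python
import hashlib

def prf_expand(label: bytes, key: bytes, input_bytes: bytes) -> bytes:
    """BLAKE2b-512 with personalization `label` and key `key` on `input_bytes`."""
    return hashlib.blake2b(input_bytes, digest_size=64, key=key, person=label).digest()

seed = bytes(32)  # example seed (all zeros); real seeds are 32 random bytes

ask_bytes = prf_expand(b"Penumbra_ExpndSd", seed, b"\x00")
nk_bytes  = prf_expand(b"Penumbra_ExpndSd", seed, b"\x01")

ask_unreduced = int.from_bytes(ask_bytes, "little")  # then reduce mod r
nk_unreduced  = int.from_bytes(nk_bytes, "little")   # then reduce mod q
```

Note that BLAKE2b personalization strings are limited to 16 bytes, which "Penumbra_ExpndSd" exactly fills.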

1

Note that it is technically possible for the derived ask or nk to be $0$, but this happens with cryptographically negligible probability, so we ignore this case, as, borrowing phrasing from Adam Langley, it happens significantly less often than malfunctions in the CPU instructions we’d use to check it.

# Viewing Keys

Each spending key has a corresponding collection of viewing keys, which represent different subsets of capability around creating and viewing transactions:

• the full viewing key represents all capability except for spend authorization, including viewing all incoming and outgoing transactions and creating proofs;
• the incoming viewing key represents the capability to scan the blockchain for incoming transactions;
• the outgoing viewing key represents the capability to recover information about sent transactions.

## Full Viewing Keys

The full viewing key consists of two components:

• ak, the spend verification key, a decaf377-rdsa verification key;
• nk, the nullifier key.

Their derivation is described in the previous section.

The spend verification key is used (indirectly) to verify spend authorizations. Using it directly would allow spends signed by the same key to be linkable to each other, so instead, each spend is signed by a randomization of the spend authorization key, and the proof statement includes a proof that the verification key provided in the spend description is a randomization of the (secret) spend verification key.

The nullifier key is used to derive the nullifier of a note sent to a particular user. Nullifiers are used for double-spend protection and so must be bound to the note contents, but it’s important that nullifiers cannot be derived solely from the contents of a note, so that the sender of a note cannot learn when the recipient spent it.

The full viewing key can be used to create transaction proofs, as well as to view all activity related to its spending authority.

## Incoming and Outgoing Viewing Keys

The incoming viewing key consists of two components:

• dk, the diversifier key, and
• ivk, the incoming viewing key (component)1.

The diversifier key is used to derive diversifiers, which allow a single spend authority to have multiple, publicly unlinkable diversified addresses, as described in the next section. The incoming viewing key is used to scan the chain for transactions sent to every diversified address simultaneously.

The outgoing viewing key consists of one component:

• ovk, the outgoing viewing key (component).

These components are derived as follows.

The dk and ovk are not intended to be derived in a circuit. Define prf_expand(label, key, input) as BLAKE2b-512 with personalization label, key key, and input input. Define to_le_bytes(input) as the function that encodes an input integer in little-endian byte order. Define decaf377_encode(element) as the function that produces the canonical encoding of a decaf377 element. Then

dk  = prf_expand(b"Penumbra_ExpndVK", to_le_bytes(nk), decaf377_encode(ak))[ 0..32]
ovk = prf_expand(b"Penumbra_ExpndVK", to_le_bytes(nk), decaf377_encode(ak))[32..64]
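
A runnable sketch of this derivation, with stand-in values for nk and the ak encoding (the real values come from the spending key derivation):

```python
import hashlib

def prf_expand(label: bytes, key: bytes, input_bytes: bytes) -> bytes:
    """BLAKE2b-512 with personalization `label` and key `key` on `input_bytes`."""
    return hashlib.blake2b(input_bytes, digest_size=64, key=key, person=label).digest()

nk_le  = (12345).to_bytes(32, "little")  # to_le_bytes(nk), stand-in value
ak_enc = bytes(32)                       # decaf377_encode(ak), stand-in value

expanded = prf_expand(b"Penumbra_ExpndVK", nk_le, ak_enc)
dk, ovk = expanded[0:32], expanded[32:64]
```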


The ivk is intended to be derived in a circuit. Define poseidon_hash_2(label, x1, x2) to be rate-2 Poseidon hashing of inputs x1, x2 with the capacity initialized to the domain separator label. Define from_le_bytes(bytes) as the function that interprets its input bytes as an integer in little-endian order. Define decaf377_s(element) as the function that produces the $s$-value used to encode the provided decaf377 element. Then

ivk = poseidon_hash_2(from_le_bytes(b"penumbra.derive.ivk"), nk, decaf377_s(ak)) mod r


i.e., we treat the Poseidon output (an $\mathbb{F}_q$ element) as an integer, and reduce it modulo $r$ to get an element of $\mathbb{F}_r$.

1

In general, we refer to the combination of dk and ivk as the “incoming viewing key” and the component just as ivk, using the modifier “component” in case of ambiguity.

# Addresses and Detection Keys

Rather than having a single address for each spending authority, Penumbra allows the creation of many different publicly unlinkable diversified addresses. An incoming viewing key can scan transactions to every diversified address simultaneously, so there is no per-address scanning cost. In addition, Penumbra attaches a detection key to each address, allowing a user to outsource probabilistic transaction detection to a relatively untrusted third-party scanning service.

## Diversifiers

Addresses are parameterized by diversifiers, 11-byte tags used to derive up to $2^{88}$ distinct addresses for each spending authority. The diversifier is included in the address, so it should appear uniformly random. To ensure this, diversifiers are indexed by a diversifier index $i$; the $i$-th diversifier $d_i$ is the encryption of $i$ using AES-FF1 with the diversifier key dk.1

Each diversifier $d$ is used to generate a diversified basepoint $B_d$ as

$B_d = H_{\mathbb{G}}(d),$

where $H_{\mathbb{G}}$ performs hash-to-group for decaf377 as follows: first, apply BLAKE2b-512 with personalization b"Penumbra_Divrsfy" to the input; then, interpret the 64-byte output as an integer in little-endian byte order and reduce it modulo $q$; finally, use the resulting $\mathbb{F}_q$ element as input to the decaf377 CDH map-to-group method.
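
The first stage of this hash-to-group is easy to sketch with hashlib; the example diversifier is an all-zero stand-in, and the final reduction mod $q$ and the CDH map-to-group step are elided.

```python
import hashlib

d = bytes(11)  # example 11-byte diversifier (all zeros, stand-in value)

digest = hashlib.blake2b(d, digest_size=64, person=b"Penumbra_Divrsfy").digest()
x = int.from_bytes(digest, "little")  # then reduce mod q and apply the CDH map
```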

## Detection Keys

Each address has an associated detection key, allowing the creator of the address to delegate a probabilistic detection capability to a third-party scanning service.

The detection key consists of one component,

• dtk_d, the detection key (component)2,

derived as follows. Define prf_expand(label, key, input) as BLAKE2b-512 with personalization label, key key, and input input. Define from_le_bytes(bytes) as the function that interprets its input bytes as an integer in little-endian order, and to_le_bytes as the function that encodes an integer to little-endian bytes. Then

dtk_d = from_le_bytes(prf_expand(b"PenumbraExpndFMD", to_le_bytes(ivk), d))


## Addresses

Each payment address has three components:

• the diversifier d;
• the transmission key pk_d, a decaf377 element;
• the clue key ck_d, a decaf377 element.

The diversifier d is derived from a diversifier index as described above. The diversifier with index $0$ is the default diversifier, and corresponds to the default payment address.

The transmission key is derived as $pk_d = [ivk]B_d$, where $B_d = H_{\mathbb{G}}(d)$ is the diversified basepoint.

The clue key is derived as $ck_d = [dtk_d]B$, where $B$ is the conventional decaf377 basepoint.

The raw binary encoding of a payment address is the 75-byte string d || pk_d || ck_d. This string is encoded with Bech32m with the following human-readable prefixes:

• penumbra for mainnet, and
• penumbra_tnXYZ_ for testnets, where XYZ is the current testnet number padded to three decimal places.
1

This convention is not enforced by the protocol; client software could in principle construct diversifiers in another way, although deviating from this mechanism risks compromising privacy.

2

As in the previous section, we use the modifier “component” to distinguish between the internal key component and the external, opaque key.

# Value Commitments

Penumbra’s shielded pool can record arbitrary assets. These assets either originate on Penumbra itself, or, more commonly, originate on other IBC-connected chains. To record arbitrary assets and enforce value balance between them, we draw on ideas originally proposed for Zcash and adapt them to the Cosmos context.

## Asset types and asset IDs

To be precise, we define:

• an amount to be an untyped quantity of some asset;
• an asset type to be an ADR001-style denomination trace uniquely identifying a cross-chain asset and its provenance, such as:
• denom (native chain A asset)
• transfer/channelToA/denom (chain B representation of chain A asset)
• transfer/channelToB/transfer/channelToA/denom (chain C representation of chain B representation of chain A asset)
• an asset ID to be a fixed-size hash of an asset type;
• a value to be a typed quantity, i.e., an amount and an asset id.

Penumbra deviates slightly from ADR001 in the definition of the asset ID. While ADR001 defines the IBC asset ID as the SHA-256 hash of the denomination trace, Penumbra hashes to a field element, so that asset IDs can be more easily used inside of a zk-SNARK circuit. Specifically, define from_le_bytes(bytes) as the function that interprets its input bytes as an integer in little-endian order, and hash(label, input) as BLAKE2b-512 with personalization label on input input. Then asset IDs are computed as

asset_id = from_le_bytes(hash(b"Penumbra_AssetID", asset_type)) mod q
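
A runnable sketch of this hash, using the ADR001-style denomination traces from the examples above; the final reduction mod $q$ is elided, since $q$ (the BLS12-377 scalar field order) is a protocol parameter not reproduced here.

```python
import hashlib

def asset_id_unreduced(asset_type: str) -> int:
    """BLAKE2b-512 with personalization b"Penumbra_AssetID", little-endian."""
    digest = hashlib.blake2b(
        asset_type.encode(), digest_size=64, person=b"Penumbra_AssetID"
    ).digest()
    return int.from_bytes(digest, "little")  # then reduce mod q

native  = asset_id_unreduced("denom")
wrapped = asset_id_unreduced("transfer/channelToA/denom")
```

Distinct denomination traces hash to distinct asset IDs, which is what distinguishes a native asset from its cross-chain representations.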


Asset IDs are used in internal data structures, but users should be presented with asset names. To avoid having to reverse the hash function, the chain maintains a lookup table of known asset IDs and the corresponding asset types. This table can be exhaustive, since new assets either moved into the chain via IBC transfers from a transparent zone, or were created at genesis.

## Value Generators

Each asset ID has an associated value generator $V$. The value generator is computed as $V = H_{\mathbb{G}}^{\mathsf{v}}(\mathtt{asset\_id})$, where $H_{\mathbb{G}}^{\mathsf{v}}$ is a hash-to-group function constructed by first applying rate-1 Poseidon hashing with domain separator from_le_bytes(b"penumbra.value.generator") and then the decaf377 CDH map-to-group method.

## Homomorphic Commitments

We use the value generator associated to an asset ID to construct homomorphic commitments to (typed) value. To do this, we first define the blinding generator $\tilde{V}$ as

V_tilde = decaf377_map_to_group_cdh(from_le_bytes(blake2b(b"decaf377-rdsa-binding")))


The commitment to value $(v, \mathtt{ID})$, i.e., amount $v$ of the asset with asset ID $\mathtt{ID}$, with blinding factor $\tilde{v}$, is the Pedersen commitment

$\operatorname{cv} = [v]V_{\mathtt{ID}} + [\tilde{v}]\tilde{V},$

where $V_{\mathtt{ID}}$ is the value generator associated to $\mathtt{ID}$.

These commitments are homomorphic, even for different asset types, say values $(v_1, \mathtt{ID}_1)$ and $(v_2, \mathtt{ID}_2)$:

$([v_1]V_1 + [\tilde{v}_1]\tilde{V}) + ([v_2]V_2 + [\tilde{v}_2]\tilde{V}) = [v_1]V_1 + [v_2]V_2 + [\tilde{v}_1 + \tilde{v}_2]\tilde{V}.$

Alternatively, this can be thought of as a commitment to a (sparse) vector recording the amount of every possible asset type, almost all of whose coefficients are zero.
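
The homomorphic property can be checked concretely in a toy group. Here a multiplicative group mod $p$ stands in for decaf377 (made-up parameters, illustration only); since the group operation is written multiplicatively, “addition” of commitments is multiplication.

```python
import secrets

p = 2**127 - 1            # toy prime (made-up parameter)
V1, V2, Vt = 3, 5, 7      # stand-ins for two value generators and V-tilde

def commit(v: int, generator: int, blind: int) -> int:
    """Pedersen commitment [v]V + [blind]V~, written multiplicatively."""
    return pow(generator, v, p) * pow(Vt, blind, p) % p

b1 = secrets.randbelow(p - 1)
b2 = secrets.randbelow(p - 1)
c1 = commit(100, V1, b1)  # commit to 100 units of asset 1
c2 = commit(200, V2, b2)  # commit to 200 units of asset 2

# The combined commitment opens to the vector (100 of asset 1, 200 of asset 2)
# under the summed blinding factor b1 + b2.
combined = c1 * c2 % p
expected = pow(V1, 100, p) * pow(V2, 200, p) * pow(Vt, (b1 + b2) % (p - 1), p) % p
assert combined == expected
```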

## Binding Signatures

Finally, we’d like to be able to prove that a certain value commitment is a commitment to $0$. One way to do this would be to prove knowledge of an opening of the commitment, i.e., producing $\tilde{v}$ such that $\operatorname{cv} = [0]V + [\tilde{v}]\tilde{V} = [\tilde{v}]\tilde{V}$. But this is exactly what it means to create a Schnorr signature for the verification key $\operatorname{cv}$, because a Schnorr signature is a proof of knowledge of the signing key in the context of the message.

Therefore, we can prove that a value commitment is a commitment to $0$ by treating it as a decaf377-rdsa verification key and using the corresponding signing key (the blinding factor) to sign a message. This also gives a way to bind value commitments to a particular context (e.g., a transaction), by using the context as the message to be signed, in order to, e.g., ensure that value commitments cannot be replayed across transactions.

# Notes

In Penumbra, value is associated with notes. Once a note is included in a block, it is a valid receipt of value that can later be redeemed. Users can spend their existing notes if they hold the associated spend key and the note has not already been spent. They can transmit value by creating a new note (or “output”) for another user. Privacy is achieved by encrypting the notes such that a validator or other observer only sees ciphertext, and does not learn the value or recipient of the note.

The note and its contents are described in further detail in Note Plaintexts.

# Note Plaintexts

Plaintext notes contain:

• the value to be transmitted, which consists of an integer amount along with a scalar (32 bytes) identifying the asset.
• the note blinding factor, a scalar value, which will later be used when computing note commitments.
• the diversifier d of the destination address, described in more detail in the Addresses section.
• the diversified transmission key pk_d of the destination address, also described in more detail in the Addresses section.

The note can only be spent by the holder of the spend key that corresponds to the diversified transmission key in the note.