# ACP-103: Dynamic Fees (/docs/acps/103-dynamic-fees) --- title: "ACP-103: Dynamic Fees" description: "Details for Avalanche Community Proposal 103: Dynamic Fees" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/103-dynamic-fees/README.md --- | ACP | 103 | | :--- | :--- | | **Title** | Add Dynamic Fees to the P-Chain | | **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)), Alberto Benegiamo ([@abi87](https://github.com/abi87)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)) | | **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/104)) | | **Track** | Standards | ## Abstract Introduce a dynamic fee mechanism to the P-Chain. Preview a future transition to a multidimensional fee mechanism. ## Motivation Blockchains are resource-constrained environments. Users are charged for the execution and inclusion of their transactions based on the blockchain's transaction fee mechanism. The mechanism should fluctuate based on the supply of and demand for said resources to serve as a deterrent against spam and denial-of-service attacks. With a fixed fee mechanism, users are provided with simplicity and predictability, but network congestion and resource constraints are not taken into account. There is no incentive for users to withhold transactions since the cost is fixed regardless of the demand. As a result, the fee does not track the market clearing price for transaction execution and inclusion. The C-Chain, in [Apricot Phase 3](https://medium.com/avalancheavax/apricot-phase-three-c-chain-dynamic-fees-432d32d67b60), employs a dynamic fee mechanism to raise the price during periods of high demand and lower it during periods of low demand. If the price gets too high, network utilization decreases, which in turn drops the price. This ensures the execution and inclusion fee of transactions closely matches the market clearing price. 
The P-Chain currently operates under a fixed fee mechanism. To more robustly handle spikes in load expected from introducing the improvements in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md), it should be migrated to a dynamic fee mechanism. The X-Chain also currently operates under a fixed fee mechanism. However, due to the current lower usage and lack of new feature introduction, the migration of the X-Chain to a dynamic fee mechanism is deferred to a later ACP to reduce unnecessary additional technical complexity. ## Specification ### Dimensions There are four dimensions that will be used to approximate the computational cost of, or "gas" consumed in, a transaction: 1. Bandwidth $B$ is the amount of network bandwidth used for transaction broadcast. This is set to the size of the transaction in bytes. 2. Reads $R$ is the number of state/database reads used in transaction execution. 3. Writes $W$ is the number of state/database writes used in transaction execution. 4. Compute $C$ is the total amount of compute used to verify and execute a transaction, measured in microseconds. The gas consumed $G$ in a transaction is: $$G = B + 1000R + 1000W + 4C$$ A future ACP could remove the merging of these dimensions to granularly meter usage of each resource in a multidimensional scheme. ### Mechanism This mechanism aims to maintain a target gas consumption $T$ per second and adjusts the fee based on the excess gas consumption $x$, which tracks cumulative gas consumption in excess of $T$ over time. Prior to the activation of this mechanism, $x$ is initialized: $$x = 0$$ At the start of building/executing block $b$, $x$ is updated: $$x = \max(x - T \cdot \Delta{t}, 0)$$ Where $\Delta{t}$ is the number of seconds between $b$'s block timestamp and $b$'s parent's block timestamp. 
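As a worked illustration, the gas calculation and the excess-gas update above can be sketched in a few lines of Python (a minimal model for this ACP's formulas; the function names are illustrative, not from AvalancheGo):

```python
def gas_consumed(bandwidth: int, reads: int, writes: int, compute_us: int) -> int:
    """G = B + 1000R + 1000W + 4C: merge the four dimensions into one gas value."""
    return bandwidth + 1000 * reads + 1000 * writes + 4 * compute_us

def decayed_excess(x: int, target: int, dt: int) -> int:
    """x = max(x - T * dt, 0), applied at the start of building/executing a block."""
    return max(x - target * dt, 0)

# A 500-byte transaction with 2 reads, 1 write, and 100us of compute
# consumes 500 + 2000 + 1000 + 400 = 3900 gas.
g = gas_consumed(500, 2, 1, 100)
```

With $T = 50{,}000$, for example, three idle seconds are enough to fully decay an excess of $150{,}000$ back to zero.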
The gas price for block $b$ is: $$M \cdot \exp\left(\frac{x}{K}\right)$$ Where: - $M$ is the minimum gas price - $\exp\left(x\right)$ is an approximation of $e^x$ following the EIP-4844 specification ```python # Approximates factor * e ** (numerator / denominator) using Taylor expansion def fake_exponential(factor: int, numerator: int, denominator: int) -> int: i = 1 output = 0 numerator_accum = factor * denominator while numerator_accum > 0: output += numerator_accum numerator_accum = (numerator_accum * numerator) // (denominator * i) i += 1 return output // denominator ``` - $K$ is a constant to control the rate of change of the gas price After processing block $b$, $x$ is updated with the total gas consumed in the block $G$: $$x = x + G$$ Whenever $x$ increases by $K$, the gas price increases by a factor of `~2.7`. If the gas price gets too expensive, average gas consumption drops, and $x$ starts decreasing, dropping the price. The gas price constantly adjusts to make sure that, on average, the blockchain consumes $T$ gas per second. A [token bucket](https://en.wikipedia.org/wiki/Token_bucket) is employed to meter the maximum rate of gas consumption. Define $C$ as the capacity of the bucket, $R$ as the amount of gas to add to the bucket per second, and $r$ as the amount of gas currently in the bucket. Prior to the activation of this mechanism, $r$ is initialized: $$r = 0$$ At the beginning of processing block $b$, $r$ is set: $$r = \min\left(r + R \cdot \Delta{t}, C\right)$$ Where $\Delta{t}$ is the number of seconds between $b$'s block timestamp and $b$'s parent's block timestamp. The maximum gas consumed in a given $\Delta{t}$ is $r + R \cdot \Delta{t}$. The upper bound across all $\Delta{t}$ is $C + R \cdot \Delta{t}$. After processing block $b$, the total gas consumed in $b$, or $G$, will be known. If $G \gt r$, $b$ is considered an invalid block. 
If $b$ is a valid block, $r$ is updated: $$r = r - G$$ A block gas limit does not need to be set as it is implicitly derived from $r$. The parameters at activation are: | Parameter | P-Chain Configuration | | - | - | | $T$ - target gas consumed per second | 50,000 | | $M$ - minimum gas price | 1 nAVAX | | $K$ - gas price update constant | 2,164,043 | | $C$ - maximum gas capacity | 1,000,000 | | $R$ - gas capacity added per second | 100,000 | $K$ was chosen such that at sustained maximum capacity ($R=100,000$ gas/second), the fee rate will double every ~30 seconds. As the network gains capacity to handle additional load, this algorithm can be tuned to increase the gas consumption rate. #### A note on $e^x$ There is a subtle reason why an exponential adjustment function was chosen: The adjustment function should be _equally_ reactive irrespective of the actual fee. Define $b_n$ as the current block's gas fee, $b_{n+1}$ as the next block's gas fee, and $x$ as the excess gas consumption. Let's use a linear adjustment function: $$b_{n+1} = b_n + 10x$$ Assume $b_n = 100$ and the current block is 1 unit above target utilization, or $x = 1$. Then, $b_{n+1} = 100 + 10 \cdot 1 = 110$, an increase of `10%`. If instead $b_n = 10,000$, $b_{n+1} = 10,000 + 10 \cdot 1 = 10,010$, an increase of `0.1%`. The fee is _less_ reactive as the fee increases. This is because the rate of change _does not scale_ with $x$. Now, let's use an exponential adjustment function: $$b_{n+1} = b_n \cdot e^x$$ Assume $b_n = 100$ and the current block is 1 unit above target utilization, or $x = 1$. Then, $b_{n+1} = 100 \cdot e^1 \approx 271.828$, an increase of `171%`. If instead $b_n = 10,000$, $b_{n+1} = 10,000 \cdot e^1 \approx 27,182.8$, an increase of `171%` again. The fee is _equally_ reactive as the fee increases. This is because the rate of change _scales_ with $x$. 
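Putting the mechanism together, one block's worth of updates can be sketched as follows, reusing `fake_exponential` from above together with the activation parameters. This is a simplified illustrative model, not the AvalancheGo implementation; an invalid block is modeled here as an exception:

```python
# Approximates factor * e ** (numerator / denominator), as in EIP-4844
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

T, M, K = 50_000, 1, 2_164_043   # target gas/s, min gas price (nAVAX), update constant
C, R = 1_000_000, 100_000        # bucket capacity, gas capacity added per second

def process_block(x: int, r: int, dt: int, gas_used: int):
    """Apply one block's ACP-103 updates; returns (new_x, new_r, gas_price)."""
    x = max(x - T * dt, 0)             # decay excess toward the target
    r = min(r + R * dt, C)             # refill the token bucket
    price = fake_exponential(M, x, K)  # gas price for this block
    if gas_used > r:
        raise ValueError("block consumes more gas than the bucket allows")
    return x + gas_used, r - gas_used, price
```

At `x = 0` the price is the minimum `M`; once `x` has accumulated to `K`, the price has grown by a factor of roughly `e`.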
### Block Building Procedure When a transaction is constructed on the P-Chain, the amount of $AVAX burned is given by `sum($AVAX inputs) - sum($AVAX outputs)`. The amount of gas consumed by the transaction can be deterministically calculated after construction. Dividing the amount of $AVAX burned by the amount of gas consumed yields the maximum gas price that the transaction can pay. Instead of using a FIFO queue for the mempool (like the P-Chain does now), the mempool should use a priority queue ordered by the maximum gas price of each transaction. This ensures that higher-paying transactions are included first. ## Backwards Compatibility Modification of a fee mechanism is an execution change and requires a mandatory upgrade for activation. Implementers must take care to not alter the execution behavior prior to activation. After this ACP is activated, any transaction issued on the P-Chain must account for the fee mechanism defined above. Users are responsible for reconstructing their transactions to include a larger fee for quicker inclusion when the fee increases. ## Reference Implementation ACP-103 was implemented in AvalancheGo behind the `Etna` upgrade flag. The full body of work can be found tagged with the `acp103` label [here](https://github.com/ava-labs/avalanchego/pulls?q=is%3Apr+label%3Aacp103). ## Security Considerations The current fixed fee mechanism on the X-Chain and P-Chain does not robustly handle spikes in load. Migrating the P-Chain to a dynamic fee mechanism will ensure that any additional load caused by demand for new P-Chain features (such as those introduced in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md)) is properly priced given allotted processing capacity. The X-Chain, in comparison, currently has significantly lower usage, making it less likely for the demand for blockspace on it to exceed the current static fee rates. 
If necessary or desired, a future ACP can reuse the mechanism introduced here to add dynamic fee rates to the X-Chain. ## Acknowledgements Thank you to [@aaronbuchwald](https://github.com/aaronbuchwald) and [@patrick-ogrady](https://github.com/patrick-ogrady) for providing feedback prior to publication. Thank you to the authors of [EIP-4844](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-4844.md) for creating the fee design that inspired the above mechanism. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-108: Evm Event Importing (/docs/acps/108-evm-event-importing) --- title: "ACP-108: Evm Event Importing" description: "Details for Avalanche Community Proposal 108: Evm Event Importing" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/108-evm-event-importing/README.md --- | ACP | 108 | | :--- | :--- | | **Title** | EVM Event Importing Standard | | **Author(s)** | Michael Kaplan ([@mkaplan13](https://github.com/mkaplan13)) | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/114)) | | **Track** | Best Practices Track | ## Abstract Defines a standard smart contract interface and abstract implementation for importing EVM events from any blockchain within Avalanche using [Avalanche Warp Messaging](https://docs.avax.network/build/cross-chain/awm/overview). ## Motivation The implementation of Avalanche Warp Messaging within `coreth` and `subnet-evm` exposes a [mechanism for getting authenticated hashes of blocks](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IWarpMessenger.sol#L43) that have been accepted on blockchains within Avalanche. Proofs of acceptance of blocks, such as those introduced in [ACP-75](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/75-acceptance-proofs), can be used to prove arbitrary events and state changes that occurred in those blocks. 
However, there is currently no clear standard for using authenticated block hashes in smart contracts within Avalanche, making it difficult to build applications that leverage this mechanism. In order to make effective use of authenticated block hashes, contracts must be provided encoded block headers that match the authenticated block hashes and also Merkle proofs that are verified against the state or receipts root contained in the block header. With a standard interface and abstract contract implementation that handles the authentication of block hashes and verification of Merkle proofs, smart contract developers on Avalanche will be able to much more easily create applications that leverage data from other Avalanche blockchains. These types of cross-chain applications do not require any direct interaction on the source chain. ## Specification ### Event Importing Interface We propose that smart contracts importing EVM events emitted by other blockchains within Avalanche implement the following interface. #### Methods Imports the EVM event uniquely identified by the source blockchain ID, block header, transaction index, and log index. The `blockHeader` must be validated to match the authenticated block hash from the `sourceBlockchainID`. The specification for EVM block headers can be found [here](https://github.com/ava-labs/subnet-evm/blob/master/core/types/block.go#L73). The `txIndex` identifies the key of the receipts trie of the given block header that the `receiptProof` must prove inclusion of. The value obtained by verifying the `receiptProof` for that key is the encoded transaction receipt. The specification for EVM transaction receipts can be found [here](https://github.com/ava-labs/subnet-evm/blob/master/core/types/receipt.go#L62). The `logIndex` identifies which event log from the given transaction receipt is to be imported. Must emit an `EventImported` event upon success. 
```solidity function importEvent( bytes32 sourceBlockchainID, bytes calldata blockHeader, uint256 txIndex, bytes[] calldata receiptProof, uint256 logIndex ) external; ``` This interface does not require that the Warp precompile is used to authenticate block hashes. Implementations could: - Use the Warp precompile to authenticate block hashes provided directly in the transaction calling `importEvent`. - Check previously authenticated block hashes using an external contract. - Allows for a block hash to be authenticated once and used in arbitrarily many transactions afterwards. - Allows for alternative authentication mechanisms to be used, such as trusted oracles. #### Events Must trigger when an EVM event is imported. ```solidity event EventImported( bytes32 indexed sourceBlockchainID, bytes32 indexed sourceBlockHash, address indexed loggerAddress, uint256 txIndex, uint256 logIndex ); ``` ### Event Importing Abstract Contract Applications importing EVM events emitted by other blockchains within Avalanche should be able to use a standard abstract implementation of the `importEvent` interface. This abstract implementation must handle: - Authenticating block hashes from other chains. - Verifying that the encoded `blockHeader` matches the imported block hash. - Verifying the Merkle `receiptProof` for the given `txIndex` against the receipt root of the provided `blockHeader`. - Decoding the event log identified by `logIndex` from the receipt obtained from verifying the `receiptProof`. As noted above, implementations could directly use the Warp precompile's `getVerifiedWarpBlockHash` interface method for authenticating block hashes, as is done in the reference implementation [here](https://github.com/ava-labs/event-importer-poc/blob/main/contracts/src/EventImporter.sol#L51). Alternatively, implementations could use the `sourceBlockchainID` and `blockHeader` provided in the parameters to check with an external contract that the block has been accepted on the given chain. 
The specifics of such an external contract are outside the scope of this ACP, but for illustrative purposes, this could look along the lines of: ```solidity bool valid = blockHashRegistry.checkAuthenticatedBlockHash( sourceBlockchainID, keccak256(blockHeader) ); require(valid, "Invalid block header"); ``` Inheriting contracts should only need to define the logic to be executed when an event is imported. This is done by providing an implementation of the following internal function, called by `importEvent`. ```solidity function _onEventImport(EVMEventInfo memory eventInfo) internal virtual; ``` Where the `EVMEventInfo` struct is defined as: ```solidity struct EVMLog { address loggerAddress; bytes32[] topics; bytes data; } struct EVMEventInfo { bytes32 blockchainID; uint256 blockNumber; uint256 txIndex; uint256 logIndex; EVMLog log; } ``` The `EVMLog` struct is meant to match the `Log` type definition in the EVM [here](https://github.com/ava-labs/subnet-evm/blob/master/core/types/log.go#L39). ## Reference Implementation See reference implementation on [Github here](https://github.com/ava-labs/event-importer-poc). In addition to implementing the interface and abstract contract described above, the reference implementation shows how transactions can be constructed to import events using Warp block hash signatures. ## Open Questions See [here](https://github.com/ava-labs/event-importer-poc?tab=readme-ov-file#open-questions-and-considerations). ## Security Considerations The correctness of a contract using block hashes to prove that a specific event was emitted within that block depends on the correctness of: 1. The mechanism for authenticating that a block hash was finalized on another blockchain. 2. The Merkle proof validation library used to prove that a specific transaction receipt was included in the given block. 
For considerations on using Avalanche Warp Messaging to authenticate block hashes, see [here](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/30-avalanche-warp-x-evm#security-considerations). To improve confidence in the correctness of the Merkle proof validation used in implementations, well-audited and widely used libraries should be used. ## Acknowledgements Using Merkle proofs to verify events/state against root hashes is not a new idea. Protocols such as [IBC](https://ibc.cosmos.network/v8/), [Rainbow Bridge](https://github.com/Near-One/rainbow-bridge), and [LayerZero](https://layerzero.network/publications/LayerZero_Whitepaper_V1.1.0.pdf), among others, have previously suggested using Merkle proofs in a similar manner. Thanks to [@aaronbuchwald](https://github.com/aaronbuchwald) for proposing the `getVerifiedWarpBlockHash` interface be included in the AWM implementation within Avalanche EVMs, which enables this type of use case. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-113: Provable Randomness (/docs/acps/113-provable-randomness) --- title: "ACP-113: Provable Randomness" description: "Details for Avalanche Community Proposal 113: Provable Randomness" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/113-provable-randomness/README.md --- | ACP | 113 | | :------------ | :------------------------------------------------------------------------------------ | | **Title** | Provable Virtual Machine Randomness | | **Author(s)** | Tsachi Herman [http://github.com/tsachiherman](http://github.com/tsachiherman) | | **Status** | Stale ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/142)) | | **Track** | Standards | ## Future Work This ACP was marked as stale due to its documented security concerns. In order to safely utilize randomness produced by this mechanism, the consumer of the randomness must: 1. 
Define a security threshold `x` which is the maximum number of consecutive blocks which can be proposed by a malicious entity. 2. After committing to a request for randomness, the consumer must wait for `x` blocks. 3. After waiting for `x` blocks, the consumer must verify that the randomness was not biased during the `x` blocks. 4. If the randomness was biased, it would be insufficient to request randomness again, as this would allow the malicious block producer to discard any randomness that it did not like. If using the randomness mechanism proposed in this ACP, the consumer of the randomness must be able to terminate the request for randomness in such a way that no participant would desire the outcome. Griefing attacks would likely result from such a construction. ### Alternative Mechanisms There are alternative mechanisms that would not result in such security concerns, such as: - Utilizing a deterministic threshold signature scheme to finalize a block in consensus would allow the threshold signature to be used during the execution of the block. - Utilizing threshold commit-reveal schemes that guarantee that committed values will always be revealed in a timely manner. However, these mechanisms are likely too costly to be introduced into the Avalanche Primary Network due to its validator set size. It is left to a future ACP to specify the implementation of one of these alternative schemes for L1 networks with smaller sized validator sets. ## Abstract Avalanche offers developers flexibility through subnets and EVM-compatible smart contracts. However, the platform's deterministic block execution limits the use of traditional random number generators within these contracts. To address this, a mechanism is proposed to generate verifiable, non-cryptographic random number seeds on the Avalanche platform. This method ensures uniformity while allowing developers to build more versatile applications. 
## Motivation Reliable randomness is essential for building exciting applications on Avalanche. Games, participant selection, dynamic content, supply chain management, and decentralized services all rely on unpredictable outcomes to function fairly. Randomness also fuels functionalities like unique identifiers and simulations. Without a secure way to generate random numbers within smart contracts, Avalanche applications become limited. Avalanche's traditional reliance on external oracles for randomness creates complexity and bottlenecks. These oracles inflate costs, hinder transaction speed, and are cumbersome to integrate. As Avalanche scales to more Subnets, this dependence on external systems becomes increasingly unsustainable. A solution for verifiable random number generation within Avalanche solves these problems. It provides fair randomness functionality across the chains, at no additional cost. This paves the way for a more efficient Avalanche ecosystem. ## Specification ### Changes Summary The existing Avalanche protocol breaks block building into two parts: external and internal. The external block is the Snowman++ block, whereas the internal block is the actual virtual machine block. To support randomness, a BLS-based VRF implementation is used that recursively signs its own signatures as its message. Since BLS signatures are deterministic, they provide a great way to construct a reliable VRF. For proposers that do not have a BLS key associated with their node, the hash of the signature from the previous round is used in place of their signature. In order to bootstrap the signature chain, a missing signature would be replaced with a byte slice that is the hash product of a verifiable and trustable seed. The changes proposed here would affect the way blocks are validated. Therefore, when this change gets implemented, it needs to be deployed as a mandatory upgrade. 
``` +-----------------------+ +-----------------------+ | Block n | <-------- | Block n+1 | +-----------------------+ +-----------------------+ | VRF-Sig(n) | | VRF-Sig(n+1) | | ... | | ... | +-----------------------+ +-----------------------+ +-----------------------+ +-----------------------+ | VM n | | VM n+1 | +-----------------------+ +-----------------------+ | VRF-Out(n) | | VRF-Out(n+1) | +-----------------------+ +-----------------------+ VRF-Sig(n+1) = Sign(VRF-Sig(n), Block n+1 proposer's BLS key) VRF-Out(n) = Hash(VRF-Sig(n)) ``` ### Changes Details #### Step 1. Adding BLS signature to proposed blocks ```go type statelessUnsignedBlock struct { … vrfSig []byte `serialize:"true"` } ``` #### Step 2. Populate signature When a block proposer attempts to build a new block, it would need to use the parent block as a reference. The `vrfSig` field within each block is going to be daisy-chained to the `vrfSig` field from its parent block. Populating the `vrfSig` would follow this logic: 1. The current proposer has a BLS key a. If the parent block has an empty `vrfSig` signature, the proposer would sign the bootStrappingBlockSignature with its BLS key. See the bootStrappingBlockSignature details below. This is the base case. b. If the parent block does not have an empty `vrfSig` signature, that signature would be signed using the proposer's BLS key. 2. The current proposer does not have a BLS key a. If the parent block has a non-empty `vrfSig` signature, the proposer would set the proposed block `vrfSig` to the 32-byte hash result of the following preimage: ``` +-------------------------+----------+------------+ | prefix : | [8]byte | "rng-derv" | +-------------------------+----------+------------+ | vrfSig : | [96]byte | 96 bytes | +-------------------------+----------+------------+ ``` b. If the parent block has an empty `vrfSig` signature, the proposer would leave the `vrfSig` on the new block empty. 
The bootStrappingBlockSignature that would be used above is the hash of the following preimage: ``` +-----------------------+----------+------------+ | prefix : | [8]byte | "rng-root" | +-----------------------+----------+------------+ | networkID: | uint32 | 4 bytes | +-----------------------+----------+------------+ | chainID : | [32]byte | 32 bytes | +-----------------------+----------+------------+ ``` #### Step 3. Signature Verification This signature verification would perform the exact opposite of what was done in step 2, and would verify the cryptographic correctness of the operation. Validating the `vrfSig` would follow this logic: 1. The proposer has a BLS key a. If the parent block's `vrfSig` was non-empty, then the `vrfSig` in the proposed block is verified to be a valid BLS signature of the parent block's `vrfSig` value for the proposer's BLS public key. b. If the parent block's `vrfSig` was empty, then a BLS signature verification of the proposed block `vrfSig` against the proposer's BLS public key and bootStrappingBlockSignature would take place. 2. The proposer does not have a BLS key a. If the parent block had a non-empty `vrfSig`, then the hash of the preimage (as described above) would be compared against the proposed `vrfSig`. b. If the parent block has an empty `vrfSig`, then the proposer's `vrfSig` would be validated to be empty. #### Step 4. Extract the VRF Out and pass to block builders Calculating the VRF Out would be done by hashing the preimage of the following struct: ``` +-----------------------+----------+------------+ | prefix : | [8]byte | "vrfout " | +-----------------------+----------+------------+ | vrfout: | [96]byte | 96 bytes | +-----------------------+----------+------------+ ``` Before calculating the VRF Out, the method needs to explicitly check the case where the `vrfSig` is empty. In that case, the output of the VRF Out needs to be empty as well. 
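The hash derivations in the steps above can be sketched as follows. This is an illustrative Python model only: `sha256` stands in for the 32-byte hash (this excerpt does not pin a specific hash function), the padding of the `"vrfout "` prefix to its `[8]byte` field is an assumption, and the actual BLS signing/verification performed by proposers is elided:

```python
import hashlib

def bootstrap_signature(network_id: int, chain_id: bytes) -> bytes:
    """Hash of the "rng-root" preimage used to seed the signature chain."""
    return hashlib.sha256(b"rng-root" + network_id.to_bytes(4, "big") + chain_id).digest()

def derived_vrf_sig(parent_vrf_sig: bytes) -> bytes:
    """Fallback for proposers without a BLS key: hash of the "rng-derv" preimage."""
    return hashlib.sha256(b"rng-derv" + parent_vrf_sig).digest()

def vrf_out(vrf_sig: bytes) -> bytes:
    """VRF-Out(n) = Hash("vrfout " || VRF-Sig(n)); an empty vrfSig yields empty output."""
    if not vrf_sig:
        return b""
    # Pad the 7-character prefix to the [8]byte field (padding scheme assumed here).
    return hashlib.sha256(b"vrfout ".ljust(8, b" ") + vrf_sig).digest()
```

Note how the empty-`vrfSig` check in `vrf_out` mirrors the explicit requirement at the end of Step 4.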
## Backwards Compatibility The above design takes backward compatibility into consideration. The chain would keep working as before, and at some point, would have the newly added `vrfSig` populated. From a usage perspective, each VM would need to make its own decision on whether it should use the newly provided random seed. Initially, this random seed would be all zeros, and would get populated once the feature has rolled out to a sufficient number of nodes. Also, as mentioned in the summary, these changes would necessitate a network upgrade. ## Reference Implementation A full reference implementation has not been provided yet. It will be provided once this ACP is considered `Implementable`. ## Security Considerations Virtual machine random seeds, while appearing to offer a source of randomness within smart contracts, fall short when it comes to cryptographic security. Here's a breakdown of the critical issues: - Limited Permutation Space: The number of possible random values is derived from the number of validators. While no validator, nor a validator set, would be able to manipulate the randomness into any single value, a nefarious actor(s) might be able to exclude specific numbers. - Predictability Window: The seed value might be accessible to other parties before the smart contract can benefit from its uniqueness. This predictability window creates a vulnerability. An attacker could potentially observe the seed generation process and predict the sequence of "random" numbers it will produce, compromising the entire cryptographic foundation of the smart contract. Despite these limitations appearing severe, attackers face significant hurdles to exploit them. First, the attacker can't control the random number, limiting the attack's effectiveness to how that number is used. Second, a substantial amount of AVAX is needed. And last, such an attack would likely decrease AVAX's value, hurting the attacker financially. 
One potential attack vector involves collusion among multiple proposers to manipulate the random number selection. These attackers could strategically choose to propose or abstain from proposing blocks, effectively introducing a bias into the system. By working together, they could potentially increase their chances of generating a random number favorable to their goals. However, the effectiveness of this attack is significantly limited for the following reasons: - Limited options: While colluding attackers expand their potential random number choices, the overall pool remains immense (2^256 possibilities). This drastically reduces their ability to target a specific value. - Protocol's countermeasure: The protocol automatically eliminates any bias introduced by previous proposals once an honest proposer submits their block. - Detectability: Exploitation of this attack vector is readily identifiable. A successful attack necessitates coordinated collusion among multiple nodes to synchronize their proposer slots for a specific block height (the proposer slot order is known in advance). Subsequent to this alignment, a designated node constructs the block proposal. The network maintains a record of the proposer slot utilized for each block. A value of zero for the proposer slot unequivocally indicates the absence of an exploit. Increasing values correlate with a heightened risk of exploitation. It is important to note that non-zero slot numbers may also arise from transient network disturbances. While this attack is theoretically possible, its practical impact is negligible due to the vast number of potential outcomes and the protocol's inherent safeguards. ## Open Questions ### How would the proposed changes impact proposer selection and its inherent bias? The proposed modifications will not influence the selection process for block proposers. Proposers retain the ability to determine which transactions are included in a block. 
This inherent proposer bias is unaffected by the proposed changes. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-118: Warp Signature Request (/docs/acps/118-warp-signature-request) --- title: "ACP-118: Warp Signature Request" description: "Details for Avalanche Community Proposal 118: Warp Signature Request" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/118-warp-signature-request/README.md --- | ACP | 118 | | :--- | :--- | | **Title** | Warp Signature Interface Standard | | **Author(s)** | Cam Schultz ([@cam-schultz](https://github.com/cam-schultz)) | | **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/123)) | | **Track** | Best Practices Track | ## Abstract Proposes a standard [AppRequest](https://github.com/ava-labs/avalanchego/blob/master/proto/p2p/p2p.proto#L385) payload format type for requesting Warp signatures for the provided bytes, such that signatures may be requested in a VM-agnostic manner. To make this concrete, this standard type should be defined in AvalancheGo such that VMs can import it at the source code level. This will simplify signature aggregator implementations by allowing them to depend only on AvalancheGo for message construction, rather than individual VM codecs. ## Motivation Warp message signatures consist of an aggregate BLS signature composed of the individual signatures of a subnet's validators. Individual signatures need to be retrievable by the party that wishes to construct an aggregate signature. 
At present, this is left to VMs to implement, as is the case with [Subnet EVM](https://github.com/ava-labs/subnet-evm/blob/v0.6.7/plugin/evm/message/signature_request.go#20) and [Coreth](https://github.com/ava-labs/coreth/blob/v0.13.6-rc.0/plugin/evm/message/signature_request.go#L20). This creates friction in applications that are intended to operate across many VMs (or distinct implementations of the same VM). As an example, the reference Warp message relayer implementation, [awm-relayer](https://github.com/ava-labs/awm-relayer), fetches individual signatures from validators and aggregates them before sending the Warp message to its destination chain for verification. However, Subnet EVM and Coreth have distinct codecs, requiring the relayer to [switch](https://github.com/ava-labs/awm-relayer/blob/v1.4.0-rc.0/relayer/application_relayer.go#L372) according to the target codebase. Another example is ACP-75, which aims to implement acceptance proofs using Warp. The signature aggregation mechanism is not [specified](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/75-acceptance-proofs/README.md#signature-aggregation), which is a blocker for that ACP to be marked implementable. Standardizing the Warp Signature Request interface by defining it as a format for `AppRequest` message payloads in AvalancheGo would simplify the implementation of ACP-75, and streamline signature aggregation for out-of-protocol services such as Warp message relayers. ## Specification We propose the following types, implemented as Protobuf types that may be decoded from the `AppRequest`/`AppResponse` `app_bytes` field. By way of example, this approach is currently used to [implement](https://github.com/ava-labs/avalanchego/blob/v1.11.10-status-removal/proto/sdk/sdk.proto#7) and [parse](https://github.com/ava-labs/avalanchego/blob/v1.11.10-status-removal/network/p2p/gossip/message.go#22) gossip `AppRequest` types. - `SignatureRequest` includes two fields.
`message` specifies the payload that the returned signature should correspond to, namely a serialized unsigned Warp message. `justification` specifies arbitrary data that the queried node may use to decide whether or not it is willing to sign `message`. `justification` may not be required by every VM implementation, but `message` should always contain the bytes to be signed. It is up to the VM to define the validity requirements for the `message` and `justification` payloads. ```protobuf message SignatureRequest { bytes message = 1; bytes justification = 2; } ``` - `SignatureResponse` is the corresponding `AppResponse` type that returns the requested signature. ```protobuf message SignatureResponse { bytes signature = 1; } ``` ### Handlers For each of the above types, VMs must implement corresponding `AppRequest` and `AppResponse` handlers. The `AppRequest` handler should be [registered](https://github.com/ava-labs/avalanchego/blob/v1.11.10-status-removal/network/p2p/network.go#L173) using the canonical handler ID, defined as `2`. ## Use Cases Generally speaking, `SignatureRequest` can be used to request a signature over a Warp message by serializing the unsigned Warp message into `message`, and populating `justification` as needed. ### Sign a known Warp Message Subnet EVM and Coreth store messages that have been seen (i.e. on-chain messages sent through the [Warp Precompile](https://github.com/ava-labs/subnet-evm/tree/v0.6.7/precompile/contracts/warp) and [off-chain](https://github.com/ava-labs/subnet-evm/blob/v0.6.7/plugin/evm/config.go#L226) Warp messages) such that a signature over that message can be provided on request. `SignatureRequest` can be used for this case by specifying the Warp message in `message`. The queried node may then look up the Warp message in its database and return the signature. In this case, `justification` is not needed.
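To make the two-field wire layout concrete, the following self-contained Go sketch hand-encodes a `SignatureRequest` in protobuf wire format. In practice the generated bindings from AvalancheGo's `sdk.proto` would be used; the helper function and the payload contents below are hypothetical illustrations, not part of this standard:

```go
package main

import "fmt"

// encodeBytesField encodes a single protobuf length-delimited field
// (wire type 2). This sketch assumes payloads shorter than 128 bytes,
// so the varint length fits in one byte.
func encodeBytesField(fieldNum byte, data []byte) []byte {
	out := []byte{fieldNum<<3 | 2, byte(len(data))}
	return append(out, data...)
}

func main() {
	// SignatureRequest { bytes message = 1; bytes justification = 2; }
	message := []byte("serialized-unsigned-warp-msg") // hypothetical payload
	justification := []byte("context")                // optional, VM-defined

	req := append(encodeBytesField(1, message), encodeBytesField(2, justification)...)
	fmt.Printf("encoded %d bytes; field tags 0x%02x, 0x%02x\n",
		len(req), req[0], req[2+len(message)])
}
```

Decoding on the queried node reverses the same two-field layout; a real implementation should rely on the canonical Protobuf definitions rather than manual encoding.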
### Attest to an on-chain event Subnet EVM and Coreth also support attesting to block hashes via Warp, by serving signature requests made using the following `AppRequest` type: ``` type BlockSignatureRequest struct { BlockID ids.ID } ``` `SignatureRequest` can achieve this by specifying an unsigned Warp message with the `BlockID` as the payload, and serializing that message into `message`. `justification` may optionally be used to provide additional context, such as the block height of the given block ID. ### Confirm that an event did not occur With [ACP-77](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets), Subnets will have the ability to manage their own validator sets. The Warp message payload contained in a `RegisterSubnetValidatorTx` includes an `expiry`, after which the specified validation ID (i.e. a unique hash over the Subnet ID, node ID, stake weight, and expiry) becomes invalid. The Subnet needs to know that this validation ID is expired so that it can keep its locally tracked validator set in sync with the P-Chain. We also assume that the P-Chain will not persist expired or invalid validation IDs. We can use `SignatureRequest` to construct a Warp message attesting that the validation ID expired. We do so by serializing an unsigned Warp message containing the validation ID into `message`, and providing the validation ID hash preimage in `justification` for the P-Chain to reconstruct the expired validation ID. ## Security Considerations VMs have full latitude when implementing `SignatureRequest` handlers, and should take careful consideration of what `message` payloads their implementation should be willing to sign, given a `justification`. Some considerations include, but are not limited to: - Input validation. Handlers should validate `message` and `justification` payloads to ensure that they decode to coherent types, and that they contain only expected data. - Signature DoS.
AvalancheGo's peer-to-peer networking stack implements message rate limiting to mitigate the risk of DoS, but VMs should also consider the cost of parsing and signing a `message` payload. - Payload collision. `message` payloads should be implemented as distinct types that do not overlap with one another within the context of signed Warp messages from the VM. For instance, a `message` payload specifying a 32-byte hash may be interpreted as a transaction hash, a block hash, or a blockchain ID. ## Backwards Compatibility This change is backwards compatible for VMs, as nodes running older versions that do not support the new message types will simply drop incoming messages. ## Reference Implementation A reference implementation containing the Protobuf types and the canonical handler ID can be found [here](https://github.com/ava-labs/avalanchego/pull/3218). ## Acknowledgements Thanks to @joshua-kim, @iansuvak, @aaronbuchwald, @michaelkaplan13, and @StephenButtolph for discussion and feedback on this ACP. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-125: Basefee Reduction (/docs/acps/125-basefee-reduction) --- title: "ACP-125: Basefee Reduction" description: "Details for Avalanche Community Proposal 125: Basefee Reduction" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/125-basefee-reduction/README.md --- | ACP | 125 | | :--- | :--- | | **Title** | Reduce C-Chain minimum base fee from 25 nAVAX to 1 nAVAX | | **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Darioush Jalali ([@darioush](https://github.com/darioush)) | | **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/127)) | | **Track** | Standards | ## Abstract Reduce the minimum base fee on the Avalanche C-Chain from 25 nAVAX to 1 nAVAX.
## Motivation With dynamic fees, the gas price is supposed to be a result of a continuous auction such that the consumed gas per second converges to the target gas usage per second. When dynamic fees were first introduced, safeguards were added to ensure the mechanism worked as intended, such as a relatively high minimum gas price and a maximum gas price. The maximum gas price has since been entirely removed. The minimum gas price has been reduced significantly. However, the base fee is often observed pinned to this minimum. This shows that it is higher than what the market demands, and therefore it is artificially reducing network usage. ## Specification The dynamic fee calculation currently enforces a minimum base fee of 25 nAVAX. This change proposes reducing the minimum base fee to 1 nAVAX upon the next network upgrade activation. ## Backwards Compatibility This change modifies the consensus rules for the C-Chain and therefore requires a network upgrade. ## Reference Implementation A draft implementation of this ACP for the coreth VM can be found [here](https://github.com/ava-labs/coreth/pull/604/files). ## Security Considerations Lower gas costs may increase state bloat. However, we note that the dynamic fee algorithm responded appropriately during periods of high use (such as Dec. 2023), which gives reasonable confidence that enforcing a 25 nAVAX minimum fee is no longer necessary. ## Open Questions N/A ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-13: Subnet Only Validators (/docs/acps/13-subnet-only-validators) --- title: "ACP-13: Subnet Only Validators" description: "Details for Avalanche Community Proposal 13: Subnet Only Validators" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/13-subnet-only-validators/README.md --- | ACP | 13 | | :--- | :--- | | **Title** | Subnet-Only Validators (SOVs) | | **Author(s)** | Patrick O'Grady ([contact@patrickogrady.xyz](mailto:contact@patrickogrady.xyz)) | | **Status** | Stale | | **Track** | Standards | | **Superseded-By** | [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md) | ## Abstract Introduce a new type of staker, Subnet-Only Validators (SOVs), that can validate an Avalanche Subnet and participate in Avalanche Warp Messaging (AWM) without syncing or becoming a Validator on the Primary Network. Require SOVs to pay a refundable fee of 500 $AVAX on the P-Chain to register as a Subnet Validator instead of staking at least 2000 $AVAX, the minimum requirement to become a Primary Network Validator. Preview a future transition to Pay-As-You-Go Subnet Validation and $AVAX-Augmented Subnet Security. _This ACP does not modify/deprecate the existing Subnet Validation semantics for Primary Network Validators._ ## Motivation Each node operator must stake at least 2000 $AVAX ($20k at the time of writing) to first become a Primary Network Validator before they qualify to become a Subnet Validator. Most Subnets aim to launch with at least 8 Subnet Validators, which requires staking 16000 $AVAX ($160k at time of writing). 
All Subnet Validators, to satisfy their role as Primary Network Validators, must also [allocate 8 AWS vCPU, 16 GB RAM, and 1 TB storage](https://github.com/ava-labs/avalanchego/blob/master/README.md#installation) to sync the entire Primary Network (X-Chain, P-Chain, and C-Chain) and participate in its consensus, in addition to whatever resources are required for each Subnet they are validating. Avalanche Warp Messaging (AWM), the native interoperability mechanism for the Avalanche Network, provides a way for Subnets to communicate with each other/C-Chain without a trusted intermediary. Any Subnet Validator must be able to register a BLS key and participate in AWM, otherwise a Subnet may not be able to generate a BLS Multi-Signature with sufficient participating stake. Regulated entities that are prohibited from validating permissionless, smart contract-enabled blockchains (like the C-Chain) can’t launch a Subnet because they can’t opt-out of Primary Network Validation. This deployment blocker prevents a large cohort of Real World Asset (RWA) issuers from bringing unique, valuable tokens to the Avalanche Ecosystem (that could move between C-Chain <-> Subnets using AWM/Teleporter). A widely validated Subnet that is not properly metered could destabilize the Primary Network if usage spikes unexpectedly. Underprovisioned Primary Network Validators running such a Subnet may exit with an OOM exception, see degraded disk performance, or find it difficult to allocate CPU time to P/X/C-Chain validation. The inverse also holds for Subnets with the Primary Network (where some undefined behavior could bring a Subnet offline). Although the fee paid to the Primary Network to operate a Subnet does not go up with the amount of activity on the Subnet, the fixed, upfront cost of setting up a Subnet Validator on the Primary Network deters new projects that prefer smaller, even variable, costs until demand is observed. 
_Unlike L2s that pay some increasing fee (usually denominated in units per transaction byte) to an external chain for data availability and security as activity scales, Subnets provide their own security/data availability and the only cost operators must pay from processing more activity is the hardware cost of supporting additional load._ Elastic Subnets allow any community to weight Subnet Validation based on some staking token and reward Subnet Validators with high uptime with said staking token. However, there is no way for $AVAX holders on the Primary Network to augment the security of such Subnets. ## Specification ### Required Changes 1) Introduce a new type of staker, Subnet-Only Validators (SOVs), that can validate an Avalanche Subnet and participate in Avalanche Warp Messaging (AWM) without syncing or becoming a Validator on the Primary Network 2) Introduce a refundable fee (called a "lock") of 500 $AVAX that nodes must pay to become an SOV 3) Introduce a non-refundable fee of 0.1 $AVAX that SOVs must pay to become an SOV 4) Introduce a new transaction type on the P-Chain to register as an SOV (i.e. `AddSubnetOnlyValidatorTx`) 5) Add a mode to ANCs that allows SOVs to optionally disable full Primary Network verification (only need to verify P-Chain) 6) ANCs track IPs for SOVs to ensure Subnet Validators can find peers whether or not they are Primary Network Validators 7) Provide a guaranteed rate limiting allowance for SOVs like Primary Network Validators Because SOVs do not validate the Primary Network, they will not be rewarded with $AVAX for "locking" the 500 $AVAX required to become an SOV. This enables people interested in validating Subnets to opt for a lower upfront $AVAX commitment and lower infrastructure costs instead of $AVAX rewards. 
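As a rough illustration of changes 2 and 3 above, the following self-contained Go sketch models the balance movements of registering an SOV: a refundable 500 $AVAX lock plus a non-refundable 0.1 $AVAX fee. The function name and nAVAX accounting here are hypothetical illustrations, not part of the specification:

```go
package main

import "fmt"

// Amounts from the Required Changes above, denominated in nAVAX
// (1 AVAX = 10^9 nAVAX).
const (
	refundableLock   uint64 = 500 * 1_000_000_000 // returned when the SOV exits
	nonRefundableFee uint64 = 100_000_000         // 0.1 AVAX, burned on registration
)

// registerSOV sketches the balance movement of a hypothetical
// AddSubnetOnlyValidatorTx: the lock is held by the P-Chain until the
// validation period ends, while the fee is burned immediately.
func registerSOV(balance uint64) (remaining uint64, ok bool) {
	total := refundableLock + nonRefundableFee
	if balance < total {
		return balance, false
	}
	return balance - total, true
}

func main() {
	remaining, ok := registerSOV(600 * 1_000_000_000) // start with 600 AVAX
	fmt.Println(remaining, ok)                        // 99.9 AVAX remain spendable
}
```

The key contrast with Primary Network staking is that the 500 $AVAX lock earns no staking rewards but is returned in full, while only the small registration fee is burned.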
Additionally, SOVs will only be required to sync the P-Chain (not the X/C-Chain) to track any validator set changes in their Subnet and to support Cross-Subnet communication via AWM (see “Primary Network Partial Sync” mode introduced in [Cortina 8](https://github.com/ava-labs/avalanchego/releases/tag/v1.10.8)). The lower resource requirement in this "minimal mode" will provide Subnets with greater flexibility of validation hardware requirements as operators are not required to reserve any resources for C-Chain/X-Chain operation. If an SOV wishes to sync the entire Primary Network, they still can. ### Future Work The previously described specification is a minimal, additive change to Subnet Validation semantics that prepares the Avalanche Network for a more flexible Subnet model. It alone, however, neither communicates this flexibility nor provides an alternative use of $AVAX that would have otherwise been used to create Subnet Validators. Below are two high-level ideas (Pay-As-You-Go Subnet Validation Registration Fees and $AVAX-Augmented Security) that highlight how this initial change could be extended in the future. If the Avalanche Community is interested in their adoption, they should each be proposed as a unique ACP where they can be properly specified. **These ideas are only suggestions for how the Avalanche Network could be modified in the future if this ACP is adopted. Supporting this ACP does not require supporting these ideas or committing to their rollout.** #### Pay-As-You-Go Subnet Validation Registration Fees _Transition Subnet Validator registration to a dynamically priced, continuously charged fee (that doesn't require locking large amounts of $AVAX upfront)._ While it would be possible to just transition to a lower required "lock" amount, many think that it would be more competitive to transition to a dynamically priced, continuous payment mechanism to register as a Subnet Validator.
This new mechanism would target some $Y nAVAX fee that would be paid by each Subnet Validator per Subnet per second (pulling from a "Subnet Validator's Account") instead of requiring a large upfront lockup of $AVAX. The rate of nAVAX/second should be set by the demand for validating Subnets on Avalanche compared to some usage target per Subnet and across all Subnets. This rate should be locked for each Subnet Validation period to ensure operators are not subject to surprise costs if demand rises significantly over time. The optimization work outlined in [BLS Multi-Signature Voting](https://hackmd.io/@patrickogrady/100k-subnets#How-will-BLS-Multi-Signature-uptime-voting-work) should allow the min rate to be set as low as ~512-4096 nAVAX/second (or 1.3-10.6 $AVAX/month). Fees paid to the Avalanche Network for PAYG could be burned, like all other P-Chain, X-Chain, and C-Chain transactions, or they could be partially rewarded to Primary Network Validators as a "boost" over the existing staking rewards. The nice byproduct of the latter approach is that it better aligns Primary Network Validators with the growth of Subnets. #### $AVAX-Augmented Subnet Security _Allow pledging unstaked $AVAX to Subnet Validators on Elastic Subnets that can be slashed if said Subnet Validator commits an attributable fault (i.e. proposes/signs conflicting blocks/AWM payloads). Reward locked $AVAX associated with Subnet Validators that were not slashed with Elastic Subnet staking rewards._ Currently, the only way to secure an Elastic Subnet is to stake its custom staking token (defined in the `TransformSubnetTx`). Many have requested the option to use $AVAX for this token, however, this could easily allow an adversary to take over small Elastic Subnets (where the amount of $AVAX staked may be much less than the circulating supply). 
$AVAX-Augmented Subnet Security would allow anyone holding $AVAX to lock it to specific Subnet Validators and earn Elastic Subnet reward tokens for supporting honest participants. Recall, all stake management on the Avalanche Network (even for Subnets) occurs on the P-Chain. Thus, staked tokens ($AVAX and/or custom staking tokens used in Elastic Subnets) and stake weights (used for AWM verification) are secured by the full $AVAX stake of the Primary Network. $AVAX-Augmented Subnet Security, like staking, would be implemented on the P-Chain and enjoy the full security of the Primary Network. This approach means locking $AVAX occurs on the Primary Network (no need to transfer $AVAX to a Subnet, which may not be secured by meaningful value yet) and proofs of malicious behavior are processed on the Primary Network (a colluding Subnet could otherwise choose not to process a proof that would lead to their "lockers" being slashed). _This native approach is comparable to the idea of using $ETH to secure DA on [EigenLayer](https://www.eigenlayer.xyz/) (without reusing stake) or $BTC to secure Cosmos Zones on [Babylon](https://babylonchain.io/) (but not using an external ecosystem)._ ## Backwards Compatibility * Existing Subnet Validation semantics for Primary Network Validators are not modified by this ACP. This means that all existing Subnet Validators can continue validating both the Primary Network and whatever Subnets they are validating. This change would just provide a new option for Subnet Validators that allows them to sacrifice their staking rewards for a smaller upfront $AVAX commitment and lower infrastructure costs. * Support for this ACP would require adding a new transaction type to the P-Chain (i.e. `AddSubnetOnlyValidatorTx`). This new transaction is an execution-breaking change that would require a mandatory Avalanche Network upgrade to activate. ## Reference Implementation A full implementation will be provided once this ACP is considered `Implementable`.
However, some initial ideas are presented below. ### `AddSubnetOnlyValidatorTx` ```text type AddSubnetOnlyValidatorTx struct { // Metadata, inputs and outputs BaseTx `serialize:"true"` // Describes the validator // The NodeID included in [Validator] must be the Ed25519 public key. Validator `serialize:"true" json:"validator"` // ID of the subnet this validator is validating Subnet ids.ID `serialize:"true" json:"subnetID"` // [Signer] is the BLS key for this validator. // Note: We do not enforce that the BLS key is unique across all validators. // This means that validators can share a key if they so choose. // However, a NodeID does uniquely map to a BLS key Signer signer.Signer `serialize:"true" json:"signer"` // Where to send locked tokens when done validating LockOuts []*avax.TransferableOutput `serialize:"true" json:"lock"` // Where to send validation rewards when done validating ValidatorRewardsOwner fx.Owner `serialize:"true" json:"validationRewardsOwner"` // Where to send delegation rewards when done validating DelegatorRewardsOwner fx.Owner `serialize:"true" json:"delegationRewardsOwner"` // Fee this validator charges delegators as a percentage, times 10,000 // For example, if this validator has DelegationShares=300,000 then they // take 30% of rewards from delegators DelegationShares uint32 `serialize:"true" json:"shares"` } ``` _`AddSubnetOnlyValidatorTx` is almost the same as [`AddPermissionlessValidatorTx`](https://github.com/ava-labs/avalanchego/blob/638000c42e5361e656ffbc27024026f6d8f67810/vms/platformvm/txs/add_permissionless_validator_tx.go#L33-L58), the only exception being that `StakeOuts` are now `LockOuts`._ ### `GetSubnetPeers` To support tracking SOV IPs, a new message should be added to the P2P specification that allows Subnet Validators to request the IP of all peers a node knows about on a Subnet (these Signed IPs won't be gossiped like they are for Primary Network Validators because they don't need to be known by the entire Avalanche 
Network): ```text message GetSubnetPeers { bytes subnet_id = 1; } ``` _It would be a nice addition if a bloom filter could also be provided here so that an ANC only sends IPs of peers that the original sender does not know._ ANCs should respond to this incoming message with a [`PeerList` message](https://github.com/ava-labs/avalanchego/blob/638000c42e5361e656ffbc27024026f6d8f67810/proto/p2p/p2p.proto#L135-L148). ## Security Considerations * Any Subnet Validator running in "Partial Sync Mode" will not be able to verify Atomic Imports on the P-Chain and will rely entirely on Primary Network consensus to only accept valid P-Chain blocks. * High-throughput Subnets will be better isolated from the Primary Network and should improve its resilience (i.e. surges of traffic on some Subnet cannot destabilize a Primary Network Validator). * Avalanche Network Clients (ANCs) must track IPs and provide allocated bandwidth for SOVs even though they are not Primary Network Validators. ## Open Questions * To help orient the Avalanche Community around this wide-ranging and likely to be long-running conversation around the relationship between the Primary Network and Subnets, should we come up with a project name to describe the effort? I've been casually referring to all of these things as the _Astra Upgrade Track_ but definitely up for discussion (may be more confusing than it is worth to do this). ## Appendix A draft of this ACP was posted in the ["Ideas" Discussion Board](https://github.com/avalanche-foundation/ACPs/discussions/10#discussioncomment-7373486), as suggested by the [ACP README](https://github.com/avalanche-foundation/ACPs#step-1-post-your-idea-to-github-discussions). Feedback on this draft was collected and addressed on both the "Ideas" Discussion Board and on [HackMD](https://hackmd.io/@patrickogrady/100k-subnets#Feedback-to-Draft-Proposal).
## Acknowledgements Thanks to @luigidemeo1, @stephenbuttolph, @aaronbuchwald, @dhrubabasu, and @abi87 for their feedback on these ideas. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-131: Cancun Eips (/docs/acps/131-cancun-eips) --- title: "ACP-131: Cancun Eips" description: "Details for Avalanche Community Proposal 131: Cancun Eips" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/131-cancun-eips/README.md --- | ACP | 131 | | :--- | :--- | | **Title** | Activate Cancun EIPs on C-Chain and Subnet-EVM chains | | **Author(s)** | Darioush Jalali ([@darioush](https://github.com/darioush)), Ceyhun Onur ([@ceyonur](https://github.com/ceyonur)) | | **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/139)) | | **Track** | Standards, Subnet | ## Abstract Enable new EVM opcodes and opcode changes in accordance with the following EIPs on the Avalanche C-Chain and Subnet-EVM chains: - [EIP-4844: BLOBHASH opcode](https://eips.ethereum.org/EIPS/eip-4844) - [EIP-7516: BLOBBASEFEE opcode](https://eips.ethereum.org/EIPS/eip-7516) - [EIP-1153: Transient storage](https://eips.ethereum.org/EIPS/eip-1153) - [EIP-5656: MCOPY opcode](https://eips.ethereum.org/EIPS/eip-5656) - [EIP-6780: SELFDESTRUCT only in same transaction](https://eips.ethereum.org/EIPS/eip-6780) Note blob transactions from EIP-4844 are excluded and blocks containing them will still be considered invalid. ## Motivation The listed EIPs were activated on Ethereum mainnet as part of the [Cancun upgrade](https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/cancun.md#included-eips). 
This proposal is to activate them on the Avalanche C-Chain in the next network upgrade, to maintain compatibility with upstream EVM tooling, infrastructure, and developer experience (e.g., Solidity compiler defaults >= [0.8.25](https://github.com/ethereum/solidity/releases/tag/v0.8.25)). Additionally, it recommends the activation of the same EIPs on Subnet-EVM chains. ## Specification & Reference Implementation The opcodes (EVM execution modifications) and block header modifications should be adopted as specified in the EIPs themselves. Other changes such as enabling new transaction types or mempool modifications are not in scope (specifically, blob transactions from EIP-4844 are excluded and blocks containing them are considered invalid). ANCs (Avalanche Network Clients) can adopt the implementation as specified in the [coreth](https://github.com/ava-labs/coreth) repository, which was adopted from the [go-ethereum v1.13.8](https://github.com/ethereum/go-ethereum/releases/tag/v1.13.8) release in this [PR](https://github.com/ava-labs/coreth/pull/550). In particular, note the following code: - [Activation of new opcodes](https://github.com/ava-labs/coreth/blob/7b875dc21772c1bb9e9de5bc2b31e88c53055e26/core/vm/jump_table.go#L93) - Activation of Cancun in next Avalanche upgrade: - [C-Chain](https://github.com/ava-labs/coreth/pull/610) - [Subnet-EVM chains](https://github.com/ava-labs/subnet-evm/blob/fa909031ed148484c5072d949c5ed73d915ce1ed/params/config_extra.go#L186) - `ParentBeaconRoot` is enforced to be present and set to the zero value [here](https://github.com/ava-labs/coreth/blob/7b875dc21772c1bb9e9de5bc2b31e88c53055e26/plugin/evm/block_verification.go#L287-L288). This field is retained for future use and compatibility with upstream tooling. - Forbids blob transactions by enforcing `BlobGasUsed` to be 0 [here](https://github.com/ava-labs/coreth/pull/611/files#diff-532a2c6a5365d863807de5b435d8d6475552904679fd611b1b4b10d3bf4f5010R267).
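The last two header constraints can be sketched as a standalone check. This is a simplified illustration with a hypothetical `Header` type, not coreth's actual verification code:

```go
package main

import (
	"errors"
	"fmt"
)

// Header is a simplified stand-in for the post-Cancun block header
// fields relevant to this ACP.
type Header struct {
	ParentBeaconRoot *[32]byte
	BlobGasUsed      *uint64
}

// verifyCancunFields mirrors the rules above: ParentBeaconRoot must be
// present and zero, and, since blob transactions are forbidden,
// BlobGasUsed must be present and zero.
func verifyCancunFields(h *Header) error {
	if h.ParentBeaconRoot == nil || *h.ParentBeaconRoot != ([32]byte{}) {
		return errors.New("ParentBeaconRoot must be present and zero")
	}
	if h.BlobGasUsed == nil || *h.BlobGasUsed != 0 {
		return errors.New("BlobGasUsed must be 0")
	}
	return nil
}

func main() {
	zeroRoot := [32]byte{}
	var zeroGas uint64
	fmt.Println(verifyCancunFields(&Header{ParentBeaconRoot: &zeroRoot, BlobGasUsed: &zeroGas}))
}
```

A block whose header fails either check is rejected, which is how blob transactions remain invalid even though the Cancun opcodes are active.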
_Note:_ Subnets are sovereign with regard to their validator set and state transition rules, and can choose to opt out of this proposal by making a code change in their respective Subnet-EVM client. ## Backwards Compatibility The original EIP authors highlighted the following considerations. For full details, refer to the original EIPs: - [EIP-4844](https://eips.ethereum.org/EIPS/eip-4844#backwards-compatibility): Blob transactions are not proposed to be enabled on Avalanche, so concerns related to mempool or transaction data availability are not applicable. - [EIP-6780](https://eips.ethereum.org/EIPS/eip-6780#backwards-compatibility) "Contracts that depended on re-deploying contracts at the same address using CREATE2 (after a SELFDESTRUCT) will no longer function properly if the created contract does not call SELFDESTRUCT within the same transaction." Adoption of this ACP modifies consensus rules for the C-Chain, therefore it requires a network upgrade. It is recommended that Subnet-EVM chains also adopt this ACP and follow the same upgrade time as Avalanche's next network upgrade. ## Security Considerations Refer to the original EIPs for security considerations: - [EIP 1153](https://eips.ethereum.org/EIPS/eip-1153#security-considerations) - [EIP 4788](https://eips.ethereum.org/EIPS/eip-4788#security-considerations) - [EIP 4844](https://eips.ethereum.org/EIPS/eip-4844#security-considerations) - [EIP 5656](https://eips.ethereum.org/EIPS/eip-5656#security-considerations) - [EIP 6780](https://eips.ethereum.org/EIPS/eip-6780#security-considerations) - [EIP 7516](https://eips.ethereum.org/EIPS/eip-7516#security-considerations) ## Open Questions No open questions. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-151: Use Current Block Pchain Height As Context (/docs/acps/151-use-current-block-pchain-height-as-context) --- title: "ACP-151: Use Current Block Pchain Height As Context" description: "Details for Avalanche Community Proposal 151: Use Current Block Pchain Height As Context" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/151-use-current-block-pchain-height-as-context/README.md --- | ACP | 151 | | :------------ | :----------------------------------------------------------------------------------------- | | **Title** | Use current block P-Chain height as context for state verification | | **Author(s)** | Ian Suvak ([@iansuvak](https://github.com/iansuvak)) | | **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/152)) | | **Track** | Standards | ## Abstract Proposes that the ProposerVM passes inner VMs the P-Chain block height of the current block being built rather than the P-Chain block height of the parent block. Inner VMs use this P-Chain height for verifying aggregated signatures of Avalanche Interchain Messages (ICM). This will allow for a more reliable way to determine which validators should participate in signing the message, and remove unnecessary waiting periods. ## Motivation Currently the ProposerVM passes the P-Chain height of the parent block to inner VMs, which use the value to verify ICM messages in the current block. Using the parent block's P-Chain height is necessary for verifying the proposer and reaching consensus on the current block, but it is not necessary for verifying ICM messages within the block. Using the P-Chain height of the current block being built would make operations that use ICM messages to modify the validator set, such as those specified in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md), verifiable sooner and more reliably.
Currently at least two new P-Chain blocks need to be produced after the relevant state change for it to be reflected for purposes of ICM aggregate signature verification. ## Specification The [block context](https://github.com/ava-labs/avalanchego/blob/d2e9d12ed2a1b6581b8fd414cbfb89a6cfa64551/snow/engine/snowman/block/block_context_vm.go#L14) contains a `PChainHeight` field that is passed from the ProposerVM to the inner VMs building the block. It is later used by the inner VMs to fetch the canonical validator set for verification of ICM aggregated signatures. The `PChainHeight` currently passed in by the ProposerVM is the P-Chain height of the parent block. The proposed change is to instead have the ProposerVM pass in the P-Chain height of the current block. ## Backwards Compatibility This change requires an upgrade to make sure that all validators verifying the validity of the ICM messages use the same P-Chain height and therefore the same validator set. Prior to activation, nodes should continue to use the P-Chain height of the parent block. ## Reference Implementation An implementation of this ACP for avalanchego can be found [here](https://github.com/ava-labs/avalanchego/pull/3459). ## Security Considerations The ProposerVM needs to use the parent block's P-Chain height to verify proposers for security reasons, but no such restriction applies to verifying ICM message validity in the current block being built. Therefore, this should be a safe change. ## Acknowledgments Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@michaelkaplan13](https://github.com/michaelkaplan13) for discussion and feedback on this ACP. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-176: Dynamic Evm Gas Limit And Price Discovery Updates (/docs/acps/176-dynamic-evm-gas-limit-and-price-discovery-updates)
---
title: "ACP-176: Dynamic Evm Gas Limit And Price Discovery Updates"
description: "Details for Avalanche Community Proposal 176: Dynamic Evm Gas Limit And Price Discovery Updates"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/176-dynamic-evm-gas-limit-and-price-discovery-updates/README.md
---
| ACP | 176 |
| :- | :- |
| **Title** | Dynamic EVM Gas Limits and Price Discovery Updates |
| **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/178)) |
| **Track** | Standards |
## Abstract
Proposes that the C-Chain and Subnet-EVM chains adopt a dynamic fee mechanism similar to the one [introduced on the P-Chain as part of ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md), with modifications to allow block proposers (i.e. validators) to dynamically adjust the target gas consumption per unit time.
## Motivation
Currently, the C-Chain has a static gas target of [15,000,000 gas](https://github.com/ava-labs/coreth/blob/39ec874505b42a44e452b8809a2cc6d09098e84e/params/avalanche_params.go#L32) per [10 second rolling window](https://github.com/ava-labs/coreth/blob/39ec874505b42a44e452b8809a2cc6d09098e84e/params/avalanche_params.go#L36), and uses a modified version of the [EIP-1559](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1559.md) dynamic fee mechanism to adjust the base fee of blocks based on the gas consumed in the previous 10 second window. This has two notable drawbacks:
1. The windower mechanism used to determine the base fee of blocks can lead to outsized spikes in the gas price when there is a large block.
This is because after a large block that uses its full gas limit, subsequent blocks in the same window continue to drive the gas price up even if they are relatively small blocks under the target gas consumption.
2. The static gas target requires a network upgrade to modify. This is cumbersome and makes it difficult for the network to adjust its capacity in response to performance optimizations or hardware requirement increases.
To better position Avalanche EVM chains, including the C-Chain, to handle future increases in load, we propose replacing the above mechanism with one that better handles blocks that consume a large amount of gas, and that allows validators to dynamically adjust the target rate of consumption.
## Specification
### Gas Price Determination
The mechanism to determine the base fee of a block is the same as the one used in [ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md) to determine the gas price of a block on the P-Chain. This mechanism calculates the gas price for a given block $b$ based on the following parameters:
---
We introduce "acceptance proofs" so that a peer can verify any block accepted by consensus. In the aforementioned use case, if a P-Chain block is unknown to a peer, it can request the block and its proof at the provided height from any peer. If the block's proof is valid, the block can be executed to advance the local P-Chain and verify the proposed subnet block. Nodes can request blocks from any peer without requiring local consensus or communication with a validator. This has the added benefit of reducing the number of required connections and the p2p message load served by P-Chain validators.
---
Figure 2: A Validator is verifying a subnet’s block `Z` which references an unknown P-Chain block `C` in its block header
Figure 3: A Validator requests the blocks and proofs for `B` and `C` from a peer
Figure 4: The Validator accepts the P-Chain blocks and is now able to verify `Z`
---
## Specification
Note: The following is pseudocode.
### P2P
#### Aggregation
```diff
+ message GetAcceptanceSignatureRequest {
+   bytes chain_id = 1;
+   uint32 request_id = 2;
+   bytes block_id = 3;
+ }
```
The `GetAcceptanceSignatureRequest` message is sent to a peer to request their signature for a given block ID.
```diff
+ message GetAcceptanceSignatureResponse {
+   bytes chain_id = 1;
+   uint32 request_id = 2;
+   bytes bls_signature = 3;
+ }
```
`GetAcceptanceSignatureResponse` is sent to a peer in response to a `GetAcceptanceSignatureRequest`. `bls_signature` is the peer's signature, using their registered primary network BLS staking key, over the requested `block_id`. An empty `bls_signature` field indicates that the block has not yet been accepted.
## Security Considerations
Nodes that bootstrap using state sync may not have the entire history of the P-Chain and therefore will not be able to provide the entire history for a block that is referenced in a block that they propose. This history would be needed to unblock a node attempting to fast-forward its P-Chain, since the node requires the entire ancestry between its current accepted tip and the block it is attempting to fast-forward to. It is assumed that nodes will have some minimum amount of recent state so that the requester can eventually be unblocked by retrying, as only one node with the requested ancestry is required to unblock the requester.
An alternative is to make a churn assumption and validate the proposed block's proof with a stale validator set to avoid complexity, but this introduces more security concerns.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-77: Reinventing Subnets (/docs/acps/77-reinventing-subnets)
---
title: "ACP-77: Reinventing Subnets"
description: "Details for Avalanche Community Proposal 77: Reinventing Subnets"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/77-reinventing-subnets/README.md
---
| ACP | 77 |
| :------------ | :---------------------------------------------------------------------------------------- |
| **Title** | Reinventing Subnets |
| **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/78)) |
| **Track** | Standards |
| **Replaces** | [ACP-13](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/13-subnet-only-validators/README.md) |
## Abstract
Overhaul Subnet creation and management to unlock increased flexibility for Subnet creators by:
- Separating Subnet validators from Primary Network validators (Primary Network Partial Sync, Removal of 2000 $AVAX requirement)
- Moving ownership of Subnet validator set management from P-Chain to Subnets (ERC-20/ERC-721/Arbitrary Staking, Staking Reward Management)
- Introducing a continuous P-Chain fee mechanism for Subnet validators (Continuous Subnet Staking)
This ACP supersedes [ACP-13](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/13-subnet-only-validators/README.md) and borrows some of its language.
## Motivation
Each node operator must stake at least 2000 $AVAX ($70k at time of writing) to first become a Primary Network validator before they qualify to become a Subnet validator. Most Subnets aim to launch with at least 8 Subnet validators, which requires staking 16000 $AVAX ($560k at time of writing). All Subnet validators, to satisfy their role as Primary Network validators, must also [allocate 8 AWS vCPU, 16 GB RAM, and 1 TB storage](https://github.com/ava-labs/avalanchego/blob/master/README.md#installation) to sync the entire Primary Network (X-Chain, P-Chain, and C-Chain) and participate in its consensus, in addition to whatever resources are required for each Subnet they are validating.
Regulated entities that are prohibited from validating permissionless, smart contract-enabled blockchains (like the C-Chain) cannot launch a Subnet because they cannot opt-out of Primary Network Validation. This deployment blocker prevents a large cohort of Real World Asset (RWA) issuers from bringing unique, valuable tokens to the Avalanche Ecosystem (that could move between C-Chain <-> Subnets using Avalanche Warp Messaging/Teleporter).
A widely validated Subnet that is not properly metered could destabilize the Primary Network if usage spikes unexpectedly. Underprovisioned Primary Network validators running such a Subnet may exit with an OOM exception, see degraded disk performance, or find it difficult to allocate CPU time to P/X/C-Chain validation. The inverse also holds for Subnets with the Primary Network (where some undefined behavior could bring a Subnet offline).
Although the fee paid to the Primary Network to operate a Subnet does not go up with the amount of activity on the Subnet, the fixed, upfront cost of setting up a Subnet validator on the Primary Network deters new projects that prefer smaller, even variable, costs until demand is observed. _Unlike L2s that pay some increasing fee (usually denominated in units per transaction byte) to an external chain for data availability and security as activity scales, Subnets provide their own security/data availability and the only cost operators must pay from processing more activity is the hardware cost of supporting additional load._
Elastic Subnets, introduced in [Banff](https://medium.com/avalancheavax/banff-elastic-subnets-44042f41e34c), enabled Subnet creators to activate Proof-of-Stake validation and uptime-based rewards using their own token. However, this token was required to be an ANT (created on the X-Chain) and locked on the P-Chain. All staking rewards were distributed on the P-Chain, with the reward curve defined in the `TransformSubnetTx`; once set, it could not be modified.
With no Elastic Subnets live on Mainnet, it is clear that Permissionless Subnets as they stand today could be more desirable. There are many successful Permissioned Subnets in production but many Subnet creators have raised the above as points of concern. In summary, the Avalanche community could benefit from a more flexible and affordable mechanism to launch Permissionless Subnets.
### A Note on Nomenclature
Avalanche Subnets are subnetworks validated by a subset of the Primary Network validator set. The new network creation flow outlined in this ACP does not require any intersection between the new network's validator set and the Primary Network's validator set. Moreover, the new networks have greater functionality and sovereignty than Subnets. To distinguish between these two kinds of networks, the community has been referring to these new networks as _Avalanche Layer 1s_, or L1s for short.
All networks created through the old network creation flow will continue to be referred to as Avalanche Subnets.
## Specification
At a high-level, L1s can manage their validator sets externally to the P-Chain by setting the blockchain ID and address of their _validator manager_. The P-Chain will consume Warp messages that modify the L1's validator set. To confirm modification of the L1's validator set, the P-Chain will also produce Warp messages. L1 validators are not required to validate the Primary Network, and do not have the same 2000 $AVAX stake requirement that Subnet validators have. To maintain an active L1 validator, a continuous fee denominated in $AVAX is assessed. L1 validators are only required to sync the P-Chain (not X/C-Chain) in order to track validator set changes and support cross-L1 communication.
### P-Chain Warp Message Payloads
To enable management of an L1's validator set externally to the P-Chain, Warp message verification will be added to the [`PlatformVM`](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm). For a Warp message to be considered valid by the P-Chain, at least 67% of the `sourceChainID`'s weight must have participated in the aggregate BLS signature. This is equivalent to the threshold set for the C-Chain. A future ACP may be proposed to support modification of this threshold on a per-L1 basis.
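As a concrete illustration, the 67% participation requirement reduces to a weight comparison. The following Go sketch is illustrative only (the `meetsQuorum` helper is our own name, not avalanchego code), and it sidesteps overflow concerns that a production implementation must handle for large total weights:

```go
package main

import "fmt"

// meetsQuorum sketches the ACP-77 Warp validity check: the aggregate BLS
// signature must cover at least 67% of the source chain's total validator
// weight. The comparison is rearranged to avoid integer-division truncation.
// Note: this sketch assumes the scaled weights fit in a uint64; a production
// implementation must guard against overflow.
func meetsQuorum(signedWeight, totalWeight uint64) bool {
	// Equivalent to signedWeight/totalWeight >= 67/100 without floats.
	return signedWeight*100 >= totalWeight*67
}

func main() {
	fmt.Println(meetsQuorum(67, 100)) // true: exactly at the threshold
	fmt.Println(meetsQuorum(66, 100)) // false: below the threshold
}
```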
The following Warp message payloads are introduced on the P-Chain:
- `SubnetToL1ConversionMessage`
- `RegisterL1ValidatorMessage`
- `L1ValidatorRegistrationMessage`
- `L1ValidatorWeightMessage`
The method of requesting signatures for these messages is left unspecified. A viable option for supporting this functionality is laid out in [ACP-118](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/118-warp-signature-request/README.md) with the `SignatureRequest` message.
All node IDs contained within the message specifications are represented as variable length arrays such that they can support new node ID types should the P-Chain add support for them in the future.
The serialization of each of these messages is as follows.
#### `SubnetToL1ConversionMessage`
The P-Chain can produce a `SubnetToL1ConversionMessage` for consumers (i.e. validator managers) to be aware of the initial validator set.
The following serialization is defined as the `ValidatorData`:
| Field | Type | Size |
| -------------: | ---------: | -----------------------: |
| `nodeID` | `[]byte` | 4 + len(`nodeID`) bytes |
| `blsPublicKey` | `[48]byte` | 48 bytes |
| `weight` | `uint64` | 8 bytes |
| | | 60 + len(`nodeID`) bytes |
The following serialization is defined as the `ConversionData`:
| Field | Type | Size |
| ---------------: | ----------------: | ---------------------------------------------------------: |
| `codecID` | `uint16` | 2 bytes |
| `subnetID` | `[32]byte` | 32 bytes |
| `managerChainID` | `[32]byte` | 32 bytes |
| `managerAddress` | `[]byte` | 4 + len(`managerAddress`) bytes |
| `validators` | `[]ValidatorData` | 4 + sum(`validatorLengths`) bytes |
| | | 74 + len(`managerAddress`) + sum(`validatorLengths`) bytes |
- `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000`
- `sum(validatorLengths)` is the sum of the lengths of `ValidatorData` serializations included in `validators`.
- `subnetID` identifies the Subnet that is being converted to an L1 (described further below).
- `managerChainID` and `managerAddress` identify the validator manager for the newly created L1. This is the (blockchain ID, address) tuple allowed to send Warp messages to modify the L1's validator set.
- `validators` are the initial continuous-fee-paying validators for the given L1.
The `SubnetToL1ConversionMessage` is specified as an `AddressedCall` with `sourceChainID` set to the P-Chain ID, the `sourceAddress` set to an empty byte array, and a payload of:
| Field | Type | Size |
| -------------: | ---------: | -------: |
| `codecID` | `uint16` | 2 bytes |
| `typeID` | `uint32` | 4 bytes |
| `conversionID` | `[32]byte` | 32 bytes |
| | | 38 bytes |
- `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000`
- `typeID` is the payload type identifier and is `0x00000000` for this message
- `conversionID` is the SHA256 hash of the `ConversionData` from a given `ConvertSubnetToL1Tx`
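The fixed offsets in the tables above can be checked mechanically. The sketch below packs a `ValidatorData` entry per the first table and hashes bytes with SHA256 as the `conversionID` derivation does; the `packValidatorData` helper and the big-endian integer encoding are our assumptions based on standard P-Chain codec conventions, not code from the spec:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// packValidatorData sketches the ValidatorData layout from the table:
// a 4-byte length-prefixed nodeID, a 48-byte BLS public key, and an
// 8-byte weight, totaling 60 + len(nodeID) bytes. Big-endian integers
// are assumed, matching the P-Chain codec convention.
func packValidatorData(nodeID []byte, blsPublicKey [48]byte, weight uint64) []byte {
	buf := make([]byte, 0, 60+len(nodeID))
	buf = binary.BigEndian.AppendUint32(buf, uint32(len(nodeID)))
	buf = append(buf, nodeID...)
	buf = append(buf, blsPublicKey[:]...)
	buf = binary.BigEndian.AppendUint64(buf, weight)
	return buf
}

func main() {
	nodeID := make([]byte, 20) // 20-byte node IDs are typical today
	var key [48]byte
	data := packValidatorData(nodeID, key, 100)
	fmt.Println(len(data)) // 60 + 20 = 80 bytes, as the table predicts

	// The conversionID is the SHA256 hash of the full ConversionData
	// serialization; a ValidatorData entry stands in as placeholder input.
	conversionID := sha256.Sum256(data)
	fmt.Println(len(conversionID)) // 32 bytes
}
```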
#### `RegisterL1ValidatorMessage`
The P-Chain can consume a `RegisterL1ValidatorMessage` from validator managers through a `RegisterL1ValidatorTx` to register an addition to the L1's validator set.
The following is the serialization of a `PChainOwner`:
| Field | Type | Size |
| ----------: | -----------: | -------------------------------: |
| `threshold` | `uint32` | 4 bytes |
| `addresses` | `[][20]byte` | 4 + len(`addresses`) \* 20 bytes |
| | | 8 + len(`addresses`) \* 20 bytes |
- `threshold` is the number of `addresses` that must provide a signature for the `PChainOwner` to authorize an action.
- Validation criteria:
- If `threshold` is `0`, `addresses` must be empty
- `threshold` <= len(`addresses`)
- Entries of `addresses` must be unique and sorted in ascending order
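The three validation criteria above can be expressed directly. The following Go sketch is illustrative (the `validPChainOwner` helper is our own, not part of the spec):

```go
package main

import (
	"bytes"
	"fmt"
)

// validPChainOwner sketches the PChainOwner validation rules: a zero
// threshold requires an empty address list, the threshold cannot exceed
// the address count, and addresses must be unique and sorted ascending.
func validPChainOwner(threshold uint32, addresses [][20]byte) bool {
	if threshold == 0 && len(addresses) != 0 {
		return false
	}
	if int(threshold) > len(addresses) {
		return false
	}
	for i := 1; i < len(addresses); i++ {
		// Strict ascending order also guarantees uniqueness.
		if bytes.Compare(addresses[i-1][:], addresses[i][:]) >= 0 {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(validPChainOwner(0, nil))                  // true: empty owner is allowed
	fmt.Println(validPChainOwner(2, [][20]byte{{1}}))      // false: threshold > len(addresses)
	fmt.Println(validPChainOwner(1, [][20]byte{{1}, {2}})) // true: sorted and unique
	fmt.Println(validPChainOwner(1, [][20]byte{{2}, {1}})) // false: not ascending
}
```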
The `RegisterL1ValidatorMessage` is specified as an `AddressedCall` with a payload of:
| Field | Type | Size |
| ----------------------: | ------------: | ------------------------------------------------------------------------: |
| `codecID` | `uint16` | 2 bytes |
| `typeID` | `uint32` | 4 bytes |
| `subnetID` | `[32]byte` | 32 bytes |
| `nodeID` | `[]byte` | 4 + len(`nodeID`) bytes |
| `blsPublicKey` | `[48]byte` | 48 bytes |
| `expiry` | `uint64` | 8 bytes |
| `remainingBalanceOwner` | `PChainOwner` | 8 + len(`addresses`) \* 20 bytes |
| `disableOwner` | `PChainOwner` | 8 + len(`addresses`) \* 20 bytes |
| `weight` | `uint64` | 8 bytes |
| | | 122 + len(`nodeID`) + (len(`addresses1`) + len(`addresses2`)) \* 20 bytes |
- `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000`
- `typeID` is the payload type identifier and is `0x00000001` for this payload
- `subnetID`, `nodeID`, `weight`, and `blsPublicKey` are for the validator being added
- `expiry` is the time at which this message becomes invalid. As of a P-Chain timestamp `>= expiry`, this Avalanche Warp Message can no longer be used to add the `nodeID` to the validator set of `subnetID`
- `remainingBalanceOwner` is the P-Chain owner to which any leftover $AVAX from the validator's Balance will be issued when this validator is removed from the validator set.
- `disableOwner` is the only P-Chain owner allowed to disable the validator using `DisableL1ValidatorTx`, specified below.
#### `L1ValidatorRegistrationMessage`
The P-Chain can produce an `L1ValidatorRegistrationMessage` for consumers to verify that a validation period has either begun or has been invalidated.
The `L1ValidatorRegistrationMessage` is specified as an `AddressedCall` with `sourceChainID` set to the P-Chain ID, the `sourceAddress` set to an empty byte array, and a payload of:
| Field | Type | Size |
| -------------: | ---------: | -------: |
| `codecID` | `uint16` | 2 bytes |
| `typeID` | `uint32` | 4 bytes |
| `validationID` | `[32]byte` | 32 bytes |
| `registered` | `bool` | 1 byte |
| | | 39 bytes |
- `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000`
- `typeID` is the payload type identifier and is `0x00000002` for this message
- `validationID` identifies the validator for the message
- `registered` is a boolean representing the status of the `validationID`. If true, the `validationID` corresponds to a validator in the current validator set. If false, the `validationID` does not correspond to a validator in the current validator set, and never will in the future.
#### `L1ValidatorWeightMessage`
The P-Chain can consume an `L1ValidatorWeightMessage` through a `SetL1ValidatorWeightTx` to update the weight of an existing validator. The P-Chain can also produce an `L1ValidatorWeightMessage` for consumers to verify that the validator weight update has been effectuated.
The `L1ValidatorWeightMessage` is specified as an `AddressedCall` with the following payload. When sent from the P-Chain, the `sourceChainID` is set to the P-Chain ID, and the `sourceAddress` is set to an empty byte array.
| Field | Type | Size |
| -------------: | ---------: | -------: |
| `codecID` | `uint16` | 2 bytes |
| `typeID` | `uint32` | 4 bytes |
| `validationID` | `[32]byte` | 32 bytes |
| `nonce` | `uint64` | 8 bytes |
| `weight` | `uint64` | 8 bytes |
| | | 54 bytes |
- `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000`
- `typeID` is the payload type identifier and is `0x00000003` for this message
- `validationID` identifies the validator for the message
- `nonce` is a strictly increasing number that denotes the latest validator weight update and provides replay protection for this transaction
- `weight` is the new `weight` of the validator
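The 54-byte payload layout above can be sketched as a flat encoding. The `packL1ValidatorWeightMessage` helper below is illustrative, with big-endian integers assumed per the P-Chain codec convention:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// packL1ValidatorWeightMessage sketches the payload layout in the table:
// codecID 0x0000, typeID 0x00000003, a 32-byte validationID, then the
// 8-byte nonce and 8-byte weight, for 54 bytes total.
func packL1ValidatorWeightMessage(validationID [32]byte, nonce, weight uint64) []byte {
	buf := make([]byte, 0, 54)
	buf = binary.BigEndian.AppendUint16(buf, 0x0000)     // codecID
	buf = binary.BigEndian.AppendUint32(buf, 0x00000003) // typeID
	buf = append(buf, validationID[:]...)
	buf = binary.BigEndian.AppendUint64(buf, nonce)
	buf = binary.BigEndian.AppendUint64(buf, weight)
	return buf
}

func main() {
	var validationID [32]byte
	payload := packL1ValidatorWeightMessage(validationID, 1, 500)
	fmt.Println(len(payload)) // 54 bytes, matching the table total
}
```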
### New P-Chain Transaction Types
Both before and after this ACP, to create a Subnet, a `CreateSubnetTx` must be issued on the P-Chain. This transaction includes an `Owner` field which defines the key that today can be used to authorize any validator set additions (`AddSubnetValidatorTx`) or removals (`RemoveSubnetValidatorTx`).
To be considered a permissionless network, or Avalanche Layer 1:
- This `Owner` key must no longer have the ability to modify the validator set.
- New transaction types must support modification of the validator set via Warp messages.
The following new transaction types are introduced on the P-Chain to support this functionality:
- `ConvertSubnetToL1Tx`
- `RegisterL1ValidatorTx`
- `SetL1ValidatorWeightTx`
- `DisableL1ValidatorTx`
- `IncreaseL1ValidatorBalanceTx`
#### `ConvertSubnetToL1Tx`
To convert a Subnet into an L1, a `ConvertSubnetToL1Tx` must be issued to set the `(chainID, address)` pair that will manage the L1's validator set. The `Owner` key defined in `CreateSubnetTx` must provide a signature to authorize this conversion.
The `ConvertSubnetToL1Tx` specification is:
```go
type PChainOwner struct {
// The threshold number of `Addresses` that must provide a signature in order for
// the `PChainOwner` to be considered valid.
Threshold uint32 `json:"threshold"`
// The 20-byte addresses that are allowed to sign to authenticate a `PChainOwner`.
// Note: It is required for:
// - len(Addresses) == 0 if `Threshold` is 0.
// - len(Addresses) >= `Threshold`
// - The values in Addresses to be sorted in ascending order.
Addresses []ids.ShortID `json:"addresses"`
}
type L1Validator struct {
// NodeID of this validator
NodeID []byte `json:"nodeID"`
// Weight of this validator used when sampling
Weight uint64 `json:"weight"`
// Initial balance for this validator
Balance uint64 `json:"balance"`
// [Signer] is the BLS public key and proof-of-possession for this validator.
// Note: We do not enforce that the BLS key is unique across all validators.
// This means that validators can share a key if they so choose.
// However, a NodeID + L1 does uniquely map to a BLS key
Signer signer.ProofOfPossession `json:"signer"`
// Leftover $AVAX from the [Balance] will be issued to this
// owner once it is removed from the validator set.
RemainingBalanceOwner PChainOwner `json:"remainingBalanceOwner"`
// The only owner allowed to disable this validator on the P-Chain.
DisableOwner PChainOwner `json:"disableOwner"`
}
type ConvertSubnetToL1Tx struct {
// Metadata, inputs and outputs
BaseTx
// ID of the Subnet to transform
// Restrictions:
// - Must not be the Primary Network ID
Subnet ids.ID `json:"subnetID"`
// BlockchainID where the validator manager lives
ChainID ids.ID `json:"chainID"`
// Address of the validator manager
Address []byte `json:"address"`
// Initial continuous-fee-paying validators for the L1
Validators []L1Validator `json:"validators"`
// Authorizes this conversion
SubnetAuth verify.Verifiable `json:"subnetAuthorization"`
}
```
After this transaction is accepted, `CreateChainTx` and `AddSubnetValidatorTx` are disabled on the Subnet. The only action that the `Owner` key is able to take is removing Subnet validators with `RemoveSubnetValidatorTx` that had been added using `AddSubnetValidatorTx`. Unless removed by the `Owner` key, any Subnet validators added previously with an `AddSubnetValidatorTx` will continue to validate the Subnet until their [`End`](https://github.com/ava-labs/avalanchego/blob/a1721541754f8ee23502b456af86fea8c766352a/vms/platformvm/txs/validator.go#L27) time is reached. Once all Subnet validators added with `AddSubnetValidatorTx` are no longer in the validator set, the `Owner` key is powerless. `RegisterL1ValidatorTx` and `SetL1ValidatorWeightTx` must be used to manage the L1's validator set.
The `validationID` for validators added through `ConvertSubnetToL1Tx` is defined as the SHA256 hash of the 36 bytes resulting from concatenating the 32 byte `subnetID` with the 4 byte `validatorIndex` (index in the `Validators` array within the transaction).
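The derivation above can be sketched in a few lines; big-endian encoding of the 4-byte `validatorIndex` is our assumption, consistent with P-Chain codec conventions:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// conversionValidationID sketches the validationID derivation for validators
// added via ConvertSubnetToL1Tx: the SHA256 hash of the 32-byte subnetID
// concatenated with the 4-byte index of the validator in the transaction's
// Validators array (36 bytes of preimage in total).
func conversionValidationID(subnetID [32]byte, validatorIndex uint32) [32]byte {
	preimage := make([]byte, 0, 36)
	preimage = append(preimage, subnetID[:]...)
	preimage = binary.BigEndian.AppendUint32(preimage, validatorIndex)
	return sha256.Sum256(preimage)
}

func main() {
	var subnetID [32]byte
	// Distinct indices yield distinct validationIDs for the same Subnet.
	fmt.Println(conversionValidationID(subnetID, 0) != conversionValidationID(subnetID, 1)) // true
}
```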
Once this transaction is accepted, the P-Chain must be willing to sign a `SubnetToL1ConversionMessage` with a `conversionID` corresponding to the `ConversionData` populated with the values from this transaction.
#### `RegisterL1ValidatorTx`
After a `ConvertSubnetToL1Tx` has been accepted, new validators can only be added by using a `RegisterL1ValidatorTx`. The specification of this transaction is:
```go
type RegisterL1ValidatorTx struct {
// Metadata, inputs and outputs
BaseTx
// Balance <= sum($AVAX inputs) - sum($AVAX outputs) - TxFee.
Balance uint64 `json:"balance"`
// [Signer] is a BLS signature proving ownership of the BLS public key specified
// below in `Message` for this validator.
// Note: We do not enforce that the BLS key is unique across all validators.
// This means that validators can share a key if they so choose.
// However, a NodeID + L1 does uniquely map to a BLS key
Signer [96]byte `json:"signer"`
// A RegisterL1ValidatorMessage payload
Message warp.Message `json:"message"`
}
```
The `validationID` of validators added via `RegisterL1ValidatorTx` is defined as the SHA256 hash of the `Payload` of the `AddressedCall` in `Message`.
When a `RegisterL1ValidatorTx` is accepted on the P-Chain, the validator is added to the L1's validator set. A `minNonce` field corresponding to the `validationID` will be stored on addition to the validator set (initially set to `0`). This field will be used when validating the `SetL1ValidatorWeightTx` defined below.
This `validationID` will be used for replay protection. Used `validationID`s will be stored on the P-Chain. If a `RegisterL1ValidatorTx`'s `validationID` has already been used, the transaction will be considered invalid. To prevent storing an unbounded number of `validationID`s, the `expiry` of the `RegisterL1ValidatorMessage` is required to be no more than 24 hours after the time the transaction is issued on the P-Chain. Any `validationID`s corresponding to an expired timestamp can be flushed from the P-Chain's state.
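The 24-hour bound above can be sketched as a simple window check. The `validExpiry` helper is illustrative, and the exact boundary semantics (strict vs. inclusive comparisons) are our assumption, not the spec's:

```go
package main

import (
	"fmt"
	"time"
)

// validExpiry sketches the replay-protection bound: the expiry in a
// RegisterL1ValidatorMessage must still be in the future, but no more
// than 24 hours ahead of the chain time at which the transaction is
// issued. Timestamps are Unix seconds.
func validExpiry(expiry, chainTime uint64) bool {
	const maxWindow = uint64(24 * time.Hour / time.Second) // 86400 seconds
	return expiry > chainTime && expiry-chainTime <= maxWindow
}

func main() {
	now := uint64(1_700_000_000)
	fmt.Println(validExpiry(now+3600, now))  // true: one hour ahead
	fmt.Println(validExpiry(now+86401, now)) // false: beyond the 24h window
	fmt.Println(validExpiry(now-1, now))     // false: already expired
}
```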
L1s are responsible for defining the procedure on how to retrieve the above information from prospective validators.
An EVM-compatible L1 may choose to implement this step like so:
- Use the number of tokens the user has staked into a smart contract on the L1 to determine the weight of their validator
- Require the user to submit an on-chain transaction with their validator information
- Generate the Warp message
For a `RegisterL1ValidatorTx` to be valid, `Signer` must be a valid proof-of-possession of the `blsPublicKey` defined in the `RegisterL1ValidatorMessage` contained in the transaction.
After a `RegisterL1ValidatorTx` is accepted, the P-Chain must be willing to sign an `L1ValidatorRegistrationMessage` for the given `validationID` with `registered` set to `true`. This remains the case until the time at which the validator is removed from the validator set using a `SetL1ValidatorWeightTx`, as described below.
When it is known that a given `validationID` _is not and never will be_ registered, the P-Chain must be willing to sign an `L1ValidatorRegistrationMessage` for the `validationID` with `registered` set to `false`. This could be the case if the `expiry` time of the message has passed prior to the message being delivered in a `RegisterL1ValidatorTx`, or if the validator was successfully registered and then later removed. This enables the P-Chain to prove to validator managers that a validator has been removed or never added. The P-Chain must refuse to sign any `L1ValidatorRegistrationMessage` where the `validationID` does not correspond to an active validator and the `expiry` is in the future.
#### `SetL1ValidatorWeightTx`
`SetL1ValidatorWeightTx` is used to modify the voting weight of a validator. The specification of this transaction is:
```go
type SetL1ValidatorWeightTx struct {
// Metadata, inputs and outputs
BaseTx
// An L1ValidatorWeightMessage payload
Message warp.Message `json:"message"`
}
```
Applications of this transaction could include:
- Increase the voting weight of a validator if a delegation is made on the L1
- Increase the voting weight of a validator if the stake amount is increased (by staking rewards for example)
- Decrease the voting weight of a misbehaving validator
- Remove an inactive validator
The validation criteria for `L1ValidatorWeightMessage` is:
- `nonce >= minNonce`. Note that `nonce` is not required to be incremented by `1` with each successive validator weight update.
- When `minNonce == MaxUint64`, `nonce` must be `MaxUint64` and `weight` must be `0`. This prevents L1s from being unable to remove `nodeID` in a subsequent transaction.
- If `weight == 0`, the validator being removed must not be the last one in the set. If all validators are removed, there are no valid Warp messages that can be produced to register new validators through `RegisterL1ValidatorMessage`. With no validators, block production will halt and the L1 is unrecoverable. This validation criteria serves as a guardrail against this situation. A future ACP can remove this guardrail as users get more familiar with the new L1 mechanics and tooling matures to fork an L1.
When `weight != 0`, the weight of the validator is updated to `weight` and `minNonce` is updated to `nonce + 1`.
When `weight == 0`, the validator is removed from the validator set. All state related to the validator, including the `minNonce` and `validationID`, are reaped from the P-Chain state. Tracking these post-removal is not required since `validationID` can never be re-initialized due to the replay protection provided by `expiry` in `RegisterL1ValidatorTx`. Any unspent $AVAX in the validator's `Balance` will be issued in a single UTXO to the `RemainingBalanceOwner` for this validator. Recall that `RemainingBalanceOwner` is specified when the validator is first added to the L1's validator set (in either `ConvertSubnetToL1Tx` or `RegisterL1ValidatorTx`).
Note: There is no explicit `EndTime` for L1 validators added in a `ConvertSubnetToL1Tx` or `RegisterL1ValidatorTx`. The only time when L1 validators are removed from the L1's validator set is through this transaction when `weight == 0`.
#### `DisableL1ValidatorTx`
L1 validators can use `DisableL1ValidatorTx` to mark their validator as inactive. The specification of this transaction is:
```go
type DisableL1ValidatorTx struct {
// Metadata, inputs and outputs
BaseTx
// ID corresponding to the validator
ValidationID ids.ID `json:"validationID"`
// Authorizes this validator to be disabled
DisableAuth verify.Verifiable `json:"disableAuthorization"`
}
```
The `DisableOwner` specified for this validator must sign the transaction. Any unspent $AVAX in the validator's `Balance` will be issued in a single UTXO to the `RemainingBalanceOwner` for this validator. Recall that both `DisableOwner` and `RemainingBalanceOwner` are specified when the validator is first added to the L1's validator set (in either `ConvertSubnetToL1Tx` or `RegisterL1ValidatorTx`).
For full removal from an L1's validator set, a `SetL1ValidatorWeightTx` must be issued with weight `0`, which requires a Warp message from the L1's validator manager. However, supporting the ability to claim a validator's unspent `Balance` without the manager's authorization is critical for failed L1s.
Note that this does not modify an L1's total staking weight. This transaction marks the validator as inactive, but does not remove it from the L1's validator set. Inactive validators can re-activate at any time by increasing their balance with an `IncreaseL1ValidatorBalanceTx`.
L1 creators should be aware that there is no notion of `MinStakeDuration` that is enforced by the P-Chain. It is expected that L1s who choose to enforce a `MinStakeDuration` will lock the validator's Stake for the L1's desired `MinStakeDuration`.
#### `IncreaseL1ValidatorBalanceTx`
L1 validators are required to maintain a non-zero balance used to pay the continuous fee on the P-Chain in order to be considered active. The `IncreaseL1ValidatorBalanceTx` can be used by anybody to add additional $AVAX to the `Balance` of a validator. The specification of this transaction is:
```go
type IncreaseL1ValidatorBalanceTx struct {
	// Metadata, inputs and outputs
	BaseTx

	// ID corresponding to the validator
	ValidationID ids.ID `json:"validationID"`

	// Balance <= sum($AVAX inputs) - sum($AVAX outputs) - TxFee
	Balance uint64 `json:"balance"`
}
```
If the validator corresponding to `ValidationID` is currently inactive (`Balance` was exhausted or `DisableL1ValidatorTx` was issued), this transaction will move them back to the active validator set.
Note: The $AVAX added to `Balance` can be claimed at any time by the validator using `DisableL1ValidatorTx`.
### Bootstrapping L1 Nodes
Bootstrapping a node/validator is the process of securely recreating the latest state of the blockchain locally. At the end of this process, the local state of a node/validator must be in sync with the local state of other virtuous nodes/validators. The node/validator can then verify new incoming transactions and reach consensus with other nodes/validators.
To bootstrap a node/validator, a few critical questions must be answered: How does one discover peers in the network? How does one determine that a discovered peer is honestly participating in the network?
For standalone networks like the Avalanche Primary Network, this is done by connecting to a hardcoded [set](https://github.com/ava-labs/avalanchego/blob/master/genesis/bootstrappers.json) of trusted bootstrappers to then discover new peers. Ethereum calls their set [bootnodes](https://ethereum.org/developers/docs/nodes-and-clients/bootnodes).
Since L1 validators are not required to be Primary Network validators, a list of validator IPs to connect to (the functional bootstrappers of the L1) cannot be provided by simply connecting to the Primary Network validators. However, the Primary Network can enable nodes tracking an L1 to seamlessly connect to the validators by tracking and gossiping L1 validator IPs. L1s will not need to operate and maintain a set of bootstrappers and can rely on the Primary Network for peer discovery.
### Sidebar: L1 Sovereignty
After this ACP is activated, the P-Chain will no longer support staking of any assets other than $AVAX for the Primary Network. The P-Chain will not support the distribution of staking rewards for L1s. All staking-related operations for L1 validation must be managed by the L1's validator manager. The P-Chain simply requires a continuous fee per validator. If an L1 would like to manage their validator's balances on the P-Chain, it can cover the cost for all L1 validators by posting the $AVAX balance on the P-Chain. L1s can implement any mechanism they want to pay the continuous fee charged by the P-Chain for its participants.
The L1 has full ownership over its validator set, not the P-Chain. There are no restrictions on what requirements an L1 can have for validators to join. Any stake that is required to join the L1's validator set is not locked on the P-Chain. If a validator is removed from the L1's validator set via a `SetL1ValidatorWeightTx` with weight `0`, the stake will continue to be locked outside of the P-Chain. How each L1 handles stake associated with the validator is entirely left up to the L1 and can be treated independently to what happens on the P-Chain.
The relationship between the P-Chain and L1s provides a dynamic where L1s can use the P-Chain as an impartial judge to modify parameters (in addition to its existing role of helping to validate incoming Avalanche Warp Messages). If a validator is misbehaving, the L1 validators can collectively generate a BLS multisig to reduce the voting weight of a misbehaving validator. This operation is fully secured by the Avalanche Primary Network (225M $AVAX or $8.325B at the time of writing).
Follow-up ACPs could extend the P-Chain <-> L1 relationship to include parametrization of the 67% threshold to enable L1s to choose a different threshold based on their security model (e.g. a simple majority of 51%).
### Continuous Fee Mechanism
Every additional validator on the P-Chain adds persistent load to the Avalanche Network. When a validator transaction is issued on the P-Chain, it is charged for the computational cost of the transaction itself but is not charged for the cost of an active validator over the time they are validating on the network (which may be indefinitely). This is a common problem in blockchains, spawning many state rent proposals in the broader blockchain space to address it. The following fee mechanism takes advantage of the fact that each L1 validator uses the same amount of computation and charges each L1 validator the dynamic base fee for every discrete unit of time it is active.
To charge each L1 validator, the notion of a `Balance` is introduced. The `Balance` of a validator will be continuously charged during the time they are active to cover the cost of storing the associated validator properties (BLS key, weight, nonce) in memory and to track IPs (in addition to other services provided by the Primary Network). This `Balance` is initialized with the `RegisterL1ValidatorTx` that added them to the active validator set. `Balance` can be increased at any time using the `IncreaseL1ValidatorBalanceTx`. When this `Balance` reaches `0`, the validator will be considered "inactive" and will no longer participate in validating the L1. Inactive validators can be moved back to the active validator set at any time using the same `IncreaseL1ValidatorBalanceTx`. Once a validator is considered inactive, the P-Chain will remove these properties from memory and only retain them on disk. All messages from that validator will be considered invalid until it is revived using the `IncreaseL1ValidatorBalanceTx`. L1s can reduce the amount of inactive weight by removing inactive validators with the `SetL1ValidatorWeightTx` (`Weight` = 0).
Since each L1 validator is charged the same amount at each point in time, tracking the fees for the entire validator set is straightforward. The accumulated dynamic base fee for the entire network is tracked in a single uint. This accumulated value should be equal to the fee charged if a validator was active from the time the accumulator was instantiated. The validator set is maintained in a priority queue. A pseudocode implementation of the continuous fee mechanism is provided below.
```python
# Pseudocode
class ValidatorQueue:
    def __init__(self, fee_getter):
        self.acc = 0
        self.queue = PriorityQueue()
        self.fee_getter = fee_getter

    # At each time period, increment the accumulator and
    # pop all validators from the top of the queue that
    # ran out of funds.
    #
    # Note: The amount of work done in a single block
    # should be bounded to prevent a large number of
    # validator operations from happening at the same
    # time.
    def time_elapse(self, t):
        self.acc = self.acc + self.fee_getter(t)
        while True:
            vdr = self.queue.peek()
            if vdr.balance < self.acc:
                self.queue.pop()
                continue
            return

    # Validator was added
    def validator_enter(self, vdr):
        vdr.balance = vdr.balance + self.acc
        self.queue.add(vdr)

    # Validator was removed
    def validator_remove(self, vdrNodeID):
        vdr = find_and_remove(self.queue, vdrNodeID)
        vdr.balance = vdr.balance - self.acc
        vdr.refund()  # Refund [vdr.balance] to [RemainingBalanceOwner]

    # Validator's balance was topped up
    def validator_increase(self, vdrNodeID, balance):
        vdr = find_and_remove(self.queue, vdrNodeID)
        vdr.balance = vdr.balance + balance
        self.queue.add(vdr)
```
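The pseudocode above is not directly runnable (`PriorityQueue`, `find_and_remove`, and the `vdr` type are left abstract). A minimal runnable sketch of the same accumulator trick, with illustrative names and a fixed per-second fee rate, could look like:

```python
import heapq

class MiniValidatorQueue:
    """Runnable sketch (not the spec): every active validator is charged
    the same per-second fee, so only one global accumulator is updated
    each second instead of every validator's balance."""

    def __init__(self, fee_per_second):
        self.acc = 0
        self.fee_per_second = fee_per_second
        self.heap = []  # (adjusted_balance, node_id)

    def validator_enter(self, node_id, balance):
        # Store the balance shifted by the current accumulator so the
        # real balance is always adjusted_balance - self.acc.
        heapq.heappush(self.heap, (balance + self.acc, node_id))

    def time_elapse(self, seconds):
        # Advance the accumulator and evict validators that ran dry.
        self.acc += self.fee_per_second * seconds
        evicted = []
        while self.heap and self.heap[0][0] < self.acc:
            _, node_id = heapq.heappop(self.heap)
            evicted.append(node_id)
        return evicted

    def balance_of(self, node_id):
        for adjusted, nid in self.heap:
            if nid == node_id:
                return adjusted - self.acc
        return None
```

For example, with a fee of 512 per second, a validator funded for 100 seconds is evicted after 200 seconds elapse, while one funded for 1000 seconds remains with its balance correctly reduced.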
#### Fee Algorithm
[ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md) proposes a dynamic fee mechanism for transactions on the P-Chain. This mechanism is repurposed with minor modifications for the active L1 validator continuous fee.
At activation, the number of excess active L1 validators $x$ is set to `0`.
The fee rate per second for an active L1 validator is:
$$M \cdot \exp\left(\frac{x}{K}\right)$$
Where:
- $M$ is the minimum price for an active L1 validator
- $\exp\left(x\right)$ is an approximation of $e^x$ following the EIP-4844 specification
```python
# Approximates factor * e ** (numerator / denominator) using Taylor expansion
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator
```
- $K$ is a constant to control the rate of change for the L1 validator price
After every second, $x$ will be updated:
$$x = \max(x + (V - T), 0)$$
Where:
- $V$ is the number of active L1 validators
- $T$ is the target number of active L1 validators
Whenever $x$ increases by $K$, the price per active L1 validator increases by a factor of `~2.7`. If the price per active L1 validator gets too expensive, some active L1 validators will exit the active validator set, decreasing $x$, dropping the price. The price per active L1 validator constantly adjusts to make sure that, on average, the P-Chain has no more than $T$ active L1 validators.
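As a sanity check on the ~2.7 factor, `fake_exponential` can be evaluated at the activation parameters (the helper is repeated here so the sketch is self-contained):

```python
# EIP-4844-style approximation of factor * e ** (numerator / denominator),
# repeated from above for self-containment.
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

M = 512            # minimum price, nAVAX/s (activation parameter)
K = 1_246_488_515  # rate-of-change constant (activation parameter)

base = fake_exponential(M, 0, K)    # x == 0: fee sits at the minimum M
bumped = fake_exponential(M, K, K)  # x == K: fee multiplied by ~e ≈ 2.718
```

At `x = 0` the fee rate is exactly `M`; after `x` grows by `K`, the rate is roughly `e` times larger.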
#### Block Processing
Before processing the transactions inside a block, all validators that no longer have a sufficient (non-zero) balance are deactivated.
After processing the transactions inside a block, all validators that do not have a sufficient balance for the next second are deactivated.
##### Block Timestamp Validity Change
To ensure that validators are charged accurately, blocks are only considered valid if advancing the chain time would not cause a validator to have a negative balance.
This upholds the expectation that the number of L1 validators remains constant between blocks.
The block building protocol is modified to account for this change by first checking if the wall clock time removes any validator due to a lack of funds. If the wall clock time does not remove any L1 validators, the wall clock time is used to build the block. If it does, the time at which the first validator gets removed is used.
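A sketch of this timestamp selection, assuming for simplicity a constant per-second fee rate over the interval (in practice the rate varies with the excess $x$); the helper name is illustrative, not part of the spec:

```python
def choose_block_timestamp(wall_clock, parent_time, balances, fee_rate):
    """Use the wall clock unless advancing that far would make some
    validator's balance go negative; otherwise stop at the moment the
    first validator runs out of funds.

    balances: remaining balances of active L1 validators (nAVAX)
    fee_rate: assumed constant per-second fee (nAVAX/s)
    """
    # Whole seconds each validator can still afford.
    seconds_to_exhaustion = min(b // fee_rate for b in balances)
    first_removal = parent_time + seconds_to_exhaustion
    return min(wall_clock, first_removal)
```

For example, with balances covering 1000 and 100 seconds respectively, a wall clock 1000 seconds ahead of the parent is clamped to the 100-second mark, where the first validator is removed.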
##### Fee Calculation
The total validator fee assessed in $\Delta t$ is:
```python
# Calculate the fee to charge over Δt
def cost_over_time(V: int, T: int, x: int, Δt: int) -> int:
    cost = 0
    for _ in range(Δt):
        x = max(x + V - T, 0)
        cost += fake_exponential(M, x, K)
    return cost
```
#### Parameters
The parameters at activation are:
| Parameter | Definition | Value |
| --------- | ------------------------------------------- | ------------- |
| $T$ | target number of validators | 10_000 |
| $C$ | capacity number of validators | 20_000 |
| $M$ | minimum fee rate | 512 nAVAX/s |
| $K$ | constant to control the rate of fee changes | 1_246_488_515 |
An $M$ of 512 nAVAX/s equates to ~1.33 AVAX/month to run an L1 validator, so long as the total number of continuous-fee-paying L1 validators stays at or below $T$.
$K$ was chosen to set the maximum fee doubling rate to ~24 hours. This is in the extreme case that the network has $C$ validators for prolonged periods of time; if the network has $T$+1 validators for example, the fee rate would double every ~27 years.
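Both figures can be sanity-checked numerically with the helpers above (repeated here so the sketch is self-contained):

```python
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

M = 512            # nAVAX/s
K = 1_246_488_515
T = 10_000

def cost_over_time(V: int, T: int, x: int, dt: int) -> int:
    cost = 0
    for _ in range(dt):
        x = max(x + V - T, 0)
        cost += fake_exponential(M, x, K)
    return cost

# At or below target, the fee sits at the 512 nAVAX/s floor:
day = cost_over_time(V=T, T=T, x=0, dt=86_400)
month = 30 * day  # 1_327_104_000 nAVAX ≈ 1.33 AVAX per month

# At capacity (C = 20_000), excess grows by 10_000 per second, so one
# day of excess multiplies the per-second fee by roughly 2:
doubled = fake_exponential(M, 10_000 * 86_400, K)
```

Note that $K \cdot \ln 2 \approx 10{,}000 \cdot 86{,}400$, which is exactly the excess accumulated over 24 hours at capacity, hence the ~24-hour doubling time.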
A future ACP can adjust the parameters to increase $T$, reduce $M$, and/or modify $K$.
#### User Experience
L1 validators are continuously charged a fee, albeit a small one. This poses a challenge for L1 validators: How do they maintain the balance over time?
Node clients should expose an API to track how much balance remains in the validator's account. This will provide a way for L1 validators to track how quickly it is going down and top up when needed. A nice byproduct of the above design is that the balance in the validator's account is claimable. This means users can top up as much $AVAX as they want and rest assured knowing they can always retrieve it if there is an excessive amount.
The expectation is that most users will not interact with node clients or track when or how much they need to top-up their validator account. Wallet providers will abstract away most of this process. For users who desire more convenience, L1-as-a-Service providers will abstract away all of this process.
## Backwards Compatibility
This new design for Subnets proposes a large rework to all L1-related mechanics. Rollout should be done on a going-forward basis to not cause any service disruption for live Subnets. All current Subnet validators will be able to continue validating both the Primary Network and whatever Subnets they are validating.
Any state execution changes must be coordinated through a mandatory upgrade. Implementors must take care to continue to verify the existing ruleset until the upgrade is activated. After activation, nodes should verify the new ruleset. Implementors must take care to only verify the presence of 2000 $AVAX prior to activation.
### Deactivated Transactions
- P-Chain
- `TransformSubnetTx`
After this ACP is activated, Elastic Subnets will be disabled. `TransformSubnetTx` will not be accepted post-activation. As there are no Mainnet Elastic Subnets, there should be no production impact with this deactivation.
### New Transactions
- P-Chain
- `ConvertSubnetToL1Tx`
- `RegisterL1ValidatorTx`
- `SetL1ValidatorWeightTx`
- `DisableL1ValidatorTx`
- `IncreaseL1ValidatorBalanceTx`
## Reference Implementation
ACP-77 was implemented and will be merged into AvalancheGo behind the `Etna` upgrade flag. The full body of work can be found tagged with the `acp77` label [here](https://github.com/ava-labs/avalanchego/issues?q=sort%3Aupdated-desc+label%3Aacp77).
Since Etna is not yet activated, all new transactions introduced in ACP-77 will be rejected by AvalancheGo. If any modifications are made to ACP-77 as part of the ACP process, the implementation must be updated prior to activation.
## Security Considerations
This ACP introduces Avalanche Layer 1s, a new network type that costs significantly less than Avalanche Subnets. This can lead to a large increase in the number of networks and, by extension, the number of validators. Each additional validator adds consistent RAM usage to the P-Chain. However, this should be appropriately metered by the continuous fee mechanism outlined above.
With the sovereignty L1s have from the P-Chain, L1 staking tokens are not locked on the P-Chain. This poses a security consideration for L1 validators: Malicious chains can choose to remove validators at will and take any funds that the validator has locked on the L1. The P-Chain only provides the guarantee that L1 validators can retrieve the remaining $AVAX `Balance` for their validator via a `DisableL1ValidatorTx`. Any assets on the L1 are entirely under the purview of the L1. The onus is on L1 validators to vet the L1's security for any assets transferred onto the L1.
With a long window of expiry (24 hours) for the Warp message in `RegisterL1ValidatorTx`, spam of validator registration could lead to high memory pressure on the P-Chain. A future ACP can reduce the window of expiry if 24 hours proves to be a problem.
NodeIDs can be added to an L1's validator set involuntarily. However, it is important to note that any stake/rewards are _not_ at risk. For a node operator who was added to a validator set involuntarily, they would only need to generate a new NodeID via key rotation as there is no lock-up of any stake to create a NodeID. This is an explicit tradeoff for easier on-boarding of NodeIDs. This mirrors the Primary Network validators guarantee of no stake/rewards at risk.
The continuous fee mechanism outlined above does not apply to inactive L1 validators since they are not stored in memory. However, inactive L1 validators are persisted on disk which can lead to persistent P-Chain state growth. A future ACP can introduce a mechanism to decrease the rate of P-Chain state growth or provide a state expiry path to reduce the amount of P-Chain state.
## Acknowledgements
Special thanks to [@StephenButtolph](https://github.com/StephenButtolph), [@aaronbuchwald](https://github.com/aaronbuchwald), and [@patrick-ogrady](https://github.com/patrick-ogrady) for their feedback on these ideas. Thank you to the broader Ava Labs Platform Engineering Group for their feedback on this ACP prior to publication.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-83: Dynamic Multidimensional Fees (/docs/acps/83-dynamic-multidimensional-fees)
---
title: "ACP-83: Dynamic Multidimensional Fees"
description: "Details for Avalanche Community Proposal 83: Dynamic Multidimensional Fees"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/83-dynamic-multidimensional-fees/README.md
---
| ACP | 83 |
| :--- | :--- |
| **Title** | Dynamic multidimensional fees for P-chain and X-chain |
| **Author(s)** | Alberto Benegiamo ([@abi87](https://github.com/abi87)) |
| **Status** | Stale |
| **Track** | Standards |
| **Superseded-By** | [ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md) |
## Abstract
Introduce a dynamic and multidimensional fees scheme for the P-chain and X-chain.
Dynamic fees help preserve the stability of the chain by providing a feedback mechanism that increases the cost of resources when the network operates above its target utilization.
Multidimensional fees ensure that high demand for orthogonal resources does not drive up the price of underutilized resources. For example, networks provide and consume orthogonal resources including, but not limited to, bandwidth, chain state, read/write throughput, and CPU. By independently metering each resource, resources can be granularly priced and stay closer to optimal utilization.
## Motivation
The P-Chain and X-Chain currently have fixed fees and in some cases those fees are fixed to zero.
This makes transaction issuance predictable, but does not provide a feedback mechanism to preserve chain stability under high load. In contrast, the C-Chain, which has the highest and most regular load among the chains on the Primary Network, already supports dynamic fees. This ACP proposes to introduce a similar dynamic fee mechanism for the P-Chain and X-Chain to further improve the Primary Network's stability and resilience under load.
However, unlike the C-Chain, we propose a multidimensional fee scheme with an exponential update rule for each fee dimension. The [HyperSDK](https://github.com/ava-labs/hypersdk) already utilizes a multidimensional fee scheme with optional priority fees and its efficiency is backed by [academic research](https://arxiv.org/abs/2208.07919).
Finally, we split the fee into two parts: a `base fee` and a `priority fee`. The `base fee` is calculated by the network each block to accurately price each resource at a given point in time. Any amount burnt beyond the base fee is treated as the `priority fee`, which buys faster transaction inclusion.
## Specification
We introduce the multidimensional scheme first and then show how to apply the dynamic fee update rule to each fee dimension. Finally, we list the new block verification rules, valid once the new fee scheme activates.
### Multidimensional scheme components
We define four fee dimensions, `Bandwidth`, `Reads`, `Writes`, `Compute`, to describe transaction complexity. In more detail:
- `Bandwidth` measures the transaction size in bytes, as encoded by the AvalancheGo codec. Byte length is a proxy for the network resources needed to disseminate the transaction.
- `Reads` measures the number of DB reads needed to verify the transaction. DB reads include UTXO reads and any other state quantity relevant to the specific transaction.
- `Writes` measures the number of DB writes following transaction verification. DB writes include UTXOs generated as outputs of the transaction and any other state quantity relevant to the specific transaction.
- `Compute` measures the number of signatures to be verified, including those on UTXOs and those authorizing specific operations.
For each fee dimension $i$, we define:
- *fee rate* $r_i$ as the price, denominated in AVAX, to be paid for a transaction with complexity $u_i$ along the fee dimension $i$.
- *base fee* as the minimal fee needed to accept a transaction. The base fee is given by the formula
$$base \ fee = \sum_{i=0}^3 r_i \times u_i$$
- *priority fee* as an optional fee paid on top of the base fee to speed up the transaction inclusion in a block.
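For illustration, the base fee formula can be evaluated with made-up fee rates and complexities (none of these values come from the proposal):

```python
# Illustrative only: fee rates r_i and one transaction's complexity u_i
# across the four dimensions (Bandwidth, Reads, Writes, Compute).
rates = [1, 10, 20, 5]    # r_i, nAVAX per complexity unit
usage = [300, 4, 3, 2]    # u_i, the transaction's usage per dimension

# base fee = sum over i of r_i * u_i  ->  300 + 40 + 60 + 10
base_fee = sum(r * u for r, u in zip(rates, usage))
```

Any amount paid above `base_fee` would be the transaction's priority fee.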
### Dynamic scheme components
Fee rates are updated over time to allow fees to increase when the network is getting congested. Each new block is a potential source of congestion, as its transactions carry complexity that each validator must process to verify and eventually accept the block. The more complexity a block carries, and the more rapidly blocks are produced, the higher the congestion.
We seek a scheme that rapidly increases the fees when block complexity goes above a defined threshold and that equally rapidly decreases the fees once complexity goes down (because blocks carry fewer/simpler transactions, or because they are produced more slowly). We define the desired threshold as a *target complexity rate* $T$: we want to process, every second, a block whose complexity is $T$. Any complexity beyond that causes some congestion that we want to penalize via fees.
In order to update fee rates we track, for each block and each fee dimension, a parameter called the cumulative excess complexity. Fee rates applied to a block will be defined in terms of the cumulative excess complexity, as we show in the following.
Suppose that a block $B_t$ is the current chain tip. $B_t$ has the following features:
- $t$ is its timestamp.
- $\Delta C_t$ is the cumulative excess complexity along fee dimension $i$.
Say a new block $B_{t + \Delta T}$ is built on top of $B_t$, with the following features:
- $t + \Delta T$ is its timestamp
- $C_{t + \Delta T}$ is its complexity along fee dimension $i$.
Then the fee rate $r_{t + \Delta T}$ applied for the block $B_{t + \Delta T}$ along dimension $i$ will be:
$$r_{t + \Delta T} = r^{min} \times e^{\frac{\max\left(0, \Delta C_t - T \times \Delta T\right)}{Denom}}$$
where
- $r^{min}$ is the minimal fee rate along fee dimension $i$
- $T$ is the target complexity rate along fee dimension $i$
- $Denom$ is a normalization constant for the fee dimension $i$
Moreover, once the block $B_{t + \Delta T}$ is accepted, the cumulative excess complexity is updated as follows:
$$\Delta C_{t + \Delta T} = \max\left(0, \Delta C_{t} - T \times \Delta T\right) + C_{t + \Delta T}$$
The fee rate update formula guarantees that fee rates increase if incoming blocks are complex (large $C_{t + \Delta T}$) and if blocks are emitted rapidly (small $\Delta T$). Symmetrically, fee rates decrease to the minimum if incoming blocks are less complex and if blocks are produced less frequently.
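The two formulas above can be sketched per dimension as follows (function names are illustrative, and Python floats stand in for whatever fixed-point representation an implementation would use):

```python
import math

def next_fee_rate(r_min, cumulative_excess, T, dt, denom):
    """Fee rate applied to the next block along one fee dimension:
    r = r_min * exp(max(0, ΔC - T·ΔT) / Denom)."""
    excess = max(0, cumulative_excess - T * dt)
    return r_min * math.exp(excess / denom)

def next_cumulative_excess(cumulative_excess, T, dt, block_complexity):
    """Cumulative excess complexity after accepting a block of the
    given complexity: ΔC' = max(0, ΔC - T·ΔT) + C."""
    return max(0, cumulative_excess - T * dt) + block_complexity
```

For example, with zero prior excess the rate sits at `r_min`; a block of complexity 500 against a target rate of 100/s then raises the excess to 500, and a follow-up block one second later pays `r_min * exp(400 / Denom)`.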
The update formula has a few parameters to be tuned, independently, for each fee dimension. We defer discussion about tuning to the [implementation section](#tuning-the-update-formula).
## Block verification rules
Upon activation of the dynamic multidimensional fees scheme we modify block processing as follows:
- **Bound block complexity**. For each fee dimension $i$, we define a *maximal block complexity* $Max$. A block is only valid if the block complexity $C$ is less than the maximum block complexity: $C \leq Max$.
- **Verify transaction fee**. When verifying each transaction in a block, we confirm that it can cover its own base fee. Note that both base fee and optional priority fees are burned.
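The per-dimension complexity bound can be sketched as (names illustrative):

```python
def block_complexity_valid(block_complexity, max_complexity):
    """A block is valid only if, for every fee dimension i, its
    complexity C_i stays within the cap Max_i."""
    return all(c <= m for c, m in zip(block_complexity, max_complexity))
```

A block exceeding the cap along any single dimension is rejected, even if it is well under the caps of the other dimensions.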
## User Experience
### How will the wallets estimate the fees?
AvalancheGo nodes will provide new APIs exposing the current and expected fee rates, as they are likely to change block by block. Wallets can then use the fee rates to select UTXOs to pay the transaction fees. Moreover, the AvalancheGo implementation proposed above offers a `fees.Calculator` struct that can be reused by wallets and downstream projects to calculate fees.
### How will wallets be able to re-issue Txs at a higher fee?
Wallets should be able to simply re-issue the transaction, since the current AvalancheGo implementation drops mempool transactions whose fee rate is lower than the current one. More specifically, a transaction may be valid the moment it enters the mempool, and it won't be re-verified as long as it stays there. However, as soon as the transaction is selected to be included in the next block, it is re-verified against the latest preferred tip. If the fees are not enough by this time, the transaction is dropped and the wallet can simply re-issue it at a higher fee, or wait for the fee rate to go down. Note that priority fees offer some buffer against an increase in the fee rate: a transaction paying just the base fee will be evicted from the mempool in the face of a fee rate increase, while a transaction paying some extra priority fee may have enough buffer room to stay valid after some amount of fee increase.
### How do priority fees guarantee faster block inclusion?
The AvalancheGo mempool will be restructured to order transactions by priority fee. Transactions paying priority fees will be selected for block inclusion first, without violating any spend dependency.
## Backwards Compatibility
Modifying the fee scheme for P-Chain and X-Chain requires a mandatory upgrade for activation. Moreover, wallets must be modified to properly handle the new fee scheme once activated.
## Reference Implementation
The implementation is split across multiple PRs:
- P-Chain work is tracked in this issue: [https://github.com/ava-labs/avalanchego/issues/2707](https://github.com/ava-labs/avalanchego/issues/2707)
- X-Chain work is tracked in this issue: [https://github.com/ava-labs/avalanchego/issues/2708](https://github.com/ava-labs/avalanchego/issues/2708)
A very important implementation step is tuning the update formula parameters for each chain and each fee dimension. We show here the principles we followed for tuning and a simulation based on historical data.
### Tuning the update formula
The basic idea is to measure the complexity of blocks already accepted and derive the parameters from it. You can find the historical data in [this repo](https://github.com/abi87/complexities).
To simplify the exposition I am purposefully ignoring chain specifics (like P-chain proposal blocks). We can account for chain specifics while processing the historical data. Here are the principles:
- **Target block complexity rate $T$**: calculate the distribution of block complexity and pick a high enough quantile.
- **Max block complexity $Max$**: this is probably the trickiest parameter to set.
Historically we had [pretty big transactions](https://subnets.avax.network/p-chain/tx/27pjHPRCvd3zaoQUYMesqtkVfZ188uP93zetNSqk3kSH1WjED1) (more than 1,000 referenced UTXOs). Setting a max block complexity so high that these big transactions are allowed is akin to setting no complexity cap.
On the other side, we still want to allow, even encourage, UTXO consolidation, so we may want to allow transactions [like this](https://subnets.avax.network/p-chain/tx/2LxyHzbi2AGJ4GAcHXth6pj5DwVLWeVmog2SAfh4WrqSBdENhV).
A principled way to set max block complexity may be the following:
- calculate the target block complexity rate (see previous point)
- calculate the median time elapsed among consecutive blocks
- The product of these two quantities should give us something like a target block complexity.
- Set the max block complexity to, say, $50\times$ the target value.
- **Normalization coefficient $Denom$**: I suggest we size it as follows:
- Find the largest historical peak, i.e. the sequence of consecutive blocks which contained the most complexity in the shortest period of time
- Tune $Denom$ so that it would cause a $10000\times$ increase in the fee rate for such a peak. This increase would push fees from the milliAVAX we normally pay under stable network conditions up to tens of AVAX.
- **Minimal fee rates $r^{min}$**: we could size them so that transactions fees do not change very much with respect to the currently fixed values.
We simulate below how the update formula would behave on a peak period from Avalanche mainnet.
### Features
* **Chain Throughput:** Retrieve detailed metrics on gas consumption, Transactions Per Second (TPS), and gas prices, including rolling windows of data for granular analysis.
* **Cumulative Metrics:** Access cumulative data on addresses, contracts, deployers, and transaction counts, providing insights into network growth over time.
* **Staking Information:** Obtain staking-related data, including the number of validators and delegators, along with their respective weights, across different subnets.
* **Blockchains and Subnets:** Get information about supported blockchains, including EVM Chain IDs, blockchain IDs, and subnet associations, facilitating multi-chain analytics.
* **Composite Queries:** Perform advanced queries by combining different metric types and conditions, enabling detailed and customizable data retrieval.
The Metrics API is designed to provide developers with powerful tools to analyze and monitor on-chain activity across Avalanche’s primary network, Avalanche L1s, and other supported EVM chains. Below is an overview of the key features available:
### Chain Throughput Metrics
* **Gas Consumption**
* **Transactions Per Second (TPS)**
* **Gas Prices**
### Cumulative Metrics
* **Address Growth**
* **Contract Deployment**
* **Transaction Count**
### Staking Information
* **Validator and Delegator Counts**
* **Staking Weights**
### Rolling Window Analytics
* **Short-Term and Long-Term Metrics:** Perform rolling window analysis on various metrics like gas used, TPS, and gas prices, allowing for both short-term and long-term trend analysis.
* **Customizable Time Frames:** Choose from different time intervals (hourly, daily, monthly) to suit your specific analytical needs.
### Blockchain and L1 Information
* **Chain and L1 Mapping:** Get detailed information about EVM chains and their associated L1s, including chain IDs, blockchain IDs, and subnet IDs, facilitating cross-chain analytics.
### Advanced Composite Queries
* **Custom Metrics Combinations**: Combine multiple metrics and apply logical operators to perform sophisticated queries, enabling deep insights and tailored analytics.
* **Paginated Results:** Handle large datasets efficiently with paginated responses, ensuring seamless data retrieval in your applications.
The Metrics API equips developers with the tools needed to build robust analytics, monitoring, and reporting solutions, leveraging the full power of multi-chain data across the Avalanche ecosystem and beyond.
# Rate Limits (/docs/api-reference/metrics-api/rate-limits)
---
title: Rate Limits
description: Rate Limits for the Metrics API
icon: Clock
---
# Rate Limits
Rate limiting is managed through a weighted scoring system, known as Compute Units (CUs). Each API request consumes a specified number of CUs, determined by the complexity of the request. This system is designed to accommodate basic requests while efficiently handling more computationally intensive operations.
## Rate Limit Tiers
The maximum CUs (rate-limiting score) for a user depends on their subscription level and is delineated in the following table:
| Subscription Level | Per Minute Limit (CUs) | Per Day Limit (CUs) |
| :----------------- | :--------------------- | :------------------ |
| Free | 8,000 | 1,200,000 |
> We are working on new subscription tiers with higher rate limits to support even greater request volumes.
## Rate Limit Categories
The CUs for each category are defined in the following table:
| Weight | CU Value |
| :----- | :------- |
| Free | 1 |
| Small | 20 |
| Medium | 100 |
| Large | 500 |
| XL | 1000 |
| XXL | 3000 |
## Rate Limits for Metrics Endpoints
The CUs for each route are defined in the table below:
| Endpoint | Method | Weight | CU Value |
| :---------------------------------------------------------- | :----- | :----- | :------- |
| `/v2/health-check` | GET | Free | 1 |
| `/v2/chains` | GET | Free | 1 |
| `/v2/chains/{chainId}` | GET | Free | 1 |
| `/v2/chains/{chainId}/metrics/{metric}` | GET | Medium | 100 |
| `/v2/chains/{chainId}/teleporterMetrics/{metric}` | GET | Medium | 100 |
| `/v2/chains/{chainId}/rollingWindowMetrics/{metric}` | GET | Medium | 100 |
| `/v2/networks/{network}/metrics/{metric}` | GET | Medium | 100 |
| `/v2/chains/{chainId}/contracts/{address}/nfts:listHolders` | GET | Large | 500 |
| `/v2/chains/{chainId}/contracts/{address}/balances` | GET | XL | 1000 |
| `/v2/chains/43114/btcb/bridged:getAddresses` | GET | Large | 500 |
| `/v2/subnets/{subnetId}/validators:getAddresses` | GET | Large | 500 |
| `/v2/lookingGlass/compositeQuery` | POST | XXL | 3000 |
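For capacity planning, you can check whether a given request mix fits your CU budget by combining the weight table with your tier's limits. A minimal sketch (the helper names are illustrative; the weights and limits are taken from the tables above):

```javascript
// CU weights and Free-tier limits, mirroring the tables above.
const CU = { Free: 1, Small: 20, Medium: 100, Large: 500, XL: 1000, XXL: 3000 };
const FREE_TIER = { perMinute: 8000, perDay: 1200000 };

// Total CU cost of a batch of requests, e.g. { Medium: 50, Free: 10 }.
function totalCUs(requests) {
  return Object.entries(requests).reduce(
    (sum, [weight, count]) => sum + CU[weight] * count,
    0
  );
}

// Check a batch against the per-minute budget for a tier.
function fitsPerMinute(requests, tier = FREE_TIER) {
  return totalCUs(requests) <= tier.perMinute;
}
```

On the Free tier, for example, 80 Medium-weight metric lookups consume the entire 8,000-CU per-minute budget, while a single `compositeQuery` call uses 3,000 CUs.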
### Key Features
* **Real-time notifications:** Receive immediate updates on specified on-chain activities without polling.
* **Customizable:** Specify the desired event type to listen for, customizing notifications based on your individual requirements.
* **Secure:** Employ shared secrets and signature-based verification to ensure that notifications originate from a trusted source.
* **Broad Coverage:**
* **C-chain:** Mainnet and testnet, covering smart contract events, NFT transfers, and wallet-to-wallet transactions.
* **P-Chain and X-Chain:** Address and validator events, staking activities, and other platform-level transactions.
By supporting both the C-chain and the P- and X-Chains, you can monitor an even wider range of Avalanche activities.
### Use cases
* **NFT marketplace transactions**: Get alerts for NFT minting, transfers, auctions, bids, sales, and other interactions within NFT marketplaces.
* **Wallet notifications**: Receive alerts when an address performs actions such as sending, receiving, swapping, or burning assets.
* **DeFi activities**: Receive notifications for various DeFi activities such as liquidity provisioning, yield farming, borrowing, lending, and liquidations.
* **Staking rewards:** Get real-time notifications when a validator stakes, receives delegation, or earns staking rewards on the P-Chain, enabling seamless monitoring of validator earnings and participation.
## APIs for continuous polling vs. Webhooks for events data
The following example uses the address activity webhook topic to illustrate the difference between polling an API for wallet event data versus subscribing to a webhook topic to receive wallet events.
### Continuous polling
Continuous polling is a method where your application repeatedly sends requests to an API at fixed intervals to check for new data or events. Think of it like checking your mailbox every five minutes to see if new mail has arrived, whether or not anything is there.
* You want to track new transactions for a specific wallet.
* Your application calls an API every few seconds (e.g., every 5 seconds) with a query like, “Are there any new transactions for this wallet since my last check?”
* The API responds with either new transaction data or a confirmation that nothing has changed.
**Downsides of continuous polling**
* **Inefficiency:** Your app makes requests even when no new transactions occur, wasting computational resources and bandwidth and potentially incurring higher API costs. For example, if no transactions happen for an hour, your app still sends hundreds of unnecessary requests.
* **Delayed updates:** Since polling happens at set intervals, there’s a potential delay in detecting events. If a transaction occurs just after a poll, your app won’t know until the next check, up to 5 seconds later in our example. This lag can be critical for time-sensitive applications, like trading or notifications.
* **Scalability challenges:** Monitoring one wallet might be manageable, but if you’re tracking dozens or hundreds of wallets, the number of requests multiplies quickly.
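The scaling problem is easy to quantify: with a fixed interval, the request count grows linearly with both the window length and the number of wallets, regardless of whether anything happened on-chain. A trivial helper makes the cost concrete:

```javascript
// Requests issued by fixed-interval polling, independent of on-chain activity.
function pollingRequests(intervalSeconds, windowSeconds, walletCount = 1) {
  return Math.floor(windowSeconds / intervalSeconds) * walletCount;
}

pollingRequests(5, 3600);      // one wallet, one hour: 720 requests
pollingRequests(5, 3600, 100); // 100 wallets, one hour: 72,000 requests
```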
### Webhook subscription
Webhooks are an event-driven alternative where your application subscribes to specific events, and the Avalanche service notifies you instantly when those events occur. It’s like signing up for a delivery alert—when the package (event) arrives, you get a text message right away, instead of checking the tracking site repeatedly.
* Your app registers a webhook specifying an endpoint (e.g., `https://your-app.com/webhooks/transactions`) and the event type (e.g., `address_activity`).
* When a new transaction occurs, we send a POST request to your endpoint with the transaction details.
* Your app receives the data only when something happens, with no need to ask repeatedly.
**Benefits of Avalanche webhooks**
* **Real-time updates:** Notifications arrive the moment a transaction is processed, eliminating delays inherent in polling. This is ideal for applications needing immediate responses, like alerting users or triggering automated actions.
* **Efficiency:** Your app doesn’t waste resources making requests when there’s no new data. Data flows only when events occur. This reduces server load, bandwidth usage, and API call quotas.
* **Scalability:** You can subscribe to events for multiple wallets or event types (e.g., transactions, smart contract calls) without increasing the number of requests your app makes. We handle the event detection and delivery, so your app scales effortlessly as monitoring needs grow.
## Event payload structure
The Event structure always begins with the following parameters:
```json
{
  "webhookId": "6d1bd383-aa8d-47b5-b793-da6d8a115fde",
  "eventType": "address_activity",
  "messageId": "8e4e7284-852a-478b-b425-27631c8d22d2",
  "event": {}
}
```
**Parameters:**
* `webhookId`: Unique identifier for the webhook in your account.
* `eventType`: The event that caused the webhook to be triggered. Multiple event types are planned; for the time being, only the `address_activity` event is supported. The `address_activity` event is triggered whenever the specified addresses participate in a token or AVAX transaction.
* `messageId`: Unique identifier per event sent.
* `event`: Event payload. It contains details about the transaction, logs, and traces. By default, logs and internal transactions are not included; to include them, set `"includeLogs": true` and `"includeInternalTxs": true`.
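Putting the envelope fields together, a receiving endpoint can parse a delivery and route it by `eventType`. This is an illustrative sketch (the handler registry is your own code; only `address_activity` is sent today). Note that `messageId` is a natural de-duplication key for retried deliveries:

```javascript
// Parse a webhook delivery envelope and route it by eventType.
// handlers is a map of eventType -> function(event).
function handleDelivery(rawBody, handlers) {
  const { webhookId, eventType, messageId, event } = JSON.parse(rawBody);
  const handler = handlers[eventType];
  if (!handler) throw new Error(`unhandled eventType: ${eventType}`);
  // messageId can be recorded to skip duplicate (retried) deliveries.
  return { webhookId, messageId, result: handler(event) };
}
```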
### Address Activity webhook
The address activity webhook allows you to track any interaction with any address you specify. Here is an example of this type of event:
```json
{
"webhookId": "263942d1-74a4-4416-aeb4-948b9b9bb7cc",
"eventType": "address_activity",
"messageId": "94df1881-5d93-49d1-a1bd-607830608de2",
"event": {
"transaction": {
"blockHash": "0xbd093536009f7dd785e9a5151d80069a93cc322f8b2df63d373865af4f6ee5be",
"blockNumber": "44568834",
"from": "0xf73166f0c75a3DF444fAbdFDC7e5EE4a73fA51C7",
"gas": "651108",
"gasPrice": "31466275484",
"maxFeePerGas": "31466275484",
"maxPriorityFeePerGas": "31466275484",
"txHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4",
"txStatus": "1",
"input": "0xb80c2f090000000000000000000000000000000000000000000000000000000000000000000000000000000000000000eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee000000000000000000000000b97ef9ef8734c71904d8002f8b6bc66dd9c48a6e000000000000000000000000000000000000000000000000006ca0c737b131f2000000000000000000000000000000000000000000000000000000000011554e000000000000000000000000000000000000000000000000000000006627dadc0000000000000000000000000000000000000000000000000000000000000120000000000000000000000000000000000000000000000000000000000000016000000000000000000000000000000000000000000000000000000000000004600000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000006ca0c737b131f2000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000001200000000000000000000000000000000000000000000000000000000000000160000000000000000000000000b31f66aa3c1e785363f0875a1b74e27b85fd66c70000000000000000000000000000000000000000000000000000000000000001000000000000000000000000be882fb094143b59dc5335d32cecb711570ebdd40000000000000000000000000000000000000000000000000000000000000001000000000000000000000000be882fb094143b59dc5335d32cecb711570ebdd400000000000000000000000000000000000000000000000000000000000000010000000000000000000027100e663593657b064e1bae76d28625df5d0ebd44210000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000c0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000060000000000000000000000000b31f66aa3c1e785363f0875a1b74e27b85fd66c7000000000000000000000000b97ef9ef8734c71904d8002f8b6bc66dd9c48a6e0000000000000000000000000000000000000000000000000000000000000bb80000000000000000000000000000000000000000000000000000000000000000",
"nonce": "4",
"to": "0x1dac23e41fc8ce857e86fd8c1ae5b6121c67d96d",
"transactionIndex": 0,
"value": "30576074978046450",
"type": 0,
"chainId": "43114",
"receiptCumulativeGasUsed": "212125",
"receiptGasUsed": "212125",
"receiptEffectiveGasPrice": "31466275484",
"receiptRoot": "0xf355b81f3e76392e1b4926429d6abf8ec24601cc3d36d0916de3113aa80dd674",
"erc20Transfers": [
{
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4",
"type": "ERC20",
"from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d",
"to": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4",
"value": "30576074978046450",
"blockTimestamp": 1713884373,
"logIndex": 2,
"erc20Token": {
"address": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7",
"name": "Wrapped AVAX",
"symbol": "WAVAX",
"decimals": 18,
"valueWithDecimals": "0.030576074978046448"
}
},
{
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4",
"type": "ERC20",
"from": "0x0E663593657B064e1baE76d28625Df5D0eBd4421",
"to": "0xf73166f0c75a3DF444fAbdFDC7e5EE4a73fA51C7",
"value": "1195737",
"blockTimestamp": 1713884373,
"logIndex": 3,
"erc20Token": {
"address": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E",
"name": "USD Coin",
"symbol": "USDC",
"decimals": 6,
"valueWithDecimals": "1.195737"
}
},
{
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4",
"type": "ERC20",
"from": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4",
"to": "0x0E663593657B064e1baE76d28625Df5D0eBd4421",
"value": "30576074978046450",
"blockTimestamp": 1713884373,
"logIndex": 4,
"erc20Token": {
"address": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7",
"name": "Wrapped AVAX",
"symbol": "WAVAX",
"decimals": 18,
"valueWithDecimals": "0.030576074978046448"
}
}
],
"erc721Transfers": [],
"erc1155Transfers": [],
"internalTransactions": [
{
"from": "0xf73166f0c75a3DF444fAbdFDC7e5EE4a73fA51C7",
"to": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d",
"internalTxType": "CALL",
"value": "30576074978046450",
"gasUsed": "212125",
"gasLimit": "651108",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d",
"to": "0xF2781Bb34B6f6Bb9a6B5349b24de91487E653119",
"internalTxType": "DELEGATECALL",
"value": "30576074978046450",
"gasUsed": "176417",
"gasLimit": "605825",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d",
"to": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E",
"internalTxType": "STATICCALL",
"value": "0",
"gasUsed": "9750",
"gasLimit": "585767",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E",
"to": "0x30DFE0469803BcE76F8F62aC24b18d33D3d6FfE6",
"internalTxType": "DELEGATECALL",
"value": "0",
"gasUsed": "2553",
"gasLimit": "569571",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d",
"to": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7",
"internalTxType": "CALL",
"value": "30576074978046450",
"gasUsed": "23878",
"gasLimit": "566542",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d",
"to": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7",
"internalTxType": "CALL",
"value": "0",
"gasUsed": "25116",
"gasLimit": "540114",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d",
"to": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4",
"internalTxType": "CALL",
"value": "0",
"gasUsed": "81496",
"gasLimit": "511279",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4",
"to": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7",
"internalTxType": "STATICCALL",
"value": "0",
"gasUsed": "491",
"gasLimit": "501085",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4",
"to": "0x0E663593657B064e1baE76d28625Df5D0eBd4421",
"internalTxType": "CALL",
"value": "0",
"gasUsed": "74900",
"gasLimit": "497032",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0x0E663593657B064e1baE76d28625Df5D0eBd4421",
"to": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E",
"internalTxType": "CALL",
"value": "0",
"gasUsed": "32063",
"gasLimit": "463431",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E",
"to": "0x30DFE0469803BcE76F8F62aC24b18d33D3d6FfE6",
"internalTxType": "DELEGATECALL",
"value": "0",
"gasUsed": "31363",
"gasLimit": "455542",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0x0E663593657B064e1baE76d28625Df5D0eBd4421",
"to": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7",
"internalTxType": "STATICCALL",
"value": "0",
"gasUsed": "2491",
"gasLimit": "430998",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0x0E663593657B064e1baE76d28625Df5D0eBd4421",
"to": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4",
"internalTxType": "CALL",
"value": "0",
"gasUsed": "7591",
"gasLimit": "427775",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4",
"to": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7",
"internalTxType": "CALL",
"value": "0",
"gasUsed": "6016",
"gasLimit": "419746",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0x0E663593657B064e1baE76d28625Df5D0eBd4421",
"to": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7",
"internalTxType": "STATICCALL",
"value": "0",
"gasUsed": "491",
"gasLimit": "419670",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d",
"to": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E",
"internalTxType": "STATICCALL",
"value": "0",
"gasUsed": "3250",
"gasLimit": "430493",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E",
"to": "0x30DFE0469803BcE76F8F62aC24b18d33D3d6FfE6",
"internalTxType": "DELEGATECALL",
"value": "0",
"gasUsed": "2553",
"gasLimit": "423121",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d",
"to": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E",
"internalTxType": "STATICCALL",
"value": "0",
"gasUsed": "1250",
"gasLimit": "426766",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
},
{
"from": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E",
"to": "0x30DFE0469803BcE76F8F62aC24b18d33D3d6FfE6",
"internalTxType": "DELEGATECALL",
"value": "0",
"gasUsed": "553",
"gasLimit": "419453",
"transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4"
}
],
"blockTimestamp": 1713884373
}
}
}
```
# Rate Limits (/docs/api-reference/webhook-api/rate-limits)
---
title: Rate Limits
description: Rate Limits for the Webhooks API
icon: Clock
---
Rate limiting is managed through a weighted scoring system, known as Compute Units (CUs). Each API request consumes a specified number of CUs, determined by the complexity of the request. This system is designed to accommodate basic requests while efficiently handling more computationally intensive operations.
## Rate Limit Tiers
The maximum CUs (rate-limiting score) for a user depends on their subscription level and is delineated in the following table:
| Subscription Level | Per Minute Limit (CUs) | Per Day Limit (CUs) |
| :----------------- | :--------------------- | :------------------ |
| Unauthenticated | 6,000 | 1,200,000 |
| Free | 8,000 | 2,000,000 |
| Base | 10,000 | 3,750,000 |
| Growth | 14,000 | 11,200,000 |
| Pro | 20,000 | 25,000,000 |
To update your subscription level, use the [AvaCloud Portal](https://app.avacloud.io/).
* **Attempt 1:** We send the message expecting a response with a `200` status code. If we do not receive a `200` status code within **10 seconds**, the attempt is considered failed. During this window, any non-`2xx` responses are ignored.
* **Attempt 2:** Occurs **10 seconds** after the first attempt, with another 10-second timeout and the same rule for ignoring non-`2xx` responses.
* **Retry Queue After Two Failed Attempts**
If both initial attempts fail, the message enters a **retry queue** with progressively longer intervals between attempts. Each retry attempt still has a 10-second timeout, and non-`2xx` responses are ignored during this window.
The retry schedule is as follows:
| Attempt | Interval |
| ------- | -------- |
| 3 | 1 min |
| 4 | 5 min |
| 5 | 10 min |
| 6 | 30 min |
| 7 | 2 hours |
| 8 | 6 hours |
| 9 | 12 hours |
| 10 | 24 hours |
**Total Retry Duration:** Up to approximately 44.8 hours (2,688 minutes) if all retries are exhausted.
**Interval Timing:** Each retry interval starts 10 seconds after the previous attempt is deemed failed. For example, if attempt 2 fails at t=20 seconds, attempt 3 will start at t=90 seconds (20s + 10s + the 1-minute interval).
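As a sanity check on the 44.8-hour figure: the eight retry intervals in the table sum to 2,686 minutes, and the quoted 2,688-minute total additionally accounts for the 10-second timeouts and gaps between attempts.

```javascript
// Retry intervals in minutes for attempts 3 through 10 (from the table above).
const retryIntervalsMinutes = [1, 5, 10, 30, 120, 360, 720, 1440];

// Sum of the intervals alone: 2,686 minutes (~44.8 hours once timeouts are added).
const totalIntervalMinutes = retryIntervalsMinutes.reduce((a, b) => a + b, 0);
```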
**WebSockets**
* The app connects to the Avalanche RPC API over WSS to receive raw log data.
* It must decode logs, manage connection state, and store data locally.
* On disconnection, it must re-sync via an external Data API or using standard `eth_*` RPC calls (e.g., `eth_getLogs`, `eth_getBlockByNumber`).
Developers generally have two options to fetch this data:
1. **Using RPC methods to index blockchain data on their own**
2. **Leveraging an indexer provider like the Data API**
While both methods aim to achieve the same goal, the Data API offers a more efficient, scalable, and developer-friendly solution. This article delves into why using the Data API is better than relying on traditional RPC (Remote Procedure Call) methods.
### What are RPC methods and their challenges?
Remote Procedure Call (RPC) methods allow developers to interact directly with blockchain nodes. One of their key advantages is that they are standardized and universally understood by blockchain developers across different platforms. With RPC, you can perform tasks such as querying data, submitting transactions, and interacting with smart contracts. These methods are typically low-level and synchronous, meaning they require a deep understanding of the blockchain’s architecture and specific command structures.
You can refer to the [official documentation](https://ethereum.org/en/developers/docs/apis/json-rpc/) to gain a more comprehensive understanding of the JSON-RPC API.
Here’s an example using the `eth_getBalance` method to retrieve the native balance of a wallet:
```bash
curl --location 'https://api.avax.network/ext/bc/C/rpc' \
--header 'Content-Type: application/json' \
--data '{"method":"eth_getBalance","params":["0x8ae323046633A07FB162043f28Cea39FFc23B50A", "latest"],"id":1,"jsonrpc":"2.0"}'
```
This call returns the following response:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": "0x284476254bc5d594"
}
```
The balance in this wallet is 2.9016 AVAX. However, despite the wallet holding multiple tokens such as USDC, the `eth_getBalance` method only returns the AVAX amount and it does so in Wei and in hexadecimal format. This is not particularly human-readable, adding to the challenge for developers who need to manually convert the balance to a more understandable format.
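Doing that conversion by hand means parsing the hex string as a big integer of Wei and dividing by 10^18, the native-token denomination on the C-Chain. A minimal sketch:

```javascript
// eth_getBalance returns hex-encoded Wei; convert to a display-friendly AVAX amount.
// Number() is only precise to ~15 significant digits, which is fine for display.
function weiHexToAvax(hexBalance) {
  return Number(BigInt(hexBalance)) / 1e18;
}

weiHexToAvax("0x284476254bc5d594").toFixed(4); // "2.9016"
```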
#### No direct RPC methods to retrieve token balances
Despite their utility, RPC methods come with significant limitations when it comes to retrieving detailed token and transaction data. Currently, RPC methods do not provide direct solutions for the following:
* **Listing all tokens held by a wallet**: There is no RPC method that provides a complete list of ERC-20 tokens owned by a wallet.
* **Retrieving all transactions for a wallet**: There is no direct method for fetching all transactions associated with a wallet.
* **Getting ERC-20/721/1155 token balances**: The `eth_getBalance` method only returns the balance of the wallet’s native token (such as AVAX on Avalanche) and cannot be used to retrieve ERC-20/721/1155 token balances.
To achieve these tasks using RPC methods alone, you would need to:
* **Query every block for transaction logs**: Scan the entire blockchain, which is resource-intensive and impractical.
* **Parse transaction logs**: Identify and extract ERC-20 token transfer events from each transaction.
* **Aggregate data**: Collect and process this data to compute balances and transaction histories.
#### Manual blockchain indexing is difficult and costly
Using RPC methods to fetch token balances involves an arduous process:
1. You must connect to a node and subscribe to new block events.
2. For each block, parse every transaction to identify ERC-20 token transfers involving the user's address.
3. Extract contract addresses and other relevant data from the parsed transactions.
4. Compute balances by processing transfer events.
5. Store the processed data in a database for quick retrieval and aggregation.
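Step 4 alone is non-trivial. A simplified sketch of the aggregation, assuming Transfer logs have already been decoded into `{ token, from, to, value }` records (a real indexer must also special-case mints and burns from the zero address, proxy contracts, and non-standard tokens):

```javascript
// Apply decoded ERC-20 Transfer events to per-(token, address) balances.
// Values are BigInt because token amounts overflow Number.
function applyTransfers(events) {
  const balances = new Map(); // key: `${token}:${address}` -> BigInt
  const add = (token, addr, delta) => {
    const key = `${token}:${addr}`;
    balances.set(key, (balances.get(key) ?? 0n) + delta);
  };
  for (const { token, from, to, value } of events) {
    add(token, from, -value); // sender balance decreases
    add(token, to, value);    // recipient balance increases
  }
  return balances;
}
```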
#### Why this is difficult
* **Resource-Intensive**: Requires significant computational power and storage to process and store blockchain data.
* **Time-consuming**: Processing millions of blocks and transactions can take an enormous amount of time.
* **Complexity**: Handling edge cases like contract upgrades, proxy contracts, and non-standard implementations adds layers of complexity.
* **Maintenance**: Keeping the indexed data up-to-date necessitates continuous synchronization with new blocks being added to the blockchain.
* **High Costs**: Associated with servers, databases, and network bandwidth.
### The Data API Advantage
The Data API provides a streamlined, efficient, and scalable solution for fetching token balances. Here's why it's the best choice:
With a single API call, you can retrieve all ERC-20 token balances for a user's address:
```javascript
const result = await avalancheSDK.data.evm.balances.listErc20Balances({
  address: "0xYourAddress",
});
```
Sample Response:
```json
{
"erc20TokenBalances": [
{
"ercType": "ERC-20",
"chainId": "43114",
"address": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E",
"name": "USD Coin",
"symbol": "USDC",
"decimals": 6,
"price": {
"value": 1.0,
"currencyCode": "usd"
},
"balance": "15000000",
"balanceValue": {
"currencyCode": "usd",
"value": 9.6
},
"logoUri": "https://images.ctfassets.net/gcj8jwzm6086/e50058c1-2296-4e7e-91ea-83eb03db95ee/8db2a492ce64564c96de87c05a3756fd/43114-0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E.png"
}
// Additional tokens...
]
}
```
As you can see, with a single call the API returns an array of token balances for all the wallet's tokens, including:
* **Token metadata**: Contract address, name, symbol, decimals.
* **Balance information**: Token balance in both hexadecimal and decimal formats; balances of native assets like ETH or AVAX are also available.
* **Price data**: Current value in USD or other supported currencies, saving you the effort of integrating another API.
* **Visual assets**: Token logo URI for better user interface integration.
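Rendering such a response is then just a matter of scaling the raw `balance` by `decimals`; no extra metadata or price lookups are needed. A small formatting helper (field names follow the sample response above):

```javascript
// Format a raw integer token balance using its decimals, e.g. "15000000" @ 6 -> "15".
function formatBalance(balance, decimals) {
  const raw = BigInt(balance);
  const base = 10n ** BigInt(decimals);
  const whole = raw / base;
  // Pad the remainder to full width, then trim trailing zeros for display.
  const frac = (raw % base).toString().padStart(decimals, "0").replace(/0+$/, "");
  return frac ? `${whole}.${frac}` : `${whole}`;
}
```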
If you’re building a wallet, DeFi app, or any application that requires displaying balances, transaction history, or smart contract interactions, relying solely on RPC methods can be challenging. Just as there’s no direct RPC method to retrieve token balances, there’s also no simple way to fetch all transactions associated with a wallet, especially for ERC-20, ERC-721, or ERC-1155 token transfers.
However, by using the Data API, you can retrieve all token transfers for a given wallet **with a single API call**, making the process much more efficient. This approach simplifies tracking and displaying wallet activity without the need to manually scan the entire blockchain.
Below are two examples that demonstrate the power of the Data API: in the first, it returns all ERC transfers, including ERC-20, ERC-721, and ERC-1155 tokens, and in the second, it shows all internal transactions, such as when one contract interacts with another.
The [Data API](/docs/api-reference/data-api), along with the [Metrics API](/docs/api-reference/metrics-api), are the engines behind the [Avalanche Explorer](https://subnets.avax.network/stats/) and the [Core wallet](https://core.app/en/). They are used to display transactions, logs, balances, NFTs, and more. The data and visualizations presented are all powered by these APIs, offering real-time and historical insights that are essential for building sophisticated, data-driven blockchain products.
### Features
* **Extensive L1 Support**: Gain access to data from more than 100 L1s across both mainnet and testnet. If an L1 is listed on the [Avalanche Explorer](https://subnets.avax.network/), you can query its data using the Data API.
* **Transactions and UTXOs**: Easily retrieve details related to transactions, UTXOs, and token transfers from Avalanche EVMs, Ethereum, and Avalanche's Primary Network: the P-Chain, X-Chain, and C-Chain.
* **Blocks**: Retrieve the latest blocks and block details.
* **Balances**: Fetch balances of native, ERC-20, ERC-721, and ERC-1155 tokens along with relevant metadata.
* **Tokens**: Augment your user experience with asset details.
* **Staking**: Get staking-related data for active and historical validations.
### Supported Chains
Avalanche’s architecture supports a diverse ecosystem of interconnected L1 blockchains, each operating independently while retaining the ability to seamlessly communicate with other L1s within the network. Central to this architecture is the Primary Network, Avalanche’s foundational network layer, which all validators were required to validate prior to [ACP-77](/docs/acps/77-reinventing-subnets). The Primary Network runs three essential blockchains:
* The Contract Chain (C-Chain)
* The Platform Chain (P-Chain)
* The Exchange Chain (X-Chain)
However, with the implementation of [ACP-77](/docs/acps/77-reinventing-subnets), this requirement will change. Subnet Validators will be able to operate independently of the Primary Network, allowing for more flexible and affordable Subnet creation and management.
The **Data API** supports a wide range of L1 blockchains (**over 100**) across both **mainnet** and **testnet**, including popular ones like Beam, DFK, Lamina1, Dexalot, Shrapnel, and Pulsar. In fact, every L1 you see on the [Avalanche Explorer](https://explorer.avax.network/) can be queried through the Data API. This list is continually expanding as we keep adding more L1s. For a full list of supported chains, visit [List chains](/docs/api-reference/data-api/evm-chains/supportedChains).
#### The Contract Chain (C-Chain)
The C-Chain is an implementation of the Ethereum Virtual Machine (EVM). The primary network endpoints only provide information related to C-Chain atomic memory balances and import/export transactions. For additional data, please reference the [EVM APIs](/docs/rpcs/c-chain/rpc).
#### The Platform Chain (P-Chain)
The P-Chain is responsible for all validator and L1-level operations. The P-Chain supports the creation of new blockchains and L1s, the addition of validators to L1s, staking operations, and other platform-level operations.
#### The Exchange Chain (X-Chain)
The X-Chain is responsible for operations on digital smart assets known as Avalanche Native Tokens. A smart asset is a representation of a real-world resource (for example, equity, or a bond) with sets of rules that govern its behavior, like "can’t be traded until tomorrow." The X-Chain supports the creation and trade of Avalanche Native Tokens.
| Feature | Description |
| :--------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Chains** | Utilize this endpoint to retrieve the Primary Network chains that an address has transaction history associated with. |
| **Blocks** | Blocks are the container for transactions executed on the Primary Network. Retrieve the latest blocks, a specific block by height or hash, or a list of blocks proposed by a specified NodeID on Primary Network chains. |
| **Vertices** | Prior to Avalanche Cortina (v1.10.0), the X-Chain functioned as a DAG with vertices rather than blocks. These endpoints allow developers to retrieve historical data related to that period of chain history. Retrieve the latest vertices, a specific vertex, or a list of vertices at a specific height from the X-Chain. |
| **Transactions** | Transactions are a user's primary form of interaction with a chain and provide details around their on-chain activity, including staking-related behavior. Retrieve a list of the latest transactions, a specific transaction, a list of active staking transactions for a specified address, or a list of transactions associated with a provided asset id from Primary Network chains. |
| **UTXOs** | UTXOs are fundamental elements that denote the funds a user has available. Get a list of UTXOs for provided addresses from the Primary Network chains. |
| **Balances** | User balances are an essential function of the blockchain. Retrieve balances related to the X and P-Chains, as well as atomic memory balances for the C-Chain. |
| **Rewards** | Staking is the process where users lock up their tokens to support a blockchain network and, in return, receive rewards. It is an essential part of proof-of-stake (PoS) consensus mechanisms used by many blockchain networks, including Avalanche. Using the Data API, you can easily access pending and historical rewards associated with a set of addresses. |
| **Assets** | Get asset details corresponding to the given asset id on the X-Chain. |
#### EVM
The C-Chain is an instance of the Coreth Virtual Machine, and many Avalanche L1s are instances of the *Subnet-EVM*, which is a Virtual Machine (VM) that defines the L1 Contract Chains. *Subnet-EVM* is a simplified version of *Coreth VM* (C-Chain).
| Feature | Description |
| :--------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Chains** | There are a number of chains supported by the Data API. These endpoints can be used to understand which chains are included/indexed as part of the API and retrieve information related to a specific chain. |
| **Blocks** | Blocks are the container for transactions executed within the EVM. Retrieve the latest blocks or a specific block by height or hash. |
| **Transactions** | Transactions are a user's primary form of interaction with a chain and provide details around their on-chain activity. These endpoints can be used to retrieve information related to specific transaction details, internal transactions, contract deployments, specific token standard transfers, and more! |
| **Balances** | User balances are an essential function of the blockchain. Easily retrieve native token, collectible, and fungible token balances related to an EVM chain with these endpoints. |
#### Operations
The Operations API allows users to easily access their on-chain history by creating transaction exports returned in a CSV format. This API supports EVMs as well as non-EVM Primary Network chains.
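As a sketch of consuming such an export, the snippet below totals a downloaded CSV using only Python's standard library. The column names (`tx_hash`, `value`) are illustrative assumptions, not the API's documented schema; consult the Operations API response for the actual columns.

```python
import csv
import io

# Hypothetical export excerpt; real column names come from the Operations API.
export = io.StringIO(
    "tx_hash,value\n"
    "0xabc,1.5\n"
    "0xdef,2.5\n"
)

# DictReader maps each row to the header names, so columns can be
# referenced by name rather than position.
reader = csv.DictReader(export)
total = sum(float(row["value"]) for row in reader)
print(total)  # 4.0
```

In practice the same loop works on a file handle (`open("export.csv")`) instead of the in-memory `StringIO` used here.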
# Rate Limits (/docs/api-reference/data-api/rate-limits)
---
title: Rate Limits
description: Rate Limits for the Data API
icon: Clock
---
Rate limiting is managed through a weighted scoring system, known as Compute Units (CUs). Each API request consumes a specified number of CUs, determined by the complexity of the request. This system is designed to accommodate basic requests while efficiently handling more computationally intensive operations.
## Rate Limit Tiers
The maximum CUs (rate-limiting score) for a user depends on their subscription level and is delineated in the following table:
| Subscription Level | Per Minute Limit (CUs) | Per Day Limit (CUs) |
| :----------------- | :--------------------- | :------------------ |
| Unauthenticated | 6,000 | 1,200,000 |
| Free | 8,000 | 2,000,000 |
| Base | 10,000 | 3,750,000 |
| Growth | 14,000 | 11,200,000 |
| Pro | 20,000 | 25,000,000 |
To update your subscription level, use the [AvaCloud Portal](https://app.avacloud.io/).
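Because both a per-minute and a per-day budget apply simultaneously, a client should throttle against whichever window is closer to exhaustion. The sketch below is a minimal client-side CU tracker, not part of any official SDK; it simply applies the Free tier numbers from the table above (8,000 CUs/minute, 2,000,000 CUs/day) to a rolling log of request costs.

```python
import time
from collections import deque


class CUBudget:
    """Client-side tracker for Compute Unit (CU) consumption.

    Illustrative sketch only: enforces a per-minute and a per-day
    budget over a rolling log of (timestamp, cost) events.
    """

    def __init__(self, per_minute: int, per_day: int) -> None:
        self.per_minute = per_minute
        self.per_day = per_day
        self._events: deque = deque()  # (timestamp, cost) pairs

    def _spent_since(self, now: float, window: float) -> int:
        return sum(cost for ts, cost in self._events if now - ts < window)

    def try_spend(self, cost: int, now: float = None) -> bool:
        """Record a request costing `cost` CUs if both windows allow it."""
        now = time.monotonic() if now is None else now
        # Events older than a day no longer count toward either window.
        while self._events and now - self._events[0][0] >= 86_400:
            self._events.popleft()
        if self._spent_since(now, 60) + cost > self.per_minute:
            return False
        if self._spent_since(now, 86_400) + cost > self.per_day:
            return False
        self._events.append((now, cost))
        return True


# Free tier limits from the table above:
free = CUBudget(per_minute=8_000, per_day=2_000_000)
print(free.try_spend(5_000, now=0.0))    # True: within both windows
print(free.try_spend(5_000, now=1.0))    # False: would exceed 8,000 CUs/min
print(free.try_spend(5_000, now=61.0))   # True: minute window has rolled over
```

A real client would react to the API's rate-limit responses rather than rely solely on local accounting, but pre-checking a budget like this avoids burning requests that are certain to be rejected.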