Understanding Plonky2: A High-Performant Rust-Based SNARK Framework

November 4, 2025

8 mins read

Plonky2 is a standout framework we have embraced in our ecosystem, combining the scaling benefits of both SNARKs and STARKs. As we continue to build on this foundation, this blog offers a deeper look into Plonky2 and its role within Orochi.

What Is zk-SNARK?

Before diving into Plonky2, it’s important to first understand what a zk-SNARK is.
Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge (zk-SNARKs) have emerged as a powerful cryptographic tool, enabling the verification of computations via succinct proofs without revealing the underlying data. They find applications in various domains, including blockchain scalability, privacy-preserving computation, and secure multi-party computation.

Understanding Plonky2

Plonky2 is a cutting-edge SNARK framework implemented in Rust, designed for high performance and flexibility. It builds upon the TurboPLONK proving system but replaces the polynomial commitment scheme with one based on FRI (Fast Reed-Solomon Interactive Oracle Proof of Proximity). This approach allows Plonky2 to achieve fast recursive composition and to use a 64-bit field for improved efficiency.

Key features: 

  • Efficient 64-bit Goldilocks field operations
  • No trusted setup requirement
  • Post-quantum security
  • Custom gates for complex computations
  • Customizable trade-offs between proof size, proving time, and verification time

Plonky2's Underlying Technologies

TurboPLONK

TurboPLONK, an enhanced version of PLONK, offers significant advantages over traditional R1CS-based SNARKs by providing a more flexible and expressive framework for defining constraints. It enables the representation of a wider range of computations, allowing for more efficient implementations of complex operations like fixed-base scalar multiplication. Plonky2 leverages TurboPLONK's capabilities to implement custom gates for operations such as division, resulting in reduced circuit complexity and proof size. The efficiency gains from TurboPLONK's ability to express operations more concisely, coupled with advanced techniques like intermediate sum optimization and lookup methods, contribute directly to Plonky2's high performance.

FRI (Fast Reed-Solomon Interactive Oracle Proof of Proximity)

FRI is a protocol for testing proximity to Reed-Solomon codes. In Plonky2, it serves as the basis for the polynomial commitment scheme, replacing the Kate commitment scheme used in the original PLONK. FRI offers several advantages:
  • Support for any prime field with smooth subgroups
  • No requirement for a large-characteristic field
  • Enables the use of a 64-bit field in Plonky2
  • Post-quantum security
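To make the proximity-testing idea more concrete, here is a minimal sketch of a single FRI folding step in coefficient form over the Goldilocks prime. This is purely illustrative and is not Plonky2's actual FRI code: it splits a polynomial into even and odd parts and combines them with a verifier challenge, halving the degree each round.

```rust
// One FRI folding step in coefficient form (illustrative only, not the
// actual Plonky2 implementation): write f(X) = f_e(X^2) + X * f_o(X^2) and
// fold to f'(X) = f_e(X) + beta * f_o(X), halving the degree.
const P: u128 = 0xFFFF_FFFF_0000_0001; // Goldilocks prime, 2^64 - 2^32 + 1

fn add(a: u64, b: u64) -> u64 {
    ((a as u128 + b as u128) % P) as u64
}

fn mul(a: u64, b: u64) -> u64 {
    ((a as u128 * b as u128) % P) as u64
}

/// Fold a coefficient vector with a verifier challenge `beta`.
fn fri_fold(coeffs: &[u64], beta: u64) -> Vec<u64> {
    let mut folded = Vec::with_capacity((coeffs.len() + 1) / 2);
    for pair in coeffs.chunks(2) {
        let even = pair[0];
        let odd = if pair.len() == 2 { pair[1] } else { 0 };
        folded.push(add(even, mul(beta, odd))); // f_e[i] + beta * f_o[i]
    }
    folded
}

fn main() {
    // f(X) = 3 + 5X + 7X^2 + 2X^3 folds to (3 + 11*5) + (7 + 11*2) X.
    let folded = fri_fold(&[3, 5, 7, 2], 11);
    println!("{:?}", folded); // [58, 29]
}
```

In the real protocol the prover works with evaluations over a coset rather than coefficients and commits to each folded codeword with a Merkle tree, but the degree-halving structure is the same.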

Plonky2 Design

1. Field Selection

Plonky2 operates in the prime field $\mathbb{F}_p$, where $p = 2^{64} - 2^{32} + 1$, known as the Goldilocks field. This choice offers several benefits:
  • Elements fit within a 64-bit word, enabling efficient computations
  • The structure of p allows for efficient modular reduction
  • Reduces the need to keep constants in registers
For operations requiring a larger field for soundness, Plonky2 uses an extension field $\mathbb{F}_p(\phi)$.
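As a small illustration of why this prime is convenient, the sketch below (assumed names, not Plonky2's actual code) reduces a 128-bit product modulo $p$ using only the identities $2^{64} \equiv 2^{32} - 1$ and $2^{96} \equiv -1 \pmod{p}$; production implementations replace the final `%` with a couple of conditional subtractions.

```rust
/// The Goldilocks prime p = 2^64 - 2^32 + 1.
const P: u64 = 0xFFFF_FFFF_0000_0001;

/// Reduce a 128-bit value modulo p, exploiting 2^64 ≡ 2^32 - 1 and
/// 2^96 ≡ -1 (mod p). Simplified sketch; real implementations avoid the
/// final `%` with conditional subtractions.
fn reduce128(x: u128) -> u64 {
    let lo = x as u64;             // bits 0..63
    let hi = (x >> 64) as u64;     // bits 64..127
    let hi_lo = hi & 0xFFFF_FFFF;  // bits 64..95, weight 2^64 ≡ 2^32 - 1
    let hi_hi = hi >> 32;          // bits 96..127, weight 2^96 ≡ -1
    let mut t = lo as u128 + (hi_lo as u128) * 0xFFFF_FFFF;
    t += (P - hi_hi) as u128;      // subtract hi_hi without going negative
    (t % P as u128) as u64
}

/// Field multiplication: one 64x64 -> 128-bit product plus a cheap reduction.
fn gl_mul(a: u64, b: u64) -> u64 {
    reduce128(a as u128 * b as u128)
}

fn main() {
    let a = 0x1234_5678_9ABC_DEF0u64 % P;
    let b = 0x0FED_CBA9_8765_4321u64 % P;
    println!("a * b mod p = {:#x}", gl_mul(a, b));
}
```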

2. PLONK Modifications

2.1 Custom Gates
Plonky2 leverages custom gates to implement complex operations efficiently within a single gate, following prior work such as TurboPLONK. For example, division can be handled by a custom gate: instead of writing q = x/y directly, we enforce q * y = x together with y != 0, or equivalently represent each division gate as the two constraints (q * y = x) and (y * i = 1), where i is the purported inverse of y. Here i is called the advice wire of the division gate.
Filtering constraints can then be used to enforce a custom gate's constraints only on the relevant rows of the execution trace, improving overall efficiency.
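Below is a minimal sketch of checking those two constraints on concrete witness values. The field arithmetic and function names are illustrative only; this is not Plonky2's gate API, just the algebra a division gate enforces.

```rust
// Illustrative check of the two division-gate constraints, q * y = x and
// y * i = 1, over the Goldilocks field. `i` plays the role of the advice
// wire holding the purported inverse of y. Not the actual Plonky2 gate code.
const P: u64 = 0xFFFF_FFFF_0000_0001;

fn mul(a: u64, b: u64) -> u64 {
    ((a as u128 * b as u128) % P as u128) as u64
}

/// Modular exponentiation, used here only to compute y^{p-2} = y^{-1}.
fn pow(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1u64;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = mul(acc, base);
        }
        base = mul(base, base);
        exp >>= 1;
    }
    acc
}

/// True iff the witness (x, y, q, i) satisfies both division-gate constraints.
fn division_gate_satisfied(x: u64, y: u64, q: u64, i: u64) -> bool {
    mul(q, y) == x && mul(y, i) == 1
}

fn main() {
    // Example witness: x = 6, y = 3, q = 2, i = 3^{-1} mod p (Fermat inverse).
    let (x, y, q) = (6u64, 3u64, 2u64);
    let i = pow(y, P - 2);
    println!("constraints hold: {}", division_gate_satisfied(x, y, q, i));
}
```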
2.2 Advice Wires
Advice wires are used for non-deterministic values in the circuit, such as the purported inverse in division. By excluding them from the permutation argument, Plonky2 reduces the degree of the argument, leading to improved performance.
2.3 Cumulative Products
To handle the high-degree constraints arising from PLONK's permutation argument, Plonky2 breaks them down into lower-degree constraints using cumulative (partial) products: instead of a single high-degree constraint covering the full grand product, the product is accumulated in chunks of 8 factors through intermediate partial-product values, so that each individual constraint stays at low degree. This approach significantly improves performance.

3. Soundness Analysis

Because it operates over a small field, Plonky2 comes with a more detailed soundness analysis of the permutation argument and of the way constraints are combined. To improve soundness, certain subprotocols are repeated multiple times in parallel, reducing the probability that an invalid proof is accepted.
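As a back-of-the-envelope illustration (the exact parameters in Plonky2 differ): if one run of such a subprotocol wrongly accepts with probability at most $\epsilon$, running it $k$ times independently and accepting only when every run accepts drives the error down exponentially,

$$\Pr[\text{all } k \text{ runs accept a false statement}] \le \epsilon^{k}, \qquad \text{e.g. } \epsilon = 2^{-50},\ k = 2 \;\Rightarrow\; \epsilon^{2} = 2^{-100}.$$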

4. Zero-Knowledge

Plonky2 achieves zero-knowledge by blinding polynomials before padding to a power of two. The idea is to add randomized rows to the trace and connect them to the original trace using copy constraints, which effectively blinds the polynomials.

5. Hashing for FRI Commitments

Efficient hashing is crucial for the performance of FRI-based proof generation. Plonky2 uses Poseidon^π as its hash function, chosen for its efficiency in arithmetic circuits. The specific configuration of Poseidon^π in Plonky2 is optimized for performance and security.
5.1 Hashing in the Circuit
Plonky2 uses a single PoseidonGate to evaluate an entire instance of Poseidon^π, allowing hashes to be arithmetized efficiently within the circuit and resulting in a wide trace of 135 columns. This width is considered a reasonable middle ground in the trade-off between wider, shorter traces and narrower, longer traces.

6. Polynomial Testing

Plonky2 follows the approach of the DEEP-ALI protocol to construct a polynomial commitment scheme from the original FRI protocol. The PCS is used with a larger relative distance parameter δ for efficient recursion.
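Schematically (simplified relative to the full DEEP-ALI treatment), to open a committed polynomial $f$ at a point $z$ with claimed value $y = f(z)$, the prover shows that the quotient below is itself a low-degree polynomial, and FRI is then run on that quotient:

$$q(X) = \frac{f(X) - y}{X - z}, \qquad f(z) = y \iff (X - z) \mid \bigl(f(X) - y\bigr).$$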

Optimizations

The Plonky2 implementation adopts several optimizations to improve efficiency:
1. Structure of the Trace
The trace in Plonky2 is structured as a disjoint-set forest, where each set consists of trace elements required to hold the same value, as enforced by the copy constraints of the permutation argument; this structure makes copy constraints efficient to manage while the circuit is being built (see the sketch below).
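As a rough illustration of that data structure (illustrative names, not Plonky2's internal types), a disjoint-set forest with path compression can record which wires are forced equal by copy constraints:

```rust
// A minimal union-find (disjoint-set forest) sketch of how copy constraints
// can be tracked while building a circuit: wires that must carry the same
// value end up in the same set, which later feeds the permutation argument.
struct DisjointSet {
    parent: Vec<usize>,
}

impl DisjointSet {
    fn new(num_wires: usize) -> Self {
        DisjointSet { parent: (0..num_wires).collect() }
    }

    /// Find the representative of a wire's set, with path compression.
    fn find(&mut self, x: usize) -> usize {
        if self.parent[x] != x {
            let root = self.find(self.parent[x]);
            self.parent[x] = root;
        }
        self.parent[x]
    }

    /// Record a copy constraint: wire `a` and wire `b` must hold equal values.
    fn connect(&mut self, a: usize, b: usize) {
        let (ra, rb) = (self.find(a), self.find(b));
        if ra != rb {
            self.parent[ra] = rb;
        }
    }
}

fn main() {
    let mut sets = DisjointSet::new(6);
    sets.connect(0, 3); // e.g. output of gate 0 feeds an input of gate 1
    sets.connect(3, 5);
    assert_eq!(sets.find(0), sets.find(5)); // all three wires share one value
    println!("wires 0, 3, 5 are in the same copy-constraint set");
}
```

Each resulting set corresponds to one cycle of the wiring permutation that the copy-constraint (permutation) argument then enforces.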
2. FRI Optimizations
Plonky2 implements various optimizations in the FRI protocol:
  • Minimal number of Merkle trees (e.g. combining preprocessed polynomials into a single Merkle tree)
  • Optimal reduction arities found by exhaustive search to minimize proof size; when optimizing for recursion cost, Plonky2 uses a fixed arity of 8 to keep the verifier’s work more uniform
  • Strategic placement of entire oracle data blocks within a single Merkle tree leaf for efficient querying
  • Pruning of overlapping Merkle paths for the same queries
  • Use of Merkle caps to shave a few hashes off each Merkle proof while keeping path lengths fixed
  • Early termination of FRI's reduction phase once the degree is small enough
  • Grinding (proof-of-work) to enhance security
3. Poseidon Optimizations
Specific optimizations for Poseidon in Plonky2 include:
  1. Use of a circulant MDS matrix with powers of two
  2. SIMD operations for parallelized hashing
  3. Hand-tuned assembly in x86-64 and ARM64 implementations

Final Protocol

The Plonky2 protocol can be summarized in the following steps:
  1. Preprocessing:
  • Circuit Construction
  • Computation of polynomials encoding constants and the extended permutation
  • Building of Merkle tree for proving and verification keys
  2. Main Protocol (TurboPLONK argument + FRI-based PCS)
For brevity, we denote the prover by P and the verifier by V. The main protocol works as follows:
  • P generates and sends commitments to wire polynomials, permutation polynomials, partial product polynomials, and quotient polynomials
  • V samples randomness for permutation argument and combining constraints, as well as the evaluation point for the FRI-based PCS protocol
  • P sends evaluations of polynomials at the evaluation point
  • P and V use the FRI protocol to verify these openings
  • V verifies the Plonky2 argument equations
The non-interactive variant is obtained by applying the Fiat-Shamir transform.
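To illustrate that last step, here is a minimal transcript sketch of the Fiat-Shamir transform. It uses SHA-256 from the widely available `sha2` crate purely as a stand-in; Plonky2 itself derives challenges with a Poseidon-based challenger, so the structure, not the hash, is the point.

```rust
// Fiat-Shamir sketch: every verifier challenge is derived by hashing the
// transcript of prover messages so far, removing the need for interaction.
use sha2::{Digest, Sha256};

struct Transcript {
    hasher: Sha256,
}

impl Transcript {
    fn new(label: &[u8]) -> Self {
        let mut hasher = Sha256::new();
        hasher.update(label);
        Transcript { hasher }
    }

    /// Absorb a prover message (e.g. a polynomial commitment) into the transcript.
    fn absorb(&mut self, message: &[u8]) {
        self.hasher.update(message);
    }

    /// Squeeze a challenge, here reduced into the Goldilocks field.
    fn challenge(&mut self) -> u64 {
        const P: u64 = 0xFFFF_FFFF_0000_0001;
        let digest = self.hasher.clone().finalize();
        // Re-absorb the digest so later challenges depend on earlier ones.
        self.hasher.update(&digest);
        u64::from_le_bytes(digest[..8].try_into().unwrap()) % P
    }
}

fn main() {
    let mut t = Transcript::new(b"plonky2-sketch");
    t.absorb(b"wire-commitment-bytes");
    let beta = t.challenge();   // e.g. permutation-argument randomness
    t.absorb(b"quotient-commitment-bytes");
    let zeta = t.challenge();   // e.g. evaluation point for the FRI-based PCS
    println!("beta = {beta}, zeta = {zeta}");
}
```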

Evaluation

Plonky2's performance can be characterized by three key metrics:
  1. Recursion Threshold
  • Depends on the codeword rate and the desired security level
  • With a codeword rate of 1/8 and 100 bits of security, the threshold is about 2^12 gates
  2. Proof Generation Time
  • When optimized for speed (1/8 codeword rate): ~300 ms for a recursive proof on a 2021 MacBook Air
  • When optimized for size (1/256 codeword rate): ~11.6 seconds on the same machine
  3. Proof Size
  • When optimized for size (1/256 codeword rate): ~43 kilobytes per proof
Plonky2 has powered systems such as Polygon Zero's zkEVM, a prominent zero-knowledge virtual machine for Ethereum. Thanks to its high performance and flexibility, Plonky2 enables efficient zero-knowledge proofs for complex computations, making it a valuable tool for advancing blockchain technology.

Conclusion

Plonky2 represents a major leap forward in SNARK technology, combining a 64-bit field, efficient hashing, custom gates, and various optimizations to achieve fast recursive composition and compact proof sizes. Its design provides the flexibility needed to enhance blockchain scalability, enable privacy-preserving computations, and support secure multi-party computations. This makes Plonky2 an ideal framework for applications that require efficient and robust zero-knowledge proofs, offering exciting new possibilities in the field of cryptography.

About Orochi Network

Orochi Network is the world's first zkDA Layer recognized by the Ethereum Foundation. By leveraging Zero-Knowledge Proofs (ZKPs), Orochi ensures data integrity, security, and interoperability, empowering developers with the tools to overcome the limitations of on-chain execution and scalability in Web3. At the core, Orochi offers the world's first verifiable database designed for enterprises, AI/ML, zkML, zkVMs, verifiable computation, Web3 applications, and more.
