
Transaction Batching

Combining multiple operations into single transactions to reduce costs and improve efficiency

What is Batching?

Transaction batching is a fundamental optimization technique that combines multiple blockchain operations into a single transaction, dramatically reducing the overhead costs associated with individual submissions. Every blockchain transaction carries fixed costs regardless of payload size, including base gas fees, signature verification, and state root updates. By bundling many operations together, these fixed costs are amortized across all included operations, making each individual action significantly cheaper than it would be in isolation.

The economic benefits of batching become substantial at scale. A single Ethereum transaction costs at minimum 21,000 gas just to exist, before any actual computation occurs. When a protocol needs to distribute tokens to thousands of recipients, sending individual transactions would multiply this base cost thousands of times. Batching these distributions into consolidated transactions can reduce total costs by 90% or more, transforming economically unviable operations into practical ones.
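The amortization math can be sketched directly. The 21,000-gas base cost is real; the per-transfer and batch-dispatcher figures below are illustrative assumptions, not measured values:

```typescript
// Gas-cost comparison for n token transfers, sent individually vs. in one
// batch. BASE_GAS is Ethereum's fixed per-transaction cost; the other two
// constants are assumed figures for illustration only.
const BASE_GAS = 21_000;          // paid once per transaction
const PER_TRANSFER_GAS = 30_000;  // assumed average cost of one transfer
const BATCH_OVERHEAD = 50_000;    // assumed dispatcher overhead per batch

function individualCost(n: number): number {
  return n * (BASE_GAS + PER_TRANSFER_GAS); // base cost paid n times
}

function batchedCost(n: number): number {
  return BASE_GAS + BATCH_OVERHEAD + n * PER_TRANSFER_GAS; // base paid once
}

// Fraction saved by batching. As n grows, this approaches
// BASE_GAS / (BASE_GAS + PER_TRANSFER_GAS), since only fixed costs amortize.
function savings(n: number): number {
  return 1 - batchedCost(n) / individualCost(n);
}
```

Note that under these assumptions only the fixed overhead amortizes away; the larger savings quoted for rollups come from also compressing the per-operation data itself.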

Beyond cost savings, batching improves network efficiency by reducing the number of discrete transactions competing for block space. During periods of high congestion, fewer total transactions mean faster processing for everyone. Batching also simplifies state management for applications, as related operations execute atomically within a single transaction rather than requiring coordination across multiple asynchronous submissions that might partially fail.

How Batching Works

The most common batching mechanism is the multicall pattern, where a smart contract accepts an array of encoded function calls and executes them sequentially within a single transaction. The caller prepares their operations as calldata, bundles them into an array, and submits everything at once. The multicall contract iterates through each call, executing them against the appropriate targets and collecting results. If any call fails, the entire batch can either revert atomically or continue with partial success depending on the implementation.
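A minimal in-memory sketch of the atomic variant follows. Plain TypeScript functions stand in for contract calls; on-chain the EVM rolls back state on revert, so here a snapshot of a toy balance map plays that role:

```typescript
// Toy state: account balances that the batched calls mutate.
type State = Map<string, number>;
type Call = (state: State) => unknown;

// Execute calls in order; if any call throws, restore the snapshot so the
// whole batch behaves atomically, as an on-chain revert would.
function multicall(state: State, calls: Call[]): unknown[] {
  const snapshot = new Map(state);
  try {
    return calls.map((call) => call(state));
  } catch (err) {
    state.clear();
    for (const [k, v] of snapshot) state.set(k, v);
    throw err;
  }
}

// A transfer "call" that throws on insufficient balance.
function transfer(from: string, to: string, amount: number): Call {
  return (state) => {
    const balance = state.get(from) ?? 0;
    if (balance < amount) throw new Error("insufficient balance");
    state.set(from, balance - amount);
    state.set(to, (state.get(to) ?? 0) + amount);
    return true;
  };
}
```

If the second of two transfers in a batch exceeds the sender's balance, the first transfer is rolled back along with it.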

Aggregation services extend this pattern by collecting operations from multiple users. Rather than each user submitting their own transaction, they submit signed messages to an aggregator who bundles many users’ operations together. The aggregator pays the gas upfront and distributes costs across participants or earns fees for the service. This approach is particularly powerful for operations like token claims, where thousands of users might want to claim simultaneously. The aggregator can batch all claims into a handful of transactions, drastically reducing costs compared to individual submissions.
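The aggregator's bookkeeping can be sketched as follows. Signature verification is stubbed out as a callback, and the gas figures are assumptions for illustration:

```typescript
type Claim = { user: string; amount: number; signature: string };

// Collects signed claims off-chain and flushes them in batches, splitting
// the batch's fixed gas overhead evenly across participants.
class ClaimAggregator {
  private pending: Claim[] = [];

  constructor(
    private verify: (c: Claim) => boolean, // stand-in for signature checks
    private batchOverheadGas: number,      // fixed cost of posting a batch
    private perClaimGas: number            // marginal cost per claim
  ) {}

  submit(claim: Claim): boolean {
    if (!this.verify(claim)) return false;
    this.pending.push(claim);
    return true;
  }

  // Gas each user owes if all pending claims go out in one transaction;
  // the fixed overhead shrinks per user as the batch fills.
  perUserGas(): number {
    const n = this.pending.length;
    if (n === 0) return 0;
    return this.batchOverheadGas / n + this.perClaimGas;
  }

  flush(): Claim[] {
    const batch = this.pending;
    this.pending = [];
    return batch;
  }
}
```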

Under the hood, batching leverages the fact that smart contract execution is already inherently batched at the block level. All transactions in a block share context and execute against the same initial state. Batching simply moves this aggregation from the block level to the transaction level, capturing efficiency gains that would otherwise be lost to per-transaction overhead. The tradeoff is increased transaction complexity and gas limits that constrain how many operations can fit in a single batch.

Batching in Rollups

Rollups implement batching at a fundamental architectural level, making it central to their scaling strategy rather than an optional optimization. Every rollup collects user transactions, executes them off-chain, and posts compressed results to the base layer in batches. This batching is what enables rollups to achieve orders of magnitude better throughput than Layer 1. The fixed costs of posting to Ethereum, particularly data availability costs, are spread across hundreds or thousands of individual user operations.

Data compression amplifies the benefits of rollup batching. Rather than posting full transaction data, rollups employ sophisticated compression techniques that exploit redundancy across batched transactions. Common patterns like identical destination addresses, repeated token contracts, or similar value amounts compress efficiently when batched together. Some rollups achieve 10x or better compression ratios, meaning that the marginal cost of including an additional transaction in an existing batch approaches the cost of just its unique data.
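The effect is easy to reproduce with mock calldata in which many transfers share a token contract and cycle through a few destination addresses. Deflate stands in here for rollup-specific compression schemes:

```typescript
import { deflateSync } from "node:zlib";
import { randomBytes } from "node:crypto";

// Mock calldata: 500 transfers, all touching the same token contract and
// cycling through 10 destinations -- the kind of redundancy batches expose.
const token = randomBytes(20);
const destinations = Array.from({ length: 10 }, () => randomBytes(20));

const transfers: Buffer[] = [];
for (let i = 0; i < 500; i++) {
  // 20-byte token + 20-byte destination + 4-byte unique amount
  transfers.push(Buffer.concat([token, destinations[i % 10], randomBytes(4)]));
}
const batched = Buffer.concat(transfers);

// The repeated addresses compress away; the residual is mostly the amounts.
const ratio = batched.length / deflateSync(batched).length;
```

The same 22 KB of fully random bytes would barely compress at all, which is why the marginal cost of one more transaction in a batch approaches the cost of only its unique fields.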

The sequencer, the component responsible for ordering and batching rollup transactions, makes continuous tradeoffs between latency and efficiency. Waiting longer to accumulate more transactions improves batching efficiency but delays confirmation for early submitters. Most rollups seal batches based on time intervals, transaction-count thresholds, or a combination of the two, attempting to balance user experience with cost optimization. The economic structure of rollups means that periods of high activity actually reduce per-transaction costs, as more transactions share the fixed posting overhead.
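The trigger logic can be sketched as sealing a batch on whichever limit is hit first; the thresholds here are arbitrary example values, and timestamps are passed in explicitly to keep the sketch deterministic:

```typescript
// Seals a batch when a transaction-count threshold or a time window is
// reached, whichever comes first.
class BatchTrigger {
  private count = 0;
  private windowStart: number;

  constructor(
    private maxCount: number,  // seal after this many transactions
    private maxWaitMs: number, // ...or after this much elapsed time
    now: number
  ) {
    this.windowStart = now;
  }

  // Returns true when the pending batch should be sealed and posted.
  onTransaction(now: number): boolean {
    this.count += 1;
    if (this.count >= this.maxCount || now - this.windowStart >= this.maxWaitMs) {
      this.count = 0;          // start a fresh batch
      this.windowStart = now;
      return true;
    }
    return false;
  }
}
```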

User-Level Batching

Multicall contracts have become standard infrastructure on EVM chains, enabling users to execute complex multi-step operations atomically. The classic example is the approve-and-swap pattern. Without batching, swapping tokens requires first approving the exchange to spend your tokens, waiting for confirmation, then executing the swap in a separate transaction. With multicall, both operations execute in a single transaction, saving gas, reducing confirmation time, and eliminating the risk of approval-swap race conditions.
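A toy version of the approve-and-swap pattern, with an in-memory stand-in for the token (on-chain, both steps would be ABI-encoded calls routed through a multicall contract and the sequential loop would be the contract's dispatch):

```typescript
// Minimal token with allowances -- enough to show why approve must precede
// the swap's transferFrom, and why batching them removes the wait between.
class Token {
  balances = new Map<string, number>();
  allowances = new Map<string, number>(); // key: `${owner}:${spender}`

  approve(owner: string, spender: string, amount: number): void {
    this.allowances.set(`${owner}:${spender}`, amount);
  }

  transferFrom(spender: string, from: string, to: string, amount: number): void {
    const key = `${from}:${spender}`;
    const allowed = this.allowances.get(key) ?? 0;
    if (allowed < amount) throw new Error("allowance exceeded");
    this.allowances.set(key, allowed - amount);
    this.balances.set(from, (this.balances.get(from) ?? 0) - amount);
    this.balances.set(to, (this.balances.get(to) ?? 0) + amount);
  }
}

// Both operations execute back-to-back in one "transaction", so no separate
// confirmation wait (or approval race window) sits between them.
function approveAndSwap(token: Token, user: string, exchange: string, amount: number): void {
  const batch: Array<() => void> = [
    () => token.approve(user, exchange, amount),
    () => token.transferFrom(exchange, user, exchange, amount), // swap's pull leg
  ];
  for (const step of batch) step(); // sequential, atomic in a real multicall
}
```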

Account abstraction elevates batching from a power user feature to a foundational capability. Smart contract wallets can batch arbitrary operations natively, combining what would traditionally require multiple transactions into unified UserOperations. A user might deposit into a lending protocol, borrow against their deposit, and swap the borrowed assets in a single atomic action. Paymasters can even sponsor gas for these batched operations, completely abstracting transaction management from the user’s perspective.
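In simplified form, the wallet side of this might look like the sketch below. The shape loosely follows ERC-4337's executeBatch-style wallets, but the types and dispatch are illustrative, not the standard's actual encoding:

```typescript
type Execution = { target: string; data: string };
type Handler = (data: string) => void;

// Toy smart-contract wallet: runs a batch of calls in order and charges gas
// either to the wallet owner or to a sponsoring paymaster.
class SmartWallet {
  constructor(
    private handlers: Map<string, Handler>, // target -> stand-in contract logic
    private gasBalances: Map<string, number>
  ) {}

  gasBalance(account: string): number {
    return this.gasBalances.get(account) ?? 0;
  }

  executeBatch(owner: string, executions: Execution[], gasCost: number, paymaster?: string): void {
    const payer = paymaster ?? owner; // paymaster sponsorship, if provided
    const balance = this.gasBalance(payer);
    if (balance < gasCost) throw new Error("cannot cover gas");
    for (const { target, data } of executions) {
      const handler = this.handlers.get(target);
      if (!handler) throw new Error(`unknown target ${target}`);
      handler(data);
    }
    this.gasBalances.set(payer, balance - gasCost);
  }
}
```

The deposit-borrow-swap flow from the paragraph above becomes a three-element `executions` array, and a funded paymaster lets an owner with no gas balance execute it.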

Developer tooling has evolved to make batching accessible without deep protocol knowledge. Libraries like ethers.js and viem provide multicall utilities that automatically batch read operations. DeFi aggregators batch swaps across multiple pools. Wallet interfaces increasingly present batched operations as single actions rather than exposing the underlying transaction complexity. This progressive abstraction means that users benefit from batching efficiency without needing to understand the mechanism.

Batching Trade-offs

Atomicity is perhaps the most significant consideration when designing batched operations. When operations execute together atomically, any failure reverts the entire batch. This can be desirable when operations must succeed together, like a flash loan that borrows, arbitrages, and repays. However, it can be problematic when independent operations are bundled and one failure causes unrelated operations to revert. Some multicall implementations offer try-catch semantics that allow partial success, but this introduces complexity in handling mixed results.
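Those semantics can be sketched in the spirit of Multicall3's `tryAggregate`, where a `requireSuccess` flag chooses between atomic and partial-success behavior:

```typescript
type CallResult =
  | { success: true; value: unknown }
  | { success: false; error: string };

// Runs each call, recording failures instead of aborting -- unless
// requireSuccess is set, in which case any failure propagates and the
// caller treats the whole batch as reverted.
function tryMulticall(calls: Array<() => unknown>, requireSuccess = false): CallResult[] {
  return calls.map((call) => {
    try {
      return { success: true, value: call() };
    } catch (err) {
      if (requireSuccess) throw err;
      return { success: false, error: String(err) };
    }
  });
}
```

The complexity the paragraph mentions shows up immediately in the return type: every consumer must now branch on `success` per entry rather than assuming the batch either fully succeeded or fully reverted.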

Latency increases with aggressive batching strategies. Waiting to accumulate a larger batch delays execution for operations submitted early in the batching window. For time-sensitive operations like arbitrage or liquidations, this latency is unacceptable. For routine operations like reward claims or periodic rebalancing, waiting a few extra seconds or minutes is worthwhile for the cost savings. Applications must match their batching strategy to their latency requirements, sometimes maintaining multiple pathways for different operation types.

Implementation complexity grows as batching becomes more sophisticated. Simple multicall is straightforward, but advanced patterns like cross-contract batching, conditional execution, or multi-user aggregation require careful design to handle edge cases securely. Gas estimation becomes harder when transaction contents vary based on batch composition. Debugging failed batched transactions is more challenging than debugging simple individual calls. Teams must weigh these complexity costs against batching benefits, sometimes choosing simpler approaches that capture most of the value without all the engineering overhead.
