Batching JSON-RPC requests
JSON-RPC request batching lets you send several JSON-RPC method invocations within a single HTTPS call. A batch request consists of an array of individual JSON-RPC requests. Batching improves performance by reducing the network latency you would otherwise incur by making several separate HTTP requests.
You can use batching:
- directly, by sending an HTTPS request that contains an array of individual JSON-RPC calls,
- using Ethers, by instantiating a JsonRpcBatchProvider,
- using Viem, by configuring the batch parameter when creating a client via createPublicClient.
The result of this operation is an array of responses, ordered to correspond to the individual requests in the batch.
If a particular request fails:
- The batch request still returns a 200 OK HTTP status.
- The response object corresponding to the failing JSON-RPC call contains an error (see the sketch below).
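For illustration, a failing entry in an otherwise successful batch response might look like the following. The shape follows the standard JSON-RPC 2.0 error object; the concrete code and message here are assumed and depend on the method and the failure:
{
  "id": 3,
  "jsonrpc": "2.0",
  "error": {
    "code": -32602,
    "message": "invalid params"
  }
}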
Request batching is supported over HTTPS only. There is currently no batching support over WSS.
Example: direct batching
This example shows low-level request batching by issuing an array of JSON-RPC method calls; an illustrative sketch of the response follows the request.
curl https://mainnet.gateway.tenderly.co/$TENDERLY_NODE_ACCESS_KEY \
-X POST \
-H "Content-Type: application/json" \
-d '[
{"jsonrpc": "2.0", "id": 1, "method": "eth_blockNumber", "params": []},
{"jsonrpc": "2.0", "id": 2, "method": "eth_accounts", "params": []},
{"jsonrpc":"2.0","id":3,"method":"eth_getBalance","params":["0xd8da6bf26964af9d7eed9e03e53415d37aa96045","latest"]},
{"jsonrpc":"2.0","id":4,"method":"eth_getStorageAt","params":["0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2","0x3a988d762a24303c37d08f1543db6143453b579691d5c20fed39629ff1334cca","latest"]},
{"jsonrpc":"2.0","id":0,"method":"eth_gasPrice","params":[]},
{"jsonrpc": "2.0", "id": 3, "method": "tenderly_simulateTransaction", "params": []}
]'
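The response is an array with one object per request, matched by id. The following truncated sketch is illustrative only: the result value is made up, and it assumes the final tenderly_simulateTransaction call fails because it was sent with empty params, so its entry carries an error object like the one shown earlier:
[
  {"id": 1, "jsonrpc": "2.0", "result": "0x136f0d2"},
  ...
  {"id": 5, "jsonrpc": "2.0", "error": {"code": -32602, "message": "invalid params"}}
]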
Example: Viem
Viem makes batching possible at the client level, by configuring the http transport to perform batching.
import { createPublicClient, http } from "viem";
import { mainnet } from "viem/chains";
const client = createPublicClient({
  chain: mainnet,
  transport: http(
    `https://mainnet.gateway.tenderly.co/${process.env.TENDERLY_NODE_ACCESS_KEY}`,
    {
      batch: true,
    }
  ),
});
// These calls are issued in the same tick, so they are sent as a single batched JSON-RPC request
const [blockNumber, balance, ensName, chainId] = await Promise.all([
  client.getBlockNumber(),
  client.getBalance({ address: "0xd2135CfB216b74109775236E36d4b433F1DF507B" }),
  client.getEnsName({ address: "0xd2135CfB216b74109775236E36d4b433F1DF507B" }),
  client.getChainId(),
]);
console.log("Results are in");
console.log({ blockNumber, balance, ensName, chainId });

// So will these: the two un-awaited calls below also end up in one batched request
const bnPromise = client.getBlockNumber();
const balancePromise = client.getBalance({
  address: "0xd2135CfB216b74109775236E36d4b433F1DF507B",
});
console.log(await bnPromise);
console.log(await balancePromise);
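Example: Ethers
Ethers exposes batching through the JsonRpcBatchProvider. A minimal sketch, assuming ethers v5 (where the provider lives under providers.JsonRpcBatchProvider) and the same Tenderly Node URL as above; calls issued in the same event loop tick are aggregated into one batched request:
import { providers, utils } from "ethers";

const provider = new providers.JsonRpcBatchProvider(
  `https://mainnet.gateway.tenderly.co/${process.env.TENDERLY_NODE_ACCESS_KEY}`
);

// Both calls below are issued in the same tick and flushed as one batched JSON-RPC request
const [blockNumber, balance] = await Promise.all([
  provider.getBlockNumber(),
  provider.getBalance("0xd2135CfB216b74109775236E36d4b433F1DF507B"),
]);

console.log({ blockNumber, balance: utils.formatEther(balance) });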
Limits and pricing
The total usage footprint of a batch is expressed in Tenderly Units (TU) and equals the sum of the TU of all requests within the batch, according to the pricing of the individual methods.
Batch-request usage counts toward your monthly quota and is subject to rate limiting.