A simple squid processor indexing Orca
Here we look into a Squid SDK indexer (a squid) that retrieves data on Orca Exchange swaps made through the Whirlpool program.
Pre-requisites: Node.js v20 or newer, Git, Docker.
Begin by retrieving the template and installing dependencies:
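A minimal sketch of that step, assuming the tutorial template lives in the subsquid-labs/solana-example repository (the URL is an assumption; use the one referenced by the SQD documentation):

```bash
# Clone the tutorial template and install dependencies
# (repository URL is an assumption; substitute the one from the docs)
git clone https://github.com/subsquid-labs/solana-example
cd solana-example
npm ci
```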
First, we inspect the data available for indexing.
SQD provides a tool (@subsquid/solana-typegen) for retrieving program IDLs off the chain and generating boilerplate ABI code for data decoding. It supports IDLs generated by the Anchor framework and Shank.
Generating the ABI code for Whirlpool is done with a single typegen command.
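A sketch of that command, assuming the CLI shipped with @subsquid/solana-typegen; the address is the Orca Whirlpool program ID, and the exact syntax may vary between SDK versions:

```bash
# <output dir> <program address>#<base name>
npx squid-solana-typegen src/abi whirLbMiicVdio4qvUfM5KAg6Ct8VwpYzGff3uctyCc#whirlpool
```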
Here, src/abi is the destination folder and the whirlpool suffix sets the base name for the generated files.
In src/abi/whirlpool/instructions.ts we find exports of instruction instances for every instruction in the Whirlpool program. Here’s one for the swap instruction:
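A sketch of what the generated export looks like, abridged; the d8 value is the Anchor discriminator derived from sha256('global:swap'), and the account and argument layout follows the Whirlpool IDL:

```ts
// src/abi/whirlpool/instructions.ts (abridged sketch)
export const swap = instruction(
    {
        d8: '0xf8c69e91e17587c8', // first 8 bytes of sha256('global:swap')
    },
    {
        tokenProgram: 0,
        tokenAuthority: 1,
        whirlpool: 2,
        // ...remaining account indexes omitted
    },
    struct({
        amount: u64,
        otherAmountThreshold: u64,
        sqrtPriceLimit: u128,
        amountSpecifiedIsInput: bool,
        aToB: bool,
    }),
)
```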
Here, d8 holds the eight bytes that the relevant instruction data starts with. In the next section we’ll use this discriminator to request pre-filtered swap data from the Soldexer API.
“Data source” is a component that defines what data should be retrieved and where to get it. Here’s how we configure it to retrieve the swap instruction data of the Whirlpool program:
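A sketch of the configuration, assuming the DataSourceBuilder from @subsquid/solana-stream and the whirlpool ABI module generated above; builder method names and the field selection are abridged and may differ slightly between SDK versions:

```ts
import {DataSourceBuilder} from '@subsquid/solana-stream'
import * as whirlpool from './abi/whirlpool'

const dataSource = new DataSourceBuilder()
    // public SQD Network portal for Solana mainnet
    .setPortal('https://portal.sqd.dev/datasets/solana-mainnet')
    // first slot to be indexed
    .setBlockRange({from: 300_000_000})
    // request only the fields the handler actually needs (abridged)
    .setFields({
        block: {timestamp: true},
        transaction: {signatures: true},
        instruction: {programId: true, accounts: true, data: true},
        tokenBalance: {preAmount: true, postAmount: true, preOwner: true, postOwner: true},
    })
    .addInstruction({
        where: {
            programId: [whirlpool.programId],      // Whirlpool program only
            d8: [whirlpool.instructions.swap.d8],  // discriminator of the swap instruction
            isCommitted: true,                     // skip failed transactions
        },
        include: {
            innerInstructions: true,               // inner instructions
            transaction: true,                     // parent transactions
            transactionTokenBalances: true,        // token balance updates of those transactions
        },
    })
    .build()
```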
Here,

- 'https://portal.sqd.dev/datasets/solana-mainnet' is the address of the public SQD Network portal for Solana mainnet.
- 300_000_000 is the first Solana slot to be indexed by the processor.
- The argument of addInstruction() is a set of filters that tells the processor to retrieve all data on Whirlpool program instructions whose discriminator matches the hash of the <namespace>:<instruction> string of the swap instruction. It also instructs the processor to fetch some related data: inner instructions, parent transactions and token balance updates due to those transactions.
Aside from instructions, it’s also possible to request transactions, execution logs, token balance updates for SOL and other tokens, and reward records, plus the related data for each of these data item types.
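For illustration only, a hedged sketch of requesting execution logs in the same way; the addLog() method and its filter fields are assumptions mirroring addInstruction(), so check the solana-stream reference before relying on them:

```ts
// Hypothetical: request Whirlpool execution logs together with their parent transactions
const logSource = new DataSourceBuilder()
    .setPortal('https://portal.sqd.dev/datasets/solana-mainnet')
    .addLog({
        where: {programId: [whirlpool.programId]},  // logs emitted by the Whirlpool program
        include: {transaction: true},               // also fetch the parent transactions
    })
    .build()
```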
Next, we fetch the filtered data from the Soldexer API / SQD Portal, transform it, and save the result to our destination of choice: a PostgreSQL database.
The run function from the @subsquid/batch-processor package fetches batches of data from the Soldexer API and runs a transformation function (called a batch handler) on each batch. Here’s how it’s called:
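A sketch of the call, assuming a TypeormDatabase from @subsquid/typeorm-store as the data sink:

```ts
import {run} from '@subsquid/batch-processor'
import {TypeormDatabase} from '@subsquid/typeorm-store'

const database = new TypeormDatabase()

run(dataSource, database, async ctx => {
    // ctx.blocks holds the batch of data; ctx.store persists entities
    // transformation logic goes here
})
```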
Here,

- dataSource is the data source object described in the previous section
- database is a compatible data sink implementation
- async ctx => { ... } is the batch handler definition
- ctx is a batch context object that exposes a batch of data (at ctx.blocks) and any data persistence facilities derived from database (at ctx.store)

And here is how this call looks in our Whirlpool data processor:
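A condensed sketch of the handler under the assumptions above; the Exchange entity, its fields and the ABI import paths are illustrative, so refer to the full code linked below for the real version:

```ts
import {run} from '@subsquid/batch-processor'
import * as whirlpool from './abi/whirlpool'
import * as tokenProgram from './abi/tokenProgram' // SPL Token ABI shipped with the template (assumed path)
import {Exchange} from './model'                   // illustrative entity

run(dataSource, database, async ctx => {
    let exchanges: Exchange[] = []

    for (let block of ctx.blocks) {
        for (let ins of block.instructions) {
            // keep only swap instructions of the Whirlpool program
            if (ins.programId !== whirlpool.programId) continue
            if (ins.d8 !== whirlpool.instructions.swap.d8) continue

            // the swap's two inner token transfers carry the actual amounts
            let srcTransfer = tokenProgram.instructions.transfer.decode(ins.inner[0])
            let destTransfer = tokenProgram.instructions.transfer.decode(ins.inner[1])

            exchanges.push(new Exchange({
                id: ins.id,
                fromAccount: srcTransfer.accounts.source,
                toAccount: destTransfer.accounts.destination,
                fromAmount: srcTransfer.data.amount,
                toAmount: destTransfer.data.amount,
            }))
        }
    }

    await ctx.store.insert(exchanges)
})
```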
The handler goes through all the instructions in each block, verifies that they indeed are swap instructions from the Whirlpool program, and decodes the data of each inner instruction.
Then it retrieves the info on the exchange from the decoded inner instructions, including input and output tokens, source and destination accounts, and amounts.
The decoding is done with the tokenProgram.instructions.transfer.decode function from the TypeScript ABI provided in the template.
At this point the squid is ready for its first test run.
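A typical sequence for squid templates; the exact scripts and the entry point path are assumptions, so check the template's package.json and docker-compose.yml:

```bash
docker compose up -d                 # start the local PostgreSQL container
npm run build                        # compile the TypeScript sources to lib/
npx squid-typeorm-migration apply    # create the database schema
node -r dotenv/config lib/main.js    # run the processor
```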
You can verify that the data is being stored in the database by querying it directly.
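For example, assuming the Compose project uses the default db service naming and the data lands in an exchange table (both are assumptions based on a typical template setup):

```bash
docker exec "$(basename "$(pwd)")-db-1" psql -U postgres -c 'SELECT count(*) FROM exchange'
```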
The full code can be found here.