This article is a practical guide to uploading a metadata directory to the Web3.Storage IPFS gateway. For a deeper look at how IPFS can improve your project's scalability as a metadata endpoint, we recommend reading our IPFS performance article.
You will need:
- A metadata directory compatible with the Immutable metadata schema
- A Web3.Storage API key
- Access to a GitHub code repository
Uploading a large directory of metadata files with the straightforward methods can be problematic: beyond roughly 10k files, both the GUI and the API may run into issues. In this article, we share the most reliable method we have found to upload a directory to the Web3.Storage gateway service.
To interact with Web3.Storage, we will use the web3.storage Node package. You can find a working example in the following GitHub repository.
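The core of the upload is the client's put() call, which packs the directory into a CAR and uploads it in chunks. The following is a minimal sketch of that flow, based on the web3.storage package's documented Web3Storage, getFilesFromPath, and put APIs; it is illustrative and not the repository's actual script:

```javascript
// Sketch only: wraps the documented web3.storage client API.
// The package is loaded lazily so the function can be defined
// even where web3.storage is not installed.
async function uploadDirectory(dirPath, token) {
  const { Web3Storage, getFilesFromPath } = await import('web3.storage');
  const client = new Web3Storage({ token });
  const files = await getFilesFromPath(dirPath); // reads every file in the tree
  // put() uploads the files as one directory and resolves to the root CID
  return client.put(files, { wrapWithDirectory: true });
}
```

The root CID returned by put() is what you later use to reach the directory through the gateway.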
For demonstration purposes, we prepared five metadata directories of various sizes (10k, 20k, 80k, 120k, and 200k files) using the prepare_dummy_metadata.py script from the repository mentioned above.
Step 1: Prepare the Tools
mv .env.example .env
Add your Web3.Storage API key to the .env file.
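The resulting .env file will look something like the fragment below; the variable name is an assumption, so use the exact key defined in the repository's .env.example:

```
# .env — variable name is illustrative; match the key in .env.example
WEB3_STORAGE_API_KEY=<your-web3.storage-api-token>
```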
Step 2: Run the script
node web3_storage/uploadDirectory.js <PathToMetadataDir>
Optionally, adjust the script's retry parameters if you encounter frequent rate limiting:
const MAX_RETRIES = 3; // Maximum number of retry attempts
const RETRY_DELAY = 2000; // Delay in milliseconds between retries
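The retry behavior driven by these two constants can be sketched as a self-contained wrapper (illustrative, not the script's exact code): each failed attempt waits RETRY_DELAY milliseconds before retrying, giving up after MAX_RETRIES attempts:

```javascript
const MAX_RETRIES = 3; // Maximum number of retry attempts
const RETRY_DELAY = 2000; // Delay in milliseconds between retries

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Run `fn`, retrying up to MAX_RETRIES times with RETRY_DELAY ms pauses.
// Rethrows the last error if every attempt fails.
async function withRetry(fn) {
  let lastError;
  for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < MAX_RETRIES) await sleep(RETRY_DELAY);
    }
  }
  throw lastError;
}
```

A hypothetical usage would be `const cid = await withRetry(() => client.put(files))`, so a transient rate-limit timeout does not abort the whole upload.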
Step 3: Access the directory via the gateway specific URL
Here are the dummy uploads we generated for our performance test:
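Each upload is addressable by its root CID through Web3.Storage's public w3s.link subdomain gateway. A small helper to build the URL for a file inside an uploaded directory (the CID used here is a placeholder, not a real upload):

```javascript
// Build a gateway URL in Web3.Storage's subdomain style:
// https://<rootCid>.ipfs.w3s.link/<pathInsideDirectory>
function gatewayUrl(rootCid, filePath = '') {
  return `https://${rootCid}.ipfs.w3s.link/${filePath}`;
}

// Placeholder CID for illustration only
console.log(gatewayUrl('bafybeifexamplecid', '1'));
// → https://bafybeifexamplecid.ipfs.w3s.link/1
```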
A built-in retry mechanism helps avoid failures caused by rate-limiting timeouts. Here is example output for a directory of 80k files: a timeout occurred, but the process still completed successfully.
Alternatively, you can explore the method described in the following [Link to Pinata article]. Note, however, that the web3.storage service requires prior approval before mass pinning via the API token is enabled.