
# Kubo changelog v0.35

This release was brought to you by the Shipyard team.

## v0.35.0
- Overview
- 🔦 Highlights
  - Opt-in HTTP Retrieval client
  - Dedicated `Reprovider.Strategy` for MFS
  - Experimental support for MFS as a FUSE mount point
  - Grid view in WebUI
  - Enhanced DAG-Shaping Controls
  - `Datastore` Metrics Now Opt-In
  - Improved performance of data onboarding
  - New `Bitswap` configuration options
  - New `Routing` configuration options
  - New Pebble database format config
  - New environment variables
- 📦️ Important dependency updates
- 📝 Changelog
- 👨‍👩‍👧‍👦 Contributors
### Overview
This release brings significant UX and performance improvements to data onboarding, providing, and retrieval systems.

New configuration options let you customize the shape of UnixFS DAGs generated during data import, control the scope of DAGs announced on the Amino DHT, select which delegated routing endpoints are queried, and choose whether to enable HTTP retrieval alongside Bitswap over Libp2p.

Continue reading for more details.
### 🔦 Highlights

#### Opt-in HTTP Retrieval client
This release adds experimental support for retrieving blocks directly over HTTPS (HTTP/2), complementing the existing Bitswap over Libp2p.

The opt-in client enables Kubo to use delegated routing results with `/tls/http` multiaddrs, connecting to HTTPS servers that support the Trustless HTTP Gateway's Block Responses (`?format=raw`, `application/vnd.ipld.raw`). Fetching blocks via HTTPS (HTTP/2) simplifies infrastructure and reduces costs for storage providers by leveraging HTTP caching and CDNs.

To enable this feature for testing and feedback, set:

```console
$ ipfs config --json HTTPRetrieval.Enabled true
```

See `HTTPRetrieval` for more details.
#### Dedicated `Reprovider.Strategy` for MFS
The Mutable File System (MFS) in Kubo is a UnixFS filesystem managed with `ipfs files` commands. It supports familiar file operations like `cp` and `mv` within a folder-tree structure, automatically updating a MerkleDAG and a "root CID" that reflects the current MFS state. Files in MFS are protected from garbage collection, offering a simpler alternative to `ipfs pin`. This makes it a popular choice for tools like IPFS Desktop and the WebUI.

Previously, the `pinned` reprovider strategy required manual pin management: each dataset update meant pinning the new version and unpinning the old one. Now, new strategies, `mfs` and `pinned+mfs`, let users limit announcements to data explicitly placed in MFS. This simplifies updating datasets and announcing only the latest version to the Amino DHT.

Users relying on the `pinned` strategy can switch to `pinned+mfs` and use MFS alone to manage updates and announcements, eliminating the need for manual pinning and unpinning. We hope this makes it easier to publish just the data that matters to you.

See `Reprovider.Strategy` for more details.
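As a minimal sketch, switching an existing node from the `pinned` strategy to the new combined strategy looks like this (a daemon restart is required for the change to take effect):

```console
$ ipfs config Reprovider.Strategy pinned+mfs
```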
#### Experimental support for MFS as a FUSE mount point
The MFS root (the filesystem behind the `ipfs files` API) is now available as a read/write FUSE mount point at `Mounts.MFS`. This filesystem is mounted in the same way as `Mounts.IPFS` and `Mounts.IPNS` when running `ipfs mount` or `ipfs daemon --mount`.

Note that the operations supported by the MFS FUSE mountpoint are limited, since MFS doesn't store file attributes.

See `Mounts` and `docs/fuse.md` for more details.
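For illustration, a quick way to try it on Linux (a sketch, assuming `Mounts.MFS` is left at a default mountpoint path of `/mfs` and FUSE is installed; check your `Mounts` config for the actual path):

```console
$ sudo mkdir -p /mfs && sudo chown "$USER" /mfs   # create the mountpoint
$ ipfs daemon --mount &                            # mount IPFS, IPNS, and MFS
$ ipfs files mkdir /demo                           # write via the MFS API...
$ ls /mfs                                          # ...and read via FUSE
demo
```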
#### Grid view in WebUI

The WebUI, accessible at http://127.0.0.1:5001/webui/, now includes support for the grid view on the Files screen.
#### Enhanced DAG-Shaping Controls

This release advances CIDv1 support by introducing fine-grained control over UnixFS DAG shaping during data ingestion with the `ipfs add` command.

Wider DAG trees (more links per node, higher fanout, larger thresholds) are beneficial for large files and directories with many files, reducing tree depth and lookup latency in high-latency networks, but they increase node size, straining memory and CPU on resource-constrained devices. Narrower trees (lower link count, lower fanout, smaller thresholds) are preferable for smaller directories, frequent updates, or low-power clients, minimizing overhead and ensuring compatibility, though they may increase traversal steps for very large datasets.

Kubo now allows users to act on these tradeoffs and customize the width of the DAG created by the `ipfs add` command.

##### New DAG-Shaping `ipfs add` Options
Three new options allow you to override default settings for specific import operations (see the example after this list):

- `--max-file-links`: Sets the maximum number of child links for a single file chunk.
- `--max-directory-links`: Defines the maximum number of child entries in a "basic" (single-block) directory.
  - Note: Directories exceeding this limit or the `Import.UnixFSHAMTDirectorySizeThreshold` are converted to HAMT-based (sharded across multiple blocks) structures.
- `--max-hamt-fanout`: Specifies the maximum number of child nodes for HAMT internal structures.
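For example, a one-off import that opts into the wider experimental values used by the `test-cid-v1-wide` profile (the `./mydata` directory is a hypothetical input):

```console
$ ipfs add --cid-version 1 --max-file-links 1024 --max-hamt-fanout 1024 -r ./mydata
```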
##### Persistent DAG-Shaping `Import.*` Configuration

You can set default values for these options using the following configuration settings (a sketch follows the list):

- `Import.UnixFSFileMaxLinks`
- `Import.UnixFSDirectoryMaxLinks`
- `Import.UnixFSHAMTDirectoryMaxFanout`
- `Import.UnixFSHAMTDirectorySizeThreshold`
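A minimal sketch of persisting one of these defaults instead of passing a flag on every import (the value here is illustrative):

```console
$ ipfs config --json Import.UnixFSFileMaxLinks 1024
```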
##### Updated DAG-Shaping `Import` Profiles

The release updated configuration profiles to incorporate these new `Import.*` settings:

- Updated Profile: `test-cid-v1` now includes current defaults as explicit `Import.UnixFSFileMaxLinks=174`, `Import.UnixFSDirectoryMaxLinks=0`, `Import.UnixFSHAMTDirectoryMaxFanout=256` and `Import.UnixFSHAMTDirectorySizeThreshold=256KiB`
- New Profile: `test-cid-v1-wide` adopts experimental directory DAG-shaping defaults, increasing the maximum file DAG width from 174 to 1024, HAMT fanout from 256 to 1024, and raising the HAMT directory sharding threshold from 256KiB to 1MiB, aligning with 1MiB file chunks.
- Feedback: Try it out and share your thoughts at discuss.ipfs.tech/t/should-we-profile-cids or ipfs/specs#499.
> [!TIP]
> Apply one of the CIDv1 test profiles with `ipfs config profile apply test-cid-v1[-wide]`.
#### `Datastore` Metrics Now Opt-In
To reduce overhead in the default configuration, datastore metrics are no longer enabled by default when initializing a Kubo repository with `ipfs init`.

Metrics prefixed with `<dsname>_datastore` (e.g., `flatfs_datastore_...`, `leveldb_datastore_...`) are not exposed unless explicitly enabled. For a complete list of affected default metrics, refer to `prometheus_metrics_added_by_measure_profile`.

Convenience opt-in profiles can be enabled at initialization time with `ipfs init --profile`: `flatfs-measure`, `pebbleds-measure`, `badgerds-measure`.

It is also possible to manually add the `measure` wrapper. See examples in `Datastore.Spec` documentation.
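As a sketch, after opting in (e.g., with `ipfs init --profile flatfs-measure`), the datastore metrics appear on the daemon's Prometheus endpoint alongside the other Kubo metrics:

```console
$ curl -s http://127.0.0.1:5001/debug/metrics/prometheus | grep flatfs_datastore | head
```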
#### Improved performance of data onboarding

This Kubo release significantly improves both the speed of ingesting data via `ipfs add` and announcing newly produced CIDs to the Amino DHT.

##### Fast `ipfs add` in online mode

Adding a large directory of data while the `ipfs daemon` was running in online mode used to take a long time. A significant amount of this time was spent writing to and reading from the persisted provider queue. Due to this, many users had to shut down the daemon and perform data import in offline mode. This release fixes this known limitation, significantly improving the speed of `ipfs add`.
> [!IMPORTANT]
> Performing `ipfs add` of a 10GiB file used to take about 30 minutes. Now it takes close to 30 seconds.
Kubo v0.34:

```console
$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-100M > /dev/null
 100.00 MiB / 100.00 MiB [=====================================================================] 100.00%
real    0m6.464s

$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-1G > /dev/null
 1000.00 MiB / 1000.00 MiB [===================================================================] 100.00%
real    1m10.542s

$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-10G > /dev/null
 10.00 GiB / 10.00 GiB [=======================================================================] 100.00%
real    24m5.744s
```
Kubo v0.35:

```console
$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-100M > /dev/null
 100.00 MiB / 100.00 MiB [=====================================================================] 100.00%
real    0m0.326s

$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-1G > /dev/null
 1.00 GiB / 1.00 GiB [=========================================================================] 100.00%
real    0m2.819s

$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-10G > /dev/null
 10.00 GiB / 10.00 GiB [=======================================================================] 100.00%
real    0m28.405s
```
##### Optimized, dedicated queue for providing fresh CIDs

Since Kubo v0.33.0, Bitswap no longer advertises newly added and received blocks to the DHT. Since then, `boxo/provider` has been responsible for both the first-time provide and the recurring reprovide logic. Prior to v0.35.0, provides and reprovides were handled together in batches, leading to delays in initial advertisements (provides).

Provides and reprovides now have separate queues, allowing for immediate provide of new CIDs and optimized batching of reprovides.
##### New `Provider` configuration options

This change introduces new configuration options:

- `Provider.Enabled` is a global flag for disabling both the Provider and Reprovider systems (announcing new/old CIDs to the Amino DHT).
- `Provider.WorkerCount` limits the number of concurrent provide operations, allowing fine-tuning of the trade-off between announcement speed and system load when announcing new CIDs.
- Removed `Experimental.StrategicProviding`. Superseded by `Provider.Enabled`, `Reprovider.Interval` and `Reprovider.Strategy`.
> [!TIP]
> Users who need to provide large volumes of content immediately should consider setting `Routing.AcceleratedDHTClient` to `true`. If that is not enough, consider adjusting `Provider.WorkerCount` to a higher value.
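A minimal sketch of that tuning (the worker-count value here is illustrative, not a recommendation):

```console
$ ipfs config --json Routing.AcceleratedDHTClient true
$ ipfs config --json Provider.WorkerCount 64
```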
##### Deprecated `ipfs stats provider`

Since the `ipfs stats provider` command displayed statistics for both provides and reprovides, it is no longer relevant after separating the two queues.

The successor command is `ipfs stats reprovide`, showing the same statistics, but for reprovides only.

> [!NOTE]
> `ipfs stats provider` still works, but is marked as deprecated and will be removed in a future release. Be mindful that the command provides only statistics about reprovides (similar to `ipfs stats reprovide`) and not the new provide queue (this will be fixed as part of a wider refactor planned for a future release).
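For example, checking reprovide progress with the successor command:

```console
$ ipfs stats reprovide
```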
#### New `Bitswap` configuration options

- `Bitswap.Libp2pEnabled` determines whether Kubo will use Bitswap over libp2p (both client and server).
- `Bitswap.ServerEnabled` controls whether Kubo functions as a Bitswap server to host and respond to block requests (see the sketch after this list).
- `Internal.Bitswap.ProviderSearchMaxResults` adjusts the maximum number of providers the Bitswap client should aim for before it stops searching for new ones.
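As an illustration, a retrieval-only node that still fetches over Bitswap but never serves blocks could set (a sketch, not a recommendation):

```console
$ ipfs config --json Bitswap.ServerEnabled false
```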
#### New `Routing` configuration options

- `Routing.IgnoreProviders` allows ignoring specific peer IDs when they are returned by the content routing system as providers of content (a hypothetical example follows this list).
  - This simplifies testing `HTTPRetrieval.Enabled` in setups where Bitswap over Libp2p and HTTP retrieval are served under different PeerIDs.
- `Routing.DelegatedRouters` allows customizing the HTTP routers used by Kubo when `Routing.Type` is set to `auto` or `autoclient`.
  - Users are now able to adjust the default routing system and directly query custom routers for increased resiliency, or when a dataset is too big and its CIDs are not announced on the Amino DHT.
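A hypothetical sketch of ignoring a provider (the peer ID below is a placeholder, not a real peer):

```console
$ ipfs config --json Routing.IgnoreProviders '["12D3KooWExamplePeerIdPlaceholder"]'
```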
> [!TIP]
> For example, to use Pinata's routing endpoint in addition to IPNI at `cid.contact`:
>
> ```console
> $ ipfs config --json Routing.DelegatedRouters '["https://cid.contact","https://indexer.pinata.cloud"]'
> ```
#### New Pebble database format config

This Kubo release provides node operators with more control over Pebble's `FormatMajorVersion`. This allows testing a new Kubo release without automatically migrating Pebble datastores, keeping the ability to switch back to an older Kubo.

When IPFS is initialized to use the pebbleds datastore (opt-in via `ipfs init --profile=pebbleds`), the latest pebble database format is configured in the pebble datastore config as `"formatMajorVersion"`. Setting this in the datastore config prevents automatically upgrading to the latest available version when Kubo is upgraded. If a later version becomes available, the Kubo daemon prints a startup message to indicate this. The user can then update the config to use the latest format when they are certain a downgrade will not be necessary.

Without the `"formatMajorVersion"` in the pebble datastore config, the database format is automatically upgraded to the latest version. If this happens, it is possible that a downgrade back to the previous version of Kubo will not work if the new format is not compatible with the pebble datastore in the previous version of Kubo.

When installing a new version of Kubo while `"formatMajorVersion"` is configured, automatic repository migration (`ipfs daemon` with `--migrate=true`) does not upgrade it to the latest available version. This is done because a user may have reasons not to upgrade the pebble database format, and may want to be able to downgrade Kubo if something else is not working in the new version. If the configured pebble database format in the old Kubo is not supported in the new Kubo, then the configured version must be updated and the old Kubo run, before installing the new Kubo.

See other caveats and configuration options at `kubo/docs/datastores.md#pebbleds`.
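For orientation, the knob lives in the pebbleds entry of `Datastore.Spec`; a hypothetical fragment pinning the format might look like the following (the version number is illustrative only; consult `kubo/docs/datastores.md#pebbleds` for the valid values and the exact spec shape):

```json
{
  "type": "pebbleds",
  "path": "pebbleds",
  "formatMajorVersion": 16
}
```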
#### New environment variables

The `environment-variables.md` documentation was extended with two new features:

##### Improved Log Output Setting

When stderr and/or stdout options are configured or specified by the `GOLOG_OUTPUT` environment variable, logs go only to the output(s) specified. For example:
- `GOLOG_OUTPUT="stderr"` logs only to stderr
- `GOLOG_OUTPUT="stdout"` logs only to stdout
- `GOLOG_OUTPUT="stderr+stdout"` logs to both stderr and stdout
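A quick sketch of using it for a single daemon run, capturing the logs in a file:

```console
$ GOLOG_OUTPUT="stderr" ipfs daemon 2> kubo.log
```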
##### New Repo Lock Optional Wait

The environment variable `IPFS_WAIT_REPO_LOCK` specifies the amount of time to wait for the repo lock. Set the value of this variable to a string that can be parsed as a Go `time.Duration`. For example:

```console
IPFS_WAIT_REPO_LOCK="15s"
```
If the lock cannot be acquired because another process holds it, and `IPFS_WAIT_REPO_LOCK` is set to a valid value, then acquiring the lock is retried every second until the lock is acquired or the specified wait time has elapsed.
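For example, starting a daemon that waits up to 15 seconds for another process to release the repo lock before giving up:

```console
$ IPFS_WAIT_REPO_LOCK="15s" ipfs daemon
```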
### 📦️ Important dependency updates

- update `boxo` to v0.30.0
- update `ipfs-webui` to v4.7.0
- update `go-ds-pebble` to v0.5.0
  - update `pebble` to v2.0.3
- update `go-libp2p-pubsub` to v0.13.1
- update `go-log` to v2.6.0