go-ipfs 0.10.0 Release
We're happy to announce go-ipfs 0.10.0. This release brings some big changes to the IPLD internals of go-ipfs that make working with non-UnixFS DAGs easier than ever. There are also a variety of new commands and configuration options available.
As usual, this release includes important fixes, some of which may be critical for security. Unless the fix addresses a bug being exploited in the wild, the fix will not be called out in the release notes. Please make sure to update ASAP. See our release process for details.
🛠 TLDR: BREAKING CHANGES
- `ipfs dag get`
  - default output changed to `dag-json`
  - dag-pb (e.g. unixfs) field names changed - impacts userland code that works with `dag-pb` objects returned by `dag get`
  - no longer emits an additional new-line character at the end of the data output
- `ipfs dag put`
  - defaults changed to reduce ambiguity and surprises: input is now assumed to be `dag-json`, and data is serialized to `dag-cbor` at rest. `--format` and `--input-enc` were removed and replaced with `--store-codec` and `--input-codec`
  - codec names now match the ones defined in the multicodec table
  - dag-pb (e.g. unixfs) field names changed - impacts userland code that works with `dag-pb` objects stored via `dag put`
Keep reading to learn more details.
🔦 Highlights
🌲 IPLD Levels Up
The handling of data serialization, as well as many aspects of DAG traversal and pathing, has been migrated from older libraries, including go-merkledag and go-ipld-format, to the new go-ipld-prime library and its components. This allows us to use many of the newer tools afforded by go-ipld-prime: stricter and more uniform codec implementations, support for additional (pluggable) codecs, and some minor performance improvements.
This is a significant refactor of a core component that touches many parts of IPFS, and it does come with some breaking changes:
- IPLD plugins:
  - The `PluginIPLD` interface has been changed to utilize go-ipld-prime. There is a demonstration of the change in the bundled git plugin.
- The semantics of `dag put` and `dag get` change:
  - `dag get` now takes the `output-codec` option, which accepts a multicodec name used to encode the output. By default this is `dag-json`, which is a strict and deterministic subset of JSON created by the IPLD team. Users may notice differences from the previously plain Go JSON output, particularly where bytes are concerned, which are now encoded using a form similar to CIDs: `{"/":{"bytes":"unpadded-base64-bytes"}}` rather than the previously Go-specific plain padded base64 string. See the dag-json specification for an explanation of these forms.
  - `dag get` no longer prints an additional new-line character at the end of the encoded block output. This means that the output as presented by `dag get` is the exact bytes of the requested node. A round-trip of such bytes back in through `dag put` using the same codec should result in the same CID.
  - `dag put` uses the `input-codec` option to specify the multicodec name of the format data is being provided in, and the `store-codec` option to specify the multicodec name of the format the data should be stored in at rest. These formerly defaulted to `json` and `cbor` respectively. They now default to `dag-json` and `dag-cbor` respectively, but may be changed to any supported codec (bundled or loaded via plugin) by its multicodec name.
  - The `json` and `cbor` multicodec names (as used by the `input-enc` and `format` options) are no longer aliases for `dag-json` and `dag-cbor` respectively. Instead, they now refer to their proper multicodec types. `cbor` refers to a plain CBOR format, which will not encode CIDs and does not have strict deterministic encoding rules. `json` is a plain JSON format, which also won't encode CIDs and will encode bytes in the Go-specific padded base64 string format rather than the dag-json method of byte encoding. See https://ipld.io/specs/codecs/ for more information on IPLD codecs.
  - `protobuf` is no longer used as the codec name for `dag-pb`.
  - The codec name `raw` is used to mean Bytes in the IPLD Data Model.
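The difference in byte encoding can be sketched in a few lines of Python. This is an illustrative comparison (not go-ipfs code) of the old plain-JSON byte form versus the dag-json form, assuming the compact, whitespace-free layout that dag-json mandates:

```python
import base64
import json

data = b"hello"

# Old behaviour: bytes rendered as a plain, Go-style padded base64 string
legacy = json.dumps(base64.b64encode(data).decode())

# dag-json: bytes wrapped in a {"/": {"bytes": ...}} node, base64 without
# padding, serialized with no extra whitespace
dag_json = json.dumps(
    {"/": {"bytes": base64.b64encode(data).decode().rstrip("=")}},
    separators=(",", ":"),
)

print(legacy)    # "aGVsbG8="
print(dag_json)  # {"/":{"bytes":"aGVsbG8"}}
```

The same logical bytes thus produce different output text, which is why userland code that parses `dag get` output needs updating.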
- UnixFS refactor. The dag-pb codec, which is used to encode UnixFS data for IPFS, is now represented through the `dag` API in a form that mirrors the protobuf schema used to define the binary format. This unifies the implementations and specification of dag-pb across the IPLD and IPFS stacks. Previously, additional layers of code for file and directory handling within IPFS between protobuf serialization and UnixFS obscured the protobuf representation. Much of this code has now been replaced, and there are fewer layers of transformation. This means that interacting with dag-pb data via the `dag` API will use different forms:
  - Previously, using `dag get` on a dag-pb block would present the block serialized as JSON as `{"data":"padded-base64-bytes","links":[{"Name":"foo","Size":100,"Cid":{"/":"Qm..."}},...]}`.
  - Now, dag-pb data with the dag-json codec for output will be serialized using the data model from the dag-pb specification: `{"Data":{"/":{"bytes":"unpadded-base64-bytes"}},"Links":[{"Name":"foo","Tsize":100,"Hash":{"/":"Qm..."}},...]}`. Aside from the change in byte formatting, most field names have changed: `data` → `Data`, `links` → `Links`, `Size` → `Tsize`, `Cid` → `Hash`. Note that this output can now be changed using the `output-codec` option to specify an alternative codec.
  - Similarly, using `dag put` with a `store-codec` option of `dag-pb` now requires that the input conform to this dag-pb specified form. Previously, input using `{"data":"...","links":[...]}` was accepted; now it must be `{"Data":"...","Links":[...]}`.
  - Previously it was not possible to use paths to navigate to any of these properties of a dag-pb node; the only possible paths were named links, e.g. `dag get QmFoo/NamedLink` where `NamedLink` was one of the links whose name was `NamedLink`. This functionality remains the same, but by prefixing the path with `/ipld/` we enter data model pathing semantics and can `dag get /ipld/QmFoo/Links/0/Hash` to navigate to links, or `/ipld/QmFoo/Data` to simply retrieve the data section of the node, for example.
  - ℹ See the dag-pb specification for details on the codec and its data model representation.
  - ℹ See this detailed write-up for further background on these changes.
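To make the field renames concrete, here is a hypothetical Python helper (`upgrade_dagpb_json` is not part of go-ipfs) that translates the pre-0.10 `dag get` JSON shape for a dag-pb node into the new dag-json form:

```python
def upgrade_dagpb_json(old: dict) -> dict:
    """Translate the legacy {"data": ..., "links": [...]} shape into the
    dag-pb data model form used by dag-json output."""
    return {
        # bytes move into a {"/": {"bytes": ...}} node, base64 without padding
        "Data": {"/": {"bytes": old["data"].rstrip("=")}},
        # links: Size -> Tsize, Cid -> Hash
        "Links": [
            {"Name": link["Name"], "Tsize": link["Size"], "Hash": link["Cid"]}
            for link in old.get("links", [])
        ],
    }

old = {"data": "CAE=", "links": [{"Name": "foo", "Size": 100, "Cid": {"/": "Qm..."}}]}
print(upgrade_dagpb_json(old))
# {'Data': {'/': {'bytes': 'CAE'}}, 'Links': [{'Name': 'foo', 'Tsize': 100, 'Hash': {'/': 'Qm...'}}]}
```

Code that round-trips dag-pb objects through `dag get`/`dag put` needs an equivalent mapping in both directions.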
Ⓜ Multibase Command
go-ipfs now provides utility commands for working with multibase:
```
$ echo -n hello | ipfs multibase encode -b base16 > file-mbase16
$ cat file-mbase16
f68656c6c6f
$ ipfs multibase decode file-mbase16
hello
$ cat file-mbase16 | ipfs multibase decode
hello
$ ipfs multibase transcode -b base2 file-mbase16
00110100001100101011011000110110001101111
```
See `ipfs multibase --help` for more examples.
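The two encodings in the session above are easy to reproduce by hand; this minimal Python sketch (illustrative, not the go-ipfs implementation) shows that a multibase string is just a one-character base prefix followed by the encoded payload:

```python
def multibase_base16(data: bytes) -> str:
    # 'f' is the multibase prefix for lowercase base16
    return "f" + data.hex()

def multibase_base2(data: bytes) -> str:
    # '0' is the multibase prefix for base2 (one '0'/'1' per bit)
    return "0" + "".join(f"{byte:08b}" for byte in data)

print(multibase_base16(b"hello"))  # f68656c6c6f
print(multibase_base2(b"hello"))   # 00110100001100101011011000110110001101111
```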
🔨 Bitswap now supports greater configurability
This release adds an `Internal` section to the configuration file that is designed to help advanced users optimize their setups without needing a custom binary. The `Internal` section is not guaranteed to be the same from release to release and may not be covered by migrations. If you use the `Internal` section, make sure to check the config documentation between releases for any changes.
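As a sketch of the shape of this section, a config fragment tuning Bitswap internals might look like the following. The field names and values here are illustrative assumptions; consult docs/config.md in your release for the authoritative list of `Internal` options:

```json
{
  "Internal": {
    "Bitswap": {
      "TaskWorkerCount": 8,
      "EngineTaskWorkerCount": 8,
      "EngineBlockstoreWorkerCount": 128,
      "MaxOutstandingBytesPerPeer": 1048576
    }
  }
}
```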
🐚 Programmatic shell completions command
`ipfs commands completion bash` will generate a bash completion script for go-ipfs commands.
📜 Profile collection command
Performance profiles can now be collected using `ipfs diag profile`. If you need to do some debugging or have an issue to submit, the collected profiles are very useful to have around.
🍎 macOS notarized binaries
The go-ipfs and related migration binaries (for both Intel and Apple Silicon) are now signed and notarized to make macOS installation easier.
👨👩👦 Improved MDNS
There is a completed implementation of the revised libp2p MDNS spec. This should result in better MDNS discovery and better local/offline operation.
🚗 CAR import statistics
The `ipfs dag import` command now supports a `--stats` option, which includes the number of imported blocks and their total size in the output.
🕸 Peering command
This release adds an `ipfs swarm peering` command for easy management of the peering subsystem. A peer in the peering subsystem is maintained to be connected at all times, and gets reconnected on disconnect with a back-off.
See `ipfs swarm peering --help` for more details.
✅ Release Checklist
For each RC published in each stage:
- [ ] version string in `version.go` has been updated (in the `release-vX.Y.Z` branch).
- [ ] tag commit with `vX.Y.Z-rcN`
- [ ] upload to dist.ipfs.io
  - [ ] Build: https://github.com/ipfs/distributions#usage.
  - [ ] Pin the resulting release.
  - [ ] Make a PR against ipfs/distributions with the updated versions, including the new hash in the PR comment.
  - [ ] Ask the infra team to update the DNSLink record for dist.ipfs.io to point to the new distribution.
- [ ] cut a pre-release on github and upload the result of the ipfs/distributions build in the previous step.
- [ ] Announce the RC:
  - [ ] On IRC/Matrix (both #ipfs and #ipfs-dev)
  - [ ] To the early testers listed in docs/EARLY_TESTERS.md.
Checklist:
- [ ] Stage 0 - Automated Testing
  - [ ] Fork a new branch (`release-vX.Y.Z`) from `master` and make any further release related changes to this branch. If any "non-trivial" changes (see the footnotes of docs/releases.md for a definition) get added to the release, uncheck all the checkboxes and return to this stage.
  - [ ] Follow the RC release process to cut the first RC.
  - [ ] Bump the version in `version.go` in the `master` branch to `vX.(Y+1).0-dev`.
  - [ ] Automated Testing (already tested in CI) - Ensure that all tests are passing, this includes:
    - [ ] unit, sharness, cross-build, etc (`make test`)
    - [ ] lint (`make test_go_lint`)
    - [ ] interop
      - [ ] go-ipfs-api
      - [ ] go-ipfs-http-client
      - [ ] WebUI
- [ ] Stage 1 - Internal Testing
  - [ ] CHANGELOG.md has been updated
    - use `./bin/mkreleaselog` to generate a nice starter list
  - [ ] Infrastructure Testing:
    - [ ] Deploy new version to a subset of Bootstrappers
    - [ ] Deploy new version to a subset of Gateways
    - [ ] Deploy new version to a subset of Preload nodes
    - [ ] Collect metrics every day. Work with the Infrastructure team to learn of any hiccup
  - [ ] IPFS Application Testing - Run the tests of the following applications:
- [ ] Stage 2 - Community Dev Testing
  - [ ] Reach out to the IPFS early testers listed in docs/EARLY_TESTERS.md for testing this release (check when no more problems have been reported). If you'd like to be added to this list, please file a PR.
  - [ ] Reach out on IRC for beta testers.
  - [ ] Run tests available in the following repos with the latest beta (check when all tests pass):
- [ ] Stage 3 - Community Prod Testing
  - [ ] Documentation
    - [ ] Ensure that CHANGELOG.md is up to date
    - [ ] Ensure that README.md is up to date
    - [ ] Ensure that all the examples we have produced for go-ipfs run without problems
    - [ ] Update HTTP-API Documentation on the Website using https://github.com/ipfs/http-api-docs
    - [ ] Update CLI Documentation on the Website using https://github.com/ipfs-inactive/docs/blob/master/scripts/cli.sh
  - [ ] Invite the wider community through (link to the release issue):
    - [ ] Matrix
    - [ ] Discuss
- [ ] Stage 4 - Release
  - [ ] Final preparation
    - [ ] Verify that version string in `version.go` has been updated.
    - [ ] Merge `release-vX.Y.Z` into the `release` branch.
    - [ ] Tag this merge commit (on the `release` branch) with `vX.Y.Z`.
    - [ ] Release published
      - [ ] to dist.ipfs.io
      - [ ] to npm-go-ipfs
      - [ ] to chocolatey
      - [ ] to snap
      - [ ] to github
      - [ ] to arch (flag it out of date)
    - [ ] Cut a new ipfs-desktop release
  - [ ] Publish a Release Blog post (at minimum, a c&p of this release issue with all the highlights, API changes, link to changelog and thank yous)
  - [ ] Broadcasting (link to blog post)
    - [ ] Matrix
    - [ ] discuss.ipfs.io
    - [ ] Announce it on the IPFS Users Mailing List
- [ ] Post-Release
  - [ ] Merge the `release` branch back into `master`, ignoring the changes to `version.go` (keep the `-dev` version from master).
  - [ ] Create an issue using this release issue template for the next release.
  - [ ] Make sure any last-minute changelog updates from the blog post make it back into the CHANGELOG.
⁉️ Do you have questions?
The best place to ask your questions about IPFS, how it works, and what you can do with it is at discuss.ipfs.io. We are also available at the `#lobby:ipfs.io` Matrix channel, which is bridged with other chat platforms.