
Merge pull request #4835 from ipfs/fix/typos

misc: Fix a few typos
Whyrusleeping authored on 2018-04-20 23:35:58 +09:00, committed by GitHub
39 changed files with 75 additions and 76 deletions

View File

@@ -130,7 +130,7 @@ remove them before updating.
- Update badgerds to fix i386 windows build ([ipfs/go-ipfs#4464](https://github.com/ipfs/go-ipfs/pull/4464))
- Only construct bitswap event loggable if necessary ([ipfs/go-ipfs#4533](https://github.com/ipfs/go-ipfs/pull/4533))
- Ensure that flush on the mfs root flushes its directory ([ipfs/go-ipfs#4509](https://github.com/ipfs/go-ipfs/pull/4509))
-- Fix defered unlock of pin lock in AddR ([ipfs/go-ipfs#4562](https://github.com/ipfs/go-ipfs/pull/4562))
+- Fix deferred unlock of pin lock in AddR ([ipfs/go-ipfs#4562](https://github.com/ipfs/go-ipfs/pull/4562))
- Fix iOS builds ([ipfs/go-ipfs#4610](https://github.com/ipfs/go-ipfs/pull/4610))
- Calling repo gc now frees up space with badgerds ([ipfs/go-ipfs#4578](https://github.com/ipfs/go-ipfs/pull/4578))
- Fix leak in bitswap sessions shutdown ([ipfs/go-ipfs#4658](https://github.com/ipfs/go-ipfs/pull/4658))
@@ -282,7 +282,7 @@ wantlist updates to every connected bitswap partner, as well as searching the
DHT for providers less frequently. In future releases we will migrate over more
ipfs commands to take advantage of bitswap sessions. As nodes update to this
and future versions, expect to see idle bandwidth usage on the ipfs network
-go down noticably.
+go down noticeably.
The never ending effort to reduce resource consumption had a few important
updates this release. First, the bitswap sessions changes discussed above will
@@ -332,7 +332,7 @@ You can read more on this topic in [Plugin docs](docs/plugins.md)
In order to simplify its integration with fs-repo-migrations, we've switched
the ipfs/go-ipfs docker image from a musl base to a glibc base. For most users
-this will not be noticable, but if you've been building your own images based
+this will not be noticeable, but if you've been building your own images based
off this image, you'll have to update your dockerfile. We recommend a
multi-stage dockerfile, where the build stage is based off of a regular Debian or
other glibc-based image, and the assembly stage is based off of the ipfs/go-ipfs
@@ -1010,14 +1010,14 @@ This is the first Release Candidate. Unless there are vulnerabilities or regress
### 0.4.2 - 2016-05-17
-This is a patch release which fixes perfomance and networking bugs in go-libp2p,
+This is a patch release which fixes performance and networking bugs in go-libp2p,
You should see improvements in CPU and RAM usage, as well as speed of object lookups.
There are also a few other nice improvements.
* Notable Fixes
* Set a deadline for dialing attempts. This prevents a node from accumulating
failed connections. (@whyrusleeping)
-* Avoid unneccessary string/byte conversions in go-multihash. (@whyrusleeping)
+* Avoid unnecessary string/byte conversions in go-multihash. (@whyrusleeping)
* Fix a deadlock around the yamux stream muxer. (@whyrusleeping)
* Fix a bug that left channels open, causing hangs. (@whyrusleeping)
* Fix a bug around yamux which caused connection hangs. (@whyrusleeping)
@@ -1077,7 +1077,7 @@ insignificant) features. The primary reason for this release is the listener
hang bugfix that was shipped in the 0.4.0 release.
* Features
-* implementated ipfs object diff (@whyrusleeping)
+* implemented ipfs object diff (@whyrusleeping)
* allow promises (used in get, refs) to fail (@whyrusleeping)
* Tool changes
@@ -1166,7 +1166,7 @@ on the networking layer.
* General
* Add support for HTTP OPTIONS requests. (@lidel)
* Add `ipfs diag cmds` to view active API requests (@whyrusleeping)
-* Add an `IPFS_LOW_MEM` environment veriable which relaxes Bitswap's memory usage. (@whyrusleeping)
+* Add an `IPFS_LOW_MEM` environment variable which relaxes Bitswap's memory usage. (@whyrusleeping)
* The Docker image now lives at `ipfs/go-ipfs` and has been completely reworked. (@lgierth)
* Security fixes
* The gateway path prefix added in v0.3.10 was vulnerable to cross-site
@@ -1351,7 +1351,7 @@ in the future, it will be enabled by default.
This patch update includes a good number of bugfixes, notably, it fixes
builds on windows, and puts newlines between streaming json objects for a
-proper nsjon format.
+proper ndjson format.
* Features
* Writable gateway enabled again (@cryptix)
@@ -1359,7 +1359,7 @@ proper ndjson format.
* Bugfixes
* fix windows builds (@whyrusleeping)
* content type on command responses default to text (@whyrusleeping)
-* add check to makefile to ensure windows builds dont fail silently (@whyrusleeping)
+* add check to makefile to ensure windows builds don't fail silently (@whyrusleeping)
* put newlines between streaming json output objects (@whyrusleeping)
* fix streaming output to flush per write (@whyrusleeping)
* purposely fail builds pre go1.5 (@whyrusleeping)
@@ -1603,13 +1603,13 @@ This patch update fixes various issues, in particular:
* ipns resolution timeout bug fix by @whyrusleeping
* new cluster tests with iptb by @whyrusleeping
* fix log callstack printing bug by @whyrusleeping
-* document bash completiong by @dylanPowers
+* document bash completion by @dylanPowers
### 0.3.2 - 2015-04-22
This patch update implements multicast dns as well as fxing a few test issues.
-* implment mdns peer discovery @whyrusleeping
+* implement mdns peer discovery @whyrusleeping
* fix mounting issues in sharness tests @chriscool
### 0.3.1 - 2015-04-21

View File

@@ -317,7 +317,7 @@ Please direct general questions and help requests to our
[forum](https://discuss.ipfs.io) or our IRC channel (freenode #ipfs).
If you believe you've found a bug, check the [issues list](https://github.com/ipfs/go-ipfs/issues)
-and, if you dont see your problem there, either come talk to us on IRC (freenode #ipfs) or
+and, if you don't see your problem there, either come talk to us on IRC (freenode #ipfs) or
file an issue of your own!
## Contributing

View File

@@ -9,7 +9,7 @@ have_binary() {
type "$1" > /dev/null 2> /dev/null
}
-check_writeable() {
+check_writable() {
printf "" > "$1" && rm "$1"
}
@@ -39,7 +39,7 @@ download() {
test "$#" -eq "2" || die "download requires exactly two arguments, was given $@"
if ! check_writeable "$dl_output"; then
if ! check_writable "$dl_output"; then
die "download error: cannot write to $dl_output"
fi
@@ -65,7 +65,7 @@ unarchive() {
fi
ua_outfile="$ua_outfile$ua_binpostfix"
if ! check_writeable "$ua_outfile"; then
if ! check_writable "$ua_outfile"; then
die "unarchive error: cannot write to $ua_outfile"
fi

View File

@@ -131,7 +131,7 @@ func doInit(out io.Writer, repoRoot string, empty bool, nBitsForKeypair int, con
return err
}
-if err := checkWriteable(repoRoot); err != nil {
+if err := checkWritable(repoRoot); err != nil {
return err
}
@@ -171,7 +171,7 @@ func doInit(out io.Writer, repoRoot string, empty bool, nBitsForKeypair int, con
return initializeIpnsKeyspace(repoRoot)
}
-func checkWriteable(dir string) error {
+func checkWritable(dir string) error {
_, err := os.Stat(dir)
if err == nil {
// dir exists, make sure we can write to it
@@ -188,7 +188,7 @@ func checkWriteable(dir string) error {
}
if os.IsNotExist(err) {
-// dir doesnt exist, check that we can create it
+// dir doesn't exist, check that we can create it
return os.Mkdir(dir, 0775)
}
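For context, when the hunks above are pieced together the renamed helper reads roughly like the self-contained sketch below. The Stat/IsNotExist skeleton matches what the diff shows; the throwaway test-file write in the middle is not shown in this diff and is an assumption:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// checkWritable reports whether dir can be written to, creating it if
// it does not exist yet.
func checkWritable(dir string) error {
	_, err := os.Stat(dir)
	if err == nil {
		// dir exists, make sure we can write to it by creating and
		// removing a throwaway file (assumed detail, not in this diff)
		testfile := filepath.Join(dir, "test")
		fi, err := os.Create(testfile)
		if err != nil {
			return fmt.Errorf("%s is not writable by the current user", dir)
		}
		fi.Close()
		return os.Remove(testfile)
	}
	if os.IsNotExist(err) {
		// dir doesn't exist, check that we can create it
		return os.Mkdir(dir, 0775)
	}
	return err
}

func main() {
	if err := checkWritable(os.TempDir()); err != nil {
		fmt.Println("not writable:", err)
		return
	}
	fmt.Println("writable")
}
```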

View File

@@ -101,7 +101,7 @@ func mainRet() int {
}
}
-// output depends on excecutable name passed in os.Args
+// output depends on executable name passed in os.Args
// so we need to make sure it's stable
os.Args[0] = "ipfs"
@@ -235,7 +235,7 @@ func commandDetails(path []string, root *cmds.Command) (*cmdDetails, error) {
return &details, nil
}
-// commandShouldRunOnDaemon determines, from commmand details, whether a
+// commandShouldRunOnDaemon determines, from command details, whether a
// command ought to be executed on an ipfs daemon.
//
// It returns a client if the command should be executed on a daemon and nil if
@@ -257,7 +257,7 @@ func commandShouldRunOnDaemon(details cmdDetails, req *cmds.Request, root *cmds.
}
// at this point need to know whether api is running. we defer
-// to this point so that we dont check unnecessarily
+// to this point so that we don't check unnecessarily
// did user specify an api to use for this command?
apiAddrStr, _ := req.Options[coreCmds.ApiOption].(string)

View File

@@ -9,7 +9,7 @@ import (
func init() {
supportsFDManagement = true
getLimit = freebsdGetLimit
-setLimit = freebdsSetLimit
+setLimit = freebsdSetLimit
}
func freebsdGetLimit() (int64, int64, error) {
@@ -18,7 +18,7 @@ func freebsdGetLimit() (int64, int64, error) {
return rlimit.Cur, rlimit.Max, err
}
-func freebdsSetLimit(soft int64, max int64) error {
+func freebsdSetLimit(soft int64, max int64) error {
rlimit := unix.Rlimit{
Cur: soft,
Max: max,

View File

@@ -29,7 +29,7 @@ import (
var verbose = false
// Usage prints out the usage of this module.
-// Assumes flags use go stdlib flag pacakage.
+// Assumes flags use go stdlib flag package.
var Usage = func() {
text := `seccat - secure netcat in Go

View File

@@ -26,8 +26,8 @@ type Context struct {
ConstructNode func() (*core.IpfsNode, error)
}
-// GetConfig returns the config of the current Command exection
-// context. It may load it with the providied function.
+// GetConfig returns the config of the current Command execution
+// context. It may load it with the provided function.
func (c *Context) GetConfig() (*config.Config, error) {
var err error
if c.config == nil {
@@ -39,7 +39,7 @@ func (c *Context) GetConfig() (*config.Config, error) {
return c.config, err
}
-// GetNode returns the node of the current Command exection
+// GetNode returns the node of the current Command execution
// context. It may construct it with the provided function.
func (c *Context) GetNode() (*core.IpfsNode, error) {
var err error

View File

@@ -29,7 +29,7 @@ Please look and conform to our [Go Contribution Guidelines](https://github.com/i
All commits in a PR must pass tests. If they don't, fix the commits and/or [squash them](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History#Squashing-Commits) so that they do pass the tests. This should be done so that we can use git-bisect easily.
-We use CI tests which run when you push to your branch. To run the tests locally, you can run any of these: `make build`, `make install`, `make test`, `go test ./...`, depending on what youre looking to do. Generally `go test ./...` is your best bet.
+We use CI tests which run when you push to your branch. To run the tests locally, you can run any of these: `make build`, `make install`, `make test`, `go test ./...`, depending on what you're looking to do. Generally `go test ./...` is your best bet.
### Branch Names

View File

@@ -229,7 +229,7 @@ func setupNode(ctx context.Context, n *IpfsNode, cfg *BuildCfg) error {
internalDag := dag.NewDAGService(bserv.New(n.Blockstore, offline.Exchange(n.Blockstore)))
n.Pinning, err = pin.LoadPinner(n.Repo.Datastore(), n.DAG, internalDag)
if err != nil {
-// TODO: we should move towards only running 'NewPinner' explicity on
+// TODO: we should move towards only running 'NewPinner' explicitly on
// node init instead of implicitly here as a result of the pinner keys
// not being found in the datastore.
// this is kinda sketchy and could cause data loss

View File

@@ -24,7 +24,7 @@ import (
cmds "gx/ipfs/QmfAkMSt9Fwzk48QDJecPcwCUjnf2uG7MLnmCGTp4C6ouL/go-ipfs-cmds"
)
-// ErrDepthLimitExceeded indicates that the max depth has been exceded.
+// ErrDepthLimitExceeded indicates that the max depth has been exceeded.
var ErrDepthLimitExceeded = fmt.Errorf("depth limit exceeded")
const (

View File

@@ -271,7 +271,7 @@ var blockRmCmd = &cmds.Command{
Tagline: "Remove IPFS block(s).",
ShortDescription: `
'ipfs block rm' is a plumbing command for removing raw ipfs blocks.
-It takes a list of base58 encoded multihashs to remove.
+It takes a list of base58 encoded multihashes to remove.
`,
},
Arguments: []cmdkit.Argument{

View File

@@ -185,7 +185,7 @@ var DagGetCmd = &cmds.Command{
Helptext: cmdkit.HelpText{
Tagline: "Get a dag node from ipfs.",
ShortDescription: `
-'ipfs dag get' fetches a dag node from ipfs and prints it out in the specifed
+'ipfs dag get' fetches a dag node from ipfs and prints it out in the specified
format.
`,
},

View File

@@ -27,7 +27,7 @@ root will not be listable, as it is virtual. Access known paths directly.
You may have to create /ipfs and /ipns before using 'ipfs mount':
> sudo mkdir /ipfs /ipns
-> sudo chown ` + "`" + `whoami` + "`" + ` /ipfs /ipns
+> sudo chown $(whoami) /ipfs /ipns
> ipfs daemon &
> ipfs mount
`,
@@ -40,7 +40,7 @@ root will not be listable, as it is virtual. Access known paths directly.
You may have to create /ipfs and /ipns before using 'ipfs mount':
> sudo mkdir /ipfs /ipns
-> sudo chown ` + "`" + `whoami` + "`" + ` /ipfs /ipns
+> sudo chown $(whoami) /ipfs /ipns
> ipfs daemon &
> ipfs mount

View File

@@ -98,7 +98,7 @@ NOTE: List all references recursively by using the flag '-r'.
}
if edges {
if format != "<dst>" {
-res.SetError(errors.New("using format arguement with edges is not allowed"),
+res.SetError(errors.New("using format argument with edges is not allowed"),
cmdkit.ErrClient)
return
}

View File

@@ -127,7 +127,7 @@ func ParsePath(p string) (coreiface.Path, error) {
return &path{path: pp}, nil
}
-// ParseCid parses the path from `c`, retruns the parsed path.
+// ParseCid parses the path from `c`, returns the parsed path.
func ParseCid(c *cid.Cid) coreiface.Path {
return &path{path: ipfspath.FromCid(c), cid: c, root: c}
}

View File

@@ -95,9 +95,9 @@ func (pinType) Indirect() PinLsOption {
// Recursive is an option for Pin.Add which specifies whether to pin an entire
// object tree or just one object. Default: true
-func (pinOpts) Recursive(recucsive bool) PinAddOption {
+func (pinOpts) Recursive(recursive bool) PinAddOption {
return func(settings *PinAddSettings) error {
-settings.Recursive = recucsive
+settings.Recursive = recursive
return nil
}
}
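The hunk above is an instance of Go's functional options pattern. A minimal standalone sketch of how such an option is built and applied follows; the PinAddSettings/PinAddOption shapes mirror the hunk, while everything else is hypothetical glue for illustration:

```go
package main

import "fmt"

// PinAddSettings holds the effective configuration for a pin-add call.
type PinAddSettings struct {
	Recursive bool
}

// PinAddOption mutates PinAddSettings, reporting any validation error.
type PinAddOption func(*PinAddSettings) error

// Recursive returns an option that sets whether an entire object tree
// (rather than a single object) should be pinned.
func Recursive(recursive bool) PinAddOption {
	return func(settings *PinAddSettings) error {
		settings.Recursive = recursive
		return nil
	}
}

func main() {
	// defaults first, then each option mutates the settings struct
	settings := PinAddSettings{Recursive: true}
	for _, opt := range []PinAddOption{Recursive(false)} {
		if err := opt(&settings); err != nil {
			fmt.Println("bad option:", err)
			return
		}
	}
	fmt.Println("recursive:", settings.Recursive) // prints: recursive: false
}
```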

View File

@@ -59,11 +59,11 @@ func ParseInputs(ienc, format string, r io.Reader, mhType uint64, mhLen int) ([]
}
// AddParser adds DagParser under give input encoding and format
-func (iep InputEncParsers) AddParser(ienv, format string, f DagParser) {
-m, ok := iep[ienv]
+func (iep InputEncParsers) AddParser(ienc, format string, f DagParser) {
+m, ok := iep[ienc]
if !ok {
m = make(FormatParsers)
-iep[ienv] = m
+iep[ienc] = m
}
m[format] = f

View File

@@ -4,7 +4,7 @@ IPFS block services.
IPFS nodes will keep local copies of any object that have either been
added or requested locally. Not all of these objects are worth
-preserving forever though, so the node adminstrator can pin objects
+preserving forever though, so the node administrator can pin objects
they want to keep and unpin objects that they don't care about.
Garbage collection sweeps iterate through the local block store

View File

@@ -28,7 +28,7 @@ func Resolve(ctx context.Context, nsys namesys.NameSystem, r *resolver.Resolver,
defer evt.Done()
// resolve ipns paths
-// TODO(cryptix): we sould be able to query the local cache for the path
+// TODO(cryptix): we should be able to query the local cache for the path
if nsys == nil {
evt.Append(logging.LoggableMap{"error": ErrNoNamesys.Error()})
return nil, ErrNoNamesys

View File

@@ -47,7 +47,7 @@ run your daemon with the `--enable-pubsub-experiment` flag. Then use the
---
## Client mode DHT routing
-Allows the dht to be run in a mode that doesnt serve requests to the network,
+Allows the dht to be run in a mode that doesn't serve requests to the network,
saving bandwidth.
### State
@@ -183,7 +183,7 @@ and save it to `~/.ipfs/swarm.key` (If you are using a custom `$IPFS_PATH`, put
it in there instead).
When using this feature, you will not be able to connect to the default bootstrap
-nodes (Since we arent part of your private network) so you will need to set up
+nodes (Since we aren't part of your private network) so you will need to set up
your own bootstrap nodes.
First, to prevent your node from even trying to connect to the default bootstrap nodes, run:
@@ -215,7 +215,7 @@ configured, the daemon will fail to start.
---
## ipfs p2p
-Allows to tunnel TCP connections through Libp2p sterams
+Allows to tunnel TCP connections through Libp2p streams
### State
Experimental
@@ -277,7 +277,7 @@ In order to connect peers QmA and QmB through a relay node QmRelay:
- Both peers should connect to the relay:
`ipfs swarm connect /transport/address/ipfs/QmRelay`
- Peer QmA can then connect to peer QmB using the relay:
-`ipfs swarm connect /ipfs/QmRelay/p2p-cricuit/ipfs/QmB`
+`ipfs swarm connect /ipfs/QmRelay/p2p-circuit/ipfs/QmB`
Peers can also connect with an unspecific relay address, which will
try to dial through known relays:

View File

@@ -53,7 +53,7 @@ addresses (like the example below), then your nodes are online.
}
```
-Next, check to see if the nodes have a connection to eachother. You can do this
+Next, check to see if the nodes have a connection to each other. You can do this
by running `ipfs swarm peers` on one node, and checking for the other nodes
peer ID in the output. If the two nodes *are* connected, and the `ipfs get`
command is still hanging, then something unexpected is going on, and I
@@ -84,7 +84,7 @@ knowing that it needs to, the likely culprit is a bad NAT. When node B learns
that it needs to connect to node A, it checks the DHT for addresses for node A,
and then starts trying to connect to them. We can check those addresses by
running `ipfs dht findpeer <node A peerID>` on node B. This command should
-return a list of addresses for node A. If it doesnt return any addresses, then
+return a list of addresses for node A. If it doesn't return any addresses, then
you should try running the manual providing command from the previous steps.
Example output of addresses might look something like this:
@@ -98,7 +98,7 @@ In this case, we can see a localhost (127.0.0.1) address, a LAN address (the
192.168.*.* one) and another address. If this third address matches your
external IP, then the network knows a valid external address for your node. At
this point, its safe to assume that your node has a difficult to traverse NAT
-situation. If this is the case, you can try to enable upnp or NAT-PMP on the
+situation. If this is the case, you can try to enable UPnP or NAT-PMP on the
router of node A and retry the process. Otherwise, you can try manually
connecting node A to node B.

View File

@@ -3,7 +3,7 @@
## Release Schedule
go-ipfs is on a six week release schedule. Following a release, there will be
five weeks for code of any type (features, bugfixes, etc) to be added. After
-the five weeks is up, a release canidate is tagged and only important bugfixes
+the five weeks is up, a release candidate is tagged and only important bugfixes
will be allowed up to release day.
## Release Candidate Checklist
@@ -15,8 +15,8 @@ will be allowed up to release day.
- you will have to manually adjust the gx version to 'rc'
## Pre-Release Checklist
-- [ ] before release, tag 'release canidate' for users to test against
-- if bugs are found/fixed, do another release canidate
+- [ ] before release, tag 'release candidate' for users to test against
+- if bugs are found/fixed, do another release candidate
- [ ] all tests pass (no exceptions)
- [ ] run interop tests https://github.com/ipfs/interop#test-with-a-non-yet-released-version-of-go-ipfs
- [ ] webui works (for most definitions of 'works') - Test the multiple pages and verify that no visible errors are shown.

View File

@@ -4,7 +4,7 @@
Bitswap is the data trading module for ipfs, it manages requesting and sending
blocks to and from other peers in the network. Bitswap has two main jobs, the
first is to acquire blocks requested by the client from the network. The second
-is to judiciously send blocks in its posession to other peers who want them.
+is to judiciously send blocks in its possession to other peers who want them.
Bitswap is a message based protocol, as opposed to response-reply. All messages
contain wantlists, or blocks. Upon receiving a wantlist, a node should consider
@@ -20,7 +20,7 @@ another peer has a task in the peer request queue created for it. The peer
request queue is a priority queue that sorts available tasks by some metric,
currently, that metric is very simple and aims to fairly address the tasks
of each other peer. More advanced decision logic will be implemented in the
-future. Task workers pull tasks to be done off of the queue, retreive the block
+future. Task workers pull tasks to be done off of the queue, retrieve the block
to be sent, and send it off. The number of task workers is limited by a constant
factor.
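As a toy illustration of the queue-plus-workers design this passage describes, the sketch below pops send tasks from a priority heap and hands them to a fixed pool of workers. It illustrates the concept only; it is not Bitswap's actual decision engine, and all names in it are invented:

```go
package main

import (
	"container/heap"
	"fmt"
	"sync"
)

// task is a block send destined for a peer; higher priority pops first.
type task struct {
	peer     string
	block    string
	priority int
}

// taskQueue implements heap.Interface as a max-heap on priority.
type taskQueue []task

func (q taskQueue) Len() int           { return len(q) }
func (q taskQueue) Less(i, j int) bool { return q[i].priority > q[j].priority }
func (q taskQueue) Swap(i, j int)      { q[i], q[j] = q[j], q[i] }

func (q *taskQueue) Push(x interface{}) { *q = append(*q, x.(task)) }
func (q *taskQueue) Pop() interface{} {
	old := *q
	t := old[len(old)-1]
	*q = old[:len(old)-1]
	return t
}

func main() {
	q := &taskQueue{}
	heap.Init(q)
	heap.Push(q, task{peer: "QmA", block: "block-1", priority: 1})
	heap.Push(q, task{peer: "QmB", block: "block-2", priority: 5})
	heap.Push(q, task{peer: "QmA", block: "block-3", priority: 3})

	// a constant number of task workers drain the queue
	work := make(chan task)
	var wg sync.WaitGroup
	for w := 0; w < 2; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for t := range work {
				fmt.Printf("worker %d sends %s to %s\n", id, t.block, t.peer)
			}
		}(w)
	}
	// feed tasks to the workers in priority order
	for q.Len() > 0 {
		work <- heap.Pop(q).(task)
	}
	close(work)
	wg.Wait()
}
```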

View File

@@ -295,7 +295,7 @@ func (bs *Bitswap) CancelWants(cids []*cid.Cid, ses uint64) {
bs.wm.CancelWants(context.Background(), cids, nil, ses)
}
-// HasBlock announces the existance of a block to this bitswap service. The
+// HasBlock announces the existence of a block to this bitswap service. The
// service will potentially notify its peers.
func (bs *Bitswap) HasBlock(blk blocks.Block) error {
return bs.receiveBlockFrom(blk, "")

View File

@@ -24,7 +24,7 @@ type ledger struct {
// Partner is the remote Peer.
Partner peer.ID
-// Accounting tracks bytes sent and recieved.
+// Accounting tracks bytes sent and received.
Accounting debtRatio
// lastExchange is the time of the last data exchange.

View File

@@ -219,7 +219,7 @@ func (db *DagBuilderHelper) Maxlinks() int {
return db.maxlinks
}
-// Close has the DAGServce perform a batch Commit operation.
+// Close has the DAGService perform a batch Commit operation.
// It should be called at the end of the building process to make
// sure all data is persisted.
func (db *DagBuilderHelper) Close() error {

View File

@@ -4,10 +4,10 @@
// as additional links.
//
// Each layer is a trickle sub-tree and is limited by an increasing
-// maxinum depth. Thus, the nodes first layer
+// maximum depth. Thus, the nodes first layer
// can only hold leaves (depth 1) but subsequent layers can grow deeper.
// By default, this module places 4 nodes per layer (that is, 4 subtrees
-// of the same maxinum depth before increasing it).
+// of the same maximum depth before increasing it).
//
// Trickle DAGs are very good for sequentially reading data, as the
// first data leaves are directly reachable from the root and those

View File

@@ -14,8 +14,7 @@ type RawNode struct {
blocks.Block
}
-// NewRawNode creates a RawNode using the default sha2-256 hash
-// funcition.
+// NewRawNode creates a RawNode using the default sha2-256 hash function.
func NewRawNode(data []byte) *RawNode {
h := u.Hash(data)
c := cid.NewCidV1(cid.Raw, h)

View File

@@ -315,7 +315,7 @@ func (r *PubsubResolver) handleSubscription(sub *floodsub.Subscription, name str
err = r.receive(msg, name, pubk)
if err != nil {
log.Warningf("PubsubResolve: error proessing update for %s: %s", name, err.Error())
log.Warningf("PubsubResolve: error processing update for %s: %s", name, err.Error())
}
}
}
@@ -369,7 +369,7 @@ func (r *PubsubResolver) receive(msg *floodsub.Message, name string, pubk ci.Pub
}
// rendezvous with peers in the name topic through provider records
-// Note: rendezbous/boostrap should really be handled by the pubsub implementation itself!
+// Note: rendezvous/boostrap should really be handled by the pubsub implementation itself!
func bootstrapPubsub(ctx context.Context, cr routing.ContentRouting, host p2phost.Host, name string) {
topic := "floodsub:" + name
hash := u.Hash([]byte(topic))

View File

@@ -101,7 +101,7 @@ func StringToMode(s string) (Mode, bool) {
// A Pinner provides the necessary methods to keep track of Nodes which are
// to be kept locally, according to a pin mode. In practice, a Pinner is in
// in charge of keeping the list of items from the local storage that should
-// not be garbaged-collected.
+// not be garbage-collected.
type Pinner interface {
// IsPinned returns whether or not the given cid is pinned
// and an explanation of why its pinned

View File

@@ -188,7 +188,7 @@ func writeHdr(n *merkledag.ProtoNode, hdr *pb.Set) error {
return err
}
-// make enough space for the length prefix and the marshalled header data
+// make enough space for the length prefix and the marshaled header data
data := make([]byte, binary.MaxVarintLen64, binary.MaxVarintLen64+len(hdrData))
// write the uvarint length of the header data

View File

@@ -3,7 +3,7 @@ package plugin
// Plugin is base interface for all kinds of go-ipfs plugins
// It will be included in interfaces of different Plugins
type Plugin interface {
-// Name should return uniqe name of the plugin
+// Name should return unique name of the plugin
Name() string
// Version returns current version of the plugin
Version() string

View File

@@ -476,7 +476,7 @@ func (r *FSRepo) Config() (*config.Config, error) {
// It is not necessary to hold the package lock since the repo is in an
// opened state. The package lock is _not_ meant to ensure that the repo is
-// thread-safe. The package lock is only meant to guard againt removal and
+// thread-safe. The package lock is only meant to guard against removal and
// coordinate the lockfile. However, we provide thread-safety to keep
// things simple.
packageLock.Lock()

View File

@@ -37,7 +37,7 @@ func TestVersion(t *testing.T) {
assert.Nil(rp.WriteVersion(fsrepoV), t, "Trouble writing version")
assert.Nil(rp.CheckVersion(fsrepoV), t, "Trouble checking the verion")
assert.Nil(rp.CheckVersion(fsrepoV), t, "Trouble checking the version")
assert.Err(rp.CheckVersion(1), t, "Should throw an error for the wrong version.")
}

View File

@@ -259,7 +259,7 @@ func osWithVariant() (string, error) {
// - on standard ubuntu: stdout
// - on alpine: stderr (it probably doesn't know the --version flag)
//
-// we supress non-zero exit codes (see last point about alpine).
+// we suppress non-zero exit codes (see last point about alpine).
out, err := exec.Command("sh", "-c", "ldd --version || true").CombinedOutput()
if err != nil {
return "", err

View File

@@ -80,7 +80,7 @@ test_expect_success ".ipfs/ has been created" '
The `|| ...` is a diagnostic run when the preceding command fails.
test_fsh is a shell function that echoes the args, runs the cmd,
-and then also fails, making sure the test case fails. (wouldnt want
+and then also fails, making sure the test case fails. (wouldn't want
the diagnostic accidentally returning true and making it _seem_ like
the test case succeeded!).

View File

@@ -14,12 +14,12 @@ func Writable(path string) error {
if err := os.MkdirAll(path, os.ModePerm); err != nil {
return err
}
-// Check the directory is writeable
-if f, err := os.Create(filepath.Join(path, "._check_writeable")); err == nil {
+// Check the directory is writable
+if f, err := os.Create(filepath.Join(path, "._check_writable")); err == nil {
f.Close()
os.Remove(f.Name())
} else {
return errors.New("'" + path + "' is not writeable")
return errors.New("'" + path + "' is not writable")
}
return nil
}

View File

@@ -52,7 +52,7 @@ type DagModifier struct {
}
// NewDagModifier returns a new DagModifier, the Cid prefix for newly
-// created nodes will be inherted from the passed in node. If the Cid
+// created nodes will be inhered from the passed in node. If the Cid
// version if not 0 raw leaves will also be enabled. The Prefix and
// RawLeaves options can be overridden by changing them after the call.
func NewDagModifier(ctx context.Context, from ipld.Node, serv ipld.DAGService, spl chunker.SplitterGen) (*DagModifier, error) {
@@ -82,7 +82,7 @@ func NewDagModifier(ctx context.Context, from ipld.Node, serv ipld.DAGService, s
// WriteAt will modify a dag file in place
func (dm *DagModifier) WriteAt(b []byte, offset int64) (int, error) {
-// TODO: this is currently VERY inneficient
+// TODO: this is currently VERY inefficient
// each write that happens at an offset other than the current one causes a
// flush to disk, and dag rewrite
if offset == int64(dm.writeStart) && dm.wrBuf != nil {