Instead of assuming the command is the daemon command and closing
the node, which resulted in bugs like #1053, we cancel the context
and let the context children detect the cancellation and gracefully
clean up after themselves.
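A minimal sketch of the pattern with only the standard library; the
child's cleanup here is illustrative:

package main

import (
	"context"
	"fmt"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())

	done := make(chan struct{})
	go func() {
		defer close(done)
		<-ctx.Done()                     // the child detects cancellation...
		fmt.Println("child: cleaned up") // ...and tears down its own state
	}()

	cancel() // the interrupt path cancels the context instead of closing the node
	<-done
}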
The shutdown logging has been moved into the daemon command, where
it makes more sense, so that commands like ping will not print out
the same output on cancellation.
When the response includes the X-Chunked-Output header, we treat that
as channel output, and fire up a goroutine to decode the chunks. This
routine needs to watch for context cancellation so that it can exit
cleanly.
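A sketch of that routine, with readChunk as a hypothetical stand-in for
the real chunk decoding:

package main

import (
	"context"
	"io"
	"strings"
)

// decodeChunks decodes chunks off the response body and forwards them on
// a channel, but bails out promptly when the context is cancelled.
func decodeChunks(ctx context.Context, body io.ReadCloser, out chan<- []byte) {
	defer body.Close()
	defer close(out)
	for {
		chunk, err := readChunk(body)
		if err != nil {
			return // includes io.EOF at end of stream
		}
		select {
		case out <- chunk:
		case <-ctx.Done():
			return // exit cleanly on cancellation
		}
	}
}

func readChunk(r io.Reader) ([]byte, error) {
	buf := make([]byte, 4096)
	n, err := r.Read(buf)
	if n > 0 {
		return buf[:n], nil
	}
	return nil, err
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	out := make(chan []byte)
	go decodeChunks(ctx, io.NopCloser(strings.NewReader("chunk")), out)
	for range out {
	}
}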
The context may be cancelled while a request is in flight. We need to
handle this and cancel the request. The code is based on the ideas
from https://blog.golang.org/context
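Today's standard library folds that pattern into net/http itself; a
sketch, with an illustrative URL:

package main

import (
	"context"
	"net/http"
	"time"
)

// fetch attaches the context to the request, so the transport aborts the
// request if the context is cancelled while it is in flight.
func fetch(ctx context.Context, url string) (*http.Response, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	return http.DefaultClient.Do(req) // fails with ctx.Err() on cancellation
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	if resp, err := fetch(ctx, "http://127.0.0.1:5001/api/v0/version"); err == nil {
		resp.Body.Close()
	}
}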
If a command invocation such as 'daemon' is interrupted, the interrupt
handler asks the node to close. The closing of the node will result in
the command invocation finishing, and possibly returning from main()
before the interrupt handler is done. In particular, the info logging
that a graceful shutdown was completed may never reach stdout.
As the whole point of logging "Gracefully shut down." is to give
confidence when debugging that the shutdown was clean, this is
slightly unfortunate.
The interrupt handler needs to be set up in main() instead of Run(),
so that we can defer the closing of the interrupt handler until just
before returning from main, not when Run() returns with a streaming
result reader.
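A sketch of the ordering constraint; setupInterruptHandler and Run are
illustrative stand-ins for the real wiring:

package main

import (
	"io"
	"os"
	"os/signal"
	"strings"
)

type interruptHandler struct{ ch chan os.Signal }

func setupInterruptHandler() *interruptHandler {
	h := &interruptHandler{ch: make(chan os.Signal, 1)}
	signal.Notify(h.ch, os.Interrupt)
	go func() {
		for range h.ch {
			// ask whatever is running to stop (e.g. cancel a context)
		}
	}()
	return h
}

func (h *interruptHandler) Close() { signal.Stop(h.ch); close(h.ch) }

// Run stands in for the command dispatcher; it may hand back a reader
// that streams long after it returns.
func Run() io.Reader { return strings.NewReader("streaming output\n") }

func main() {
	intrh := setupInterruptHandler()
	defer intrh.Close() // closed just before main() returns, not when Run() does

	io.Copy(os.Stdout, Run())
}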
Once the server is asked to shut down, we stop accepting new
connections, but the 'manners' graceful shutdown will wait for
all existing connections to close before finishing.
For keep-alive connections this will never happen unless the
client detects that the server is shutting down through the
ipfs API itself, and closes the connection in response.
This is a problem e.g. with the webui's connections visualization,
which polls the swarm/peers endpoint once a second, and never
detects that the API server was shut down.
We can mitigate this by telling the server to disable keep-alive,
which will add a 'Connection: close' header to the next HTTP
response on the connection. A well-behaved client should then
respond by closing the connection.
Unfortunately this doesn't happen immediately in all cases,
presumably depending on the keep-alive timeout of the browser
that set up the connection, but it's at least a step in the
right direction.
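The standard library exposes the same knob directly on http.Server; a
sketch with an illustrative address:

package main

import (
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{Addr: "127.0.0.1:5001", Handler: http.DefaultServeMux}
	go srv.ListenAndServe()
	time.Sleep(100 * time.Millisecond) // stand-in for the server's normal lifetime

	// On shutdown: the next response on each open connection now carries
	// "Connection: close", nudging well-behaved clients to drop it.
	srv.SetKeepAlivesEnabled(false)
}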
When closing a node, the node itself only takes care of tearing down
its own children. As corehttp sets up a server based on a node, it
needs to also ensure that the server is accounted for when determining
if the node has been fully closed.
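A minimal sketch of that accounting using a WaitGroup; the Node type
here is illustrative, not the real node:

package main

import (
	"net"
	"net/http"
	"sync"
)

type Node struct{ children sync.WaitGroup }

// Close returns only once every registered child, including the HTTP
// server, has finished shutting down.
func (n *Node) Close() { n.children.Wait() }

func serveHTTP(n *Node, ln net.Listener, h http.Handler) {
	n.children.Add(1)
	go func() {
		defer n.children.Done() // the server counts toward "fully closed"
		http.Serve(ln, h)
	}()
}

func main() {
	ln, _ := net.Listen("tcp", "127.0.0.1:0")
	n := &Node{}
	serveHTTP(n, ln, http.NotFoundHandler())
	ln.Close() // makes http.Serve return, so Close can complete
	n.Close()
}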
The server may stay alive for quite a while due to waiting on
open connections to close before shutting down. We should
find ways to terminate these connections in a more controlled
manner, but in the meantime it's helpful to be able to see
why a shutdown of the ipfs daemon is taking so long.
This reverts commit f74e71f96545b14f5293c9851ce624aecddb68aa.
The 'Online' flag of the command context does not seem to be set in
any code paths, at least not when running commands such as 'ipfs daemon'
or 'ipfs ping'. The result after f74e71f9 is that we never shut down
cleanly, as we'll always os.Exit(0) from the interrupt handler.
The os.Exit(0) itself is also dubious, as conceptually the interrupt
handler should ask whatever is stalling to stop stalling, so that
main() can return like normal. Exiting with -1 in error cases where
the interrupt handler is unable to stop the stall is fine, but the
normal case of interrupting cleanly should exit through main().
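A sketch of that shape: the first interrupt cancels and lets main()
unwind, and only a second interrupt forces an error exit:

package main

import (
	"context"
	"os"
	"os/signal"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())

	go func() {
		sigs := make(chan os.Signal, 1)
		signal.Notify(sigs, os.Interrupt)
		<-sigs
		cancel() // first interrupt: ask the stall to stop, exit via main()
		<-sigs
		os.Exit(-1) // second interrupt: we could not stop cleanly
	}()

	<-ctx.Done() // stand-in for the command's normal work
}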
This changes .go-ipfs to .ipfs everywhere.
Along the way, this defines a DefaultPathName const
for this name.
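The constant is a one-liner; the package name below is illustrative:

package config

// DefaultPathName is the default repo directory name under $HOME,
// formerly ".go-ipfs".
const DefaultPathName = ".ipfs"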
License: MIT
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
WARNING: No migration performed! That needs to come in a separate
commit, perhaps amended into this one.
Migration must move keyspace "/b" from leveldb to the flatfs subdir,
while removing the "b" prefix (keys should start with just "/").
We now consider debugerrors harmful: we've run into cases where
debugerror.Wrap() hid valuable error information (err == io.EOF?).
I've removed them from the main code, but left them in some tests.
Go errors are lacking, but unfortunately, this isn't the solution.
It is possible that debugerror.New or debugerror.Errorf should
remain (i.e. only remove debugerror.Wrap), but we don't use
these errors often enough to keep them.
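A small illustration of the io.EOF hazard, with wrap standing in for
debugerror.Wrap:

package main

import (
	"fmt"
	"io"
)

// wrap returns a new error value, as debugerror.Wrap did.
func wrap(err error) error { return fmt.Errorf("wrapped: %v", err) }

func main() {
	err := wrap(io.EOF)
	fmt.Println(err == io.EOF) // false: the sentinel check is silently broken
}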
This commit adds a new set of sharness tests for pinning, and addresses
bugs that were pointed out by said tests.
test/sharness: added more pinning tests
Pinning is currently broken. See issue #1051. This commit introduces
a few more pinning tests. These are by no means exhaustive, but
definitely surface the problems currently present. I believe these
tests are correct, but am not certain. Pushing them as failing so that
pinning is fixed in this PR.
make pinning and merkledag.Get take contexts
improve 'add' commands usage of pinning
FIXUP: fix 'pin lists look good'
ipfs-pin-stat: simple script to help check pinning
This is a simple shell script to help check pinning.
We ought to strive towards making adding commands this easy.
The http api is great and powerful, but our setup right now
gets in the way. Perhaps we can clean up that area.
updated t0081-repo-pinning
- fixed a couple bugs with the tests
- made it a bit clearer (still a lot going on)
- the remaining tests are correct and highlight a problem with
pinning. Namely, that recursive pinning is buggy. At least:
towards the end of the test, $HASH_DIR4 and $HASH_FILE4 should
be pinned indirectly, but they're not. And thus get gc-ed out.
There may be other problems too.
cc @whyrusleeping
fix grep params for context deadline check
fix bugs in pin and pin tests
check for block local before checking recursive pin
// AddRawLink adds a link to this node.
AddRawLink(name string, lnk *Link) error

// GetNodeLink returns a copy of the link with the given name.
GetNodeLink(name string) (*Link, error)