IPNSHostnameOption() rewrites the URL path only on the way in,
but not on the way out. This commit completes it by also handling
the response side (a small sketch follows below):
- rewriting the heading, file links, and back links in directory listings
- rewriting the redirect from /foo to /foo/ when there's an index.html link
- omitting the Suborigin header
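A minimal sketch of the response-side rewrite, assuming the handler knows the /ipns/<fqdn> prefix it stripped on the way in (the helper below is hypothetical, not the actual gateway code):

```go
package main

import (
	"fmt"
	"strings"
)

// rewriteHref strips the internal /ipns/<fqdn> prefix from a link so that
// HTML generated by the gateway (directory listings, back links, the
// /foo -> /foo/ redirect target) does not leak it back to the browser.
func rewriteHref(href, ipnsPrefix string) string {
	if strings.HasPrefix(href, ipnsPrefix) {
		if out := strings.TrimPrefix(href, ipnsPrefix); out != "" {
			return out
		}
		return "/"
	}
	return href
}

func main() {
	prefix := "/ipns/example.com"
	fmt.Println(rewriteHref("/ipns/example.com/blog/post/", prefix)) // /blog/post/
	fmt.Println(rewriteHref("/ipns/example.com", prefix))            // /
}
```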
License: MIT
Signed-off-by: Lars Gierth <larsg@systemli.org>
Also, if `ipfs config foo.bar` has a value that is not a map (e.g. 0, "0", 0.1),
then `ipfs config foo.bar.baz` now returns an error instead of panicking.
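A rough sketch of the traversal that now has to return an error instead of panicking (getKey is a made-up helper, not the actual config code):

```go
package main

import (
	"fmt"
	"strings"
)

// getKey walks a dotted key like "foo.bar.baz" through a decoded JSON
// config. If an intermediate value (e.g. foo.bar) is not a map, it returns
// an error instead of blindly type-asserting and panicking.
func getKey(cfg map[string]interface{}, key string) (interface{}, error) {
	parts := strings.Split(key, ".")
	var cur interface{} = cfg
	for i, p := range parts {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return nil, fmt.Errorf("%s is not a map", strings.Join(parts[:i], "."))
		}
		v, found := m[p]
		if !found {
			return nil, fmt.Errorf("key %s not found", strings.Join(parts[:i+1], "."))
		}
		cur = v
	}
	return cur, nil
}

func main() {
	cfg := map[string]interface{}{"foo": map[string]interface{}{"bar": 0.1}}
	if _, err := getKey(cfg, "foo.bar.baz"); err != nil {
		fmt.Println(err) // foo.bar is not a map
	}
}
```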
License: MIT
Signed-off-by: rht <rhtbot@gmail.com>
This changes the pin behavior. Add now uses the filenames given through
the API, and allows files to be streamed flatly (not as a hierarchy),
which is easier for other consumers (like vinyl in node-ipfs-api land).
Files can also arrive entirely out of order, and the garbage intermediate
directories will not be pinned (they are gc-ed later).
The changes also mean the output of add has changed slightly-- it
no longer shows the local path added, but rather the dag path
relative to the added roots. This is a small difference, but it
changes the tests.
The dagutils.Editor creates a lot of chaff (intermediate objects)
along the way. I wonder how we might minimize the writes to the
datastore...
This commit also removes the "NilRepo()" part of the --only-hash
mode. We need to store blocks in at least an in-memory repo/datastore,
because otherwise the dagutils.Editor breaks.
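To illustrate the flat, out-of-order streaming independent of the dagutils.Editor API (which is not shown here), a self-contained sketch that inserts files by their dag path and creates intermediate directories on demand:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// node is a stand-in for a unixfs directory or file node.
type node struct {
	children map[string]*node
	file     bool
}

func newDir() *node { return &node{children: map[string]*node{}} }

// insert places a file at a slash-separated path, creating any missing
// intermediate directories along the way.
func insert(root *node, path string) {
	parts := strings.Split(path, "/")
	cur := root
	for _, p := range parts[:len(parts)-1] {
		next, ok := cur.children[p]
		if !ok {
			next = newDir()
			cur.children[p] = next
		}
		cur = next
	}
	cur.children[parts[len(parts)-1]] = &node{file: true}
}

// dump prints the resulting hierarchy.
func dump(n *node, prefix string) {
	names := make([]string, 0, len(n.children))
	for name := range n.children {
		names = append(names, name)
	}
	sort.Strings(names)
	for _, name := range names {
		fmt.Println(prefix + name)
		dump(n.children[name], prefix+name+"/")
	}
}

func main() {
	root := newDir()
	// deliberately streamed flat and out of order
	insert(root, "a/b/two.txt")
	insert(root, "a/one.txt")
	insert(root, "a/b/c/three.txt")
	dump(root, "/")
}
```

Since each path carries its full directory prefix, insertion order does not matter, and only the finished roots need to be pinned.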
License: MIT
Signed-off-by: Juan Batiz-Benet <juan@benet.ai>
License: MIT
Signed-off-by: Jeromy <jeromyj@gmail.com>
implement rabin fingerprinting as a chunker for ipfs
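A sketch of content-defined chunking with a Rabin-Karp style rolling hash; the window size, multiplier, mask, and size limits below are illustrative choices, not the parameters of the actual chunker:

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
	"math/rand"
)

const (
	window = 16                // rolling-hash window size
	base   = uint64(4194301)   // hash multiplier
	mask   = uint64(1<<13 - 1) // ~8 KiB average chunk size
	minSz  = 2048
	maxSz  = 32768
)

// pow is base^(window-1), used to remove the outgoing byte from the hash.
var pow = func() uint64 {
	p := uint64(1)
	for i := 0; i < window-1; i++ {
		p *= base
	}
	return p
}()

// split emits content-defined chunks: a boundary is declared whenever the
// low bits of the rolling hash over the last `window` bytes are all zero,
// subject to minimum and maximum chunk sizes.
func split(r io.Reader) ([][]byte, error) {
	br := bufio.NewReader(r)
	var (
		chunks [][]byte
		buf    []byte
		win    [window]byte
		h      uint64
	)
	for {
		b, err := br.ReadByte()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		buf = append(buf, b)

		// slide the window: drop the oldest byte, mix in the new one
		idx := (len(buf) - 1) % window
		h -= uint64(win[idx]) * pow
		h = h*base + uint64(b)
		win[idx] = b

		if (len(buf) >= minSz && h&mask == 0) || len(buf) >= maxSz {
			chunks = append(chunks, buf)
			buf, h, win = nil, 0, [window]byte{}
		}
	}
	if len(buf) > 0 {
		chunks = append(chunks, buf)
	}
	return chunks, nil
}

func main() {
	data := make([]byte, 1<<18)
	rand.New(rand.NewSource(1)).Read(data)
	chunks, _ := split(bytes.NewReader(data))
	fmt.Println("chunks:", len(chunks))
}
```

Because cut points depend only on the data in the window, they stay stable when bytes are inserted or removed elsewhere in the file, which is the point of chunking this way.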
License: MIT
Signed-off-by: Jeromy <jeromyj@gmail.com>
vendor correctly
License: MIT
Signed-off-by: Jeromy <jeromyj@gmail.com>
refactor chunking interface a little
License: MIT
Signed-off-by: Jeromy <jeromyj@gmail.com>
propagate chunking interface changes up into the importer
License: MIT
Signed-off-by: Jeromy <jeromyj@gmail.com>
move chunker type parsing into its own file in chunk
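For illustration, a parser along these lines (the accepted spec strings and the defaults are simplified here, not an exact mirror of what the chunk package accepts):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// ChunkerSpec is a parsed chunker selection.
type ChunkerSpec struct {
	Kind string // "size" or "rabin"
	Size int    // fixed chunk size, or target block size for rabin
}

// ParseChunker turns a command-line style chunker string, e.g. "size-262144"
// or "rabin-16384", into a ChunkerSpec. An empty string selects a default.
func ParseChunker(s string) (ChunkerSpec, error) {
	if s == "" {
		return ChunkerSpec{Kind: "size", Size: 262144}, nil
	}
	switch {
	case s == "rabin":
		return ChunkerSpec{Kind: "rabin", Size: 262144}, nil
	case strings.HasPrefix(s, "rabin-"), strings.HasPrefix(s, "size-"):
		parts := strings.SplitN(s, "-", 2)
		n, err := strconv.Atoi(parts[1])
		if err != nil || n <= 0 {
			return ChunkerSpec{}, fmt.Errorf("invalid chunker size in %q", s)
		}
		return ChunkerSpec{Kind: parts[0], Size: n}, nil
	default:
		return ChunkerSpec{}, fmt.Errorf("unknown chunker: %q", s)
	}
}

func main() {
	for _, s := range []string{"", "size-1024", "rabin", "bogus"} {
		spec, err := ParseChunker(s)
		fmt.Printf("%q -> %+v err=%v\n", s, spec, err)
	}
}
```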
License: MIT
Signed-off-by: Jeromy <jeromyj@gmail.com>
Up until now there has been a very annoying bug with get: we would
see halting behavior. I'm not 100% sure this commit fixes it,
but it should. It certainly fixes other bugs found in the process of
digging into the get / tar extractor code. (I wish we could repro
the bug reliably enough to make a test case.)
This is a much cleaner tar writer. The ad-hoc, error-prone sync
for the tar reader is gone (which I believe was incorrect). It is
replaced with a simple pipe and bufio. The tar logic now lives in
tar.Writer, which writes unixfs dag nodes into a tar archive (no
need for sync here), and get's reader is constructed with DagArchive,
which sets up the pipe + bufio.
NOTE: this commit also changes the behavior of `get`:
when retrieving a single file, get now fails if the file already
exists locally. This emulates the default behavior of wget, which
(without opts) does not overwrite a file that is already there.
This seems more intuitive to me as expected from a unix tool,
though perhaps it should be discussed more before adopting.
Everything seems to work fine, and I have not been able to reproduce
the get halt bug.
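The pipe + bufio arrangement, reduced to its essentials and writing plain in-memory files instead of unixfs dag nodes (archiveReader is a made-up stand-in for DagArchive):

```go
package main

import (
	"archive/tar"
	"bufio"
	"fmt"
	"io"
)

// archiveReader returns a reader from which a tar archive can be streamed.
// A goroutine writes entries into one end of a pipe; the caller reads the
// archive from the other end. No extra synchronization is needed: the pipe
// blocks the writer until the reader catches up, and closing the write end
// (with any error) terminates the reader.
func archiveReader(files map[string][]byte) io.Reader {
	pr, pw := io.Pipe()
	bufw := bufio.NewWriter(pw)

	go func() {
		tw := tar.NewWriter(bufw)
		var err error
		defer func() {
			if err == nil {
				err = tw.Close()
			}
			if err == nil {
				err = bufw.Flush()
			}
			pw.CloseWithError(err)
		}()
		for name, data := range files {
			hdr := &tar.Header{Name: name, Mode: 0644, Size: int64(len(data))}
			if err = tw.WriteHeader(hdr); err != nil {
				return
			}
			if _, err = tw.Write(data); err != nil {
				return
			}
		}
	}()
	return pr
}

func main() {
	r := archiveReader(map[string][]byte{"hello.txt": []byte("hello world\n")})
	tr := tar.NewReader(r)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Println("entry:", hdr.Name)
		io.Copy(io.Discard, tr)
	}
}
```

The write end is closed with whatever error the writer hit, so the reader either sees a complete archive followed by EOF or the error itself; there is no shared state to synchronize.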
License: MIT
Signed-off-by: Juan Batiz-Benet <juan@benet.ai>