Contribute to Open Source. Search issue labels to find the right project for you!

Big is not a function error

  • Version: ipfs 0.28.2
  • Platform: Chrome (webpack build), on Linux 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux and on OSX
  • Subsystem: ipfs-bitswap

    Type: Bug

    Severity: Critical


    After clearing out my node_modules to do a fresh install, I’m seeing a crash:

        Uncaught TypeError: Big is not a function
            at Stats.initialCounters.forEach (VM3189 stat.js:22)
            at Array.forEach (<anonymous>)
            at new Stats (VM3189 stat.js:21)
            at new Stats (VM3188 index.js:32)
            at new Bitswap (VM3157 index.js:48)
            at series (VM3156 start.js:41)
            at eval (VM2557 parallel.js:39)
            at eval (VM2555 once.js:12)
            at replenish (VM2569 eachOfLimit.js:61)
            at iterateeCallback (VM2569 eachOfLimit.js:50)

    Steps to reproduce the error:

    The rough build process was:

        rm package-lock.json
        rm -rf node_modules
        npm install
        webpack --mode development   # on something that does a require('ipfs')

    It’s happening with my repo dweb-transports, and I don’t currently have a workaround since this is deep in the IPFS code.

Updated 24/05/2018 20:38 6 Comments

Implement `defaultHashAlg` property


After the merge, all formats need to implement a `defaultHashAlg` property (see the spec for more information).

Instead of opening an issue on every repository, just let people know on this issue that you’re working on it and then link to the PR.

  • [ ]
  • [ ]
  • [ ]
  • [ ]
  • [ ]
  • [ ]
  • [ ]
Updated 07/05/2018 10:15

Remove `values` option from `tree()` implementations


With the merge there is no longer a `values` option for `tree()`. Hence it should be removed from all format implementations to avoid confusion. A proper replacement for that functionality is passing the root `/` into `resolve()`, which is tracked in a separate issue.

I suggest waiting for that to be finished before working on this issue.

Instead of opening an issue on every repository, just let people know on this issue that you’re working on it and then link to the PR.

  • [ ]
  • [ ]
  • [ ]
  • [ ]
  • [ ]
  • [ ]
  • [ ]
Updated 14/05/2018 13:04 1 Comment

Replace navbar's CSS background with an actual SVG


The “Next” and “Previous” arrows are styled using a CSS SVG background. They could be rewritten as native SVGs rendered as children with fill: currentColor. That would allow theming the date picker with:

  color : var(--black-color);

  color : var(--accent-color-light);

And new properties could be added, like navIconPrev and navIconNext.
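As an illustration of the proposed approach (the function name and path data below are hypothetical, not react-day-picker’s actual markup), an arrow rendered as an inline SVG inherits the surrounding text color:

```javascript
// Hypothetical sketch: an inline SVG arrow whose stroke inherits the
// CSS `color` of its parent via currentColor, instead of relying on a
// CSS background-image. Path data and sizes are illustrative.
function navArrowSvg(direction) {
  const path = direction === 'prev' ? 'M10 2 L4 8 L10 14' : 'M6 2 L12 8 L6 14';
  return (
    '<svg width="16" height="16" viewBox="0 0 16 16" aria-hidden="true">' +
    '<path d="' + path + '" fill="none" stroke="currentColor"/>' +
    '</svg>'
  );
}
```

Because the arrow uses currentColor, setting color: var(--accent-color-light) on the navbar restyles both icons with no extra CSS.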

Updated 06/05/2018 14:20 6 Comments

ipfs.dht.findprovs is passing cb in timeout position to libp2p findProviders



  • Version: 0.28.2
  • Platform: all
  • Subsystem: dht


Type: Bug


Severity: High


ipfs.dht.findprovs is passing cb in timeout position to libp2p findProviders, and so the call fails to return anything.

In the dht component in js-ipfs

and the js-libp2p findProviders impl

Steps to reproduce the error:

In a browser with ipfs-companion installed and pointing at an embedded ipfs node:

    ipfs.dht.findprovs('QmYPNmahJAvkMTU6tDx5zvhEkoLzEFeTDz6azDCSNqzKkW', console.log)
    // Error: callback is not a function at Object.findProviders

On the CLI:

    jsipfs findprovs QmYPNmahJAvkMTU6tDx5zvhEkoLzEFeTDz6azDCSNqzKkW
    # returns nothing, but exits cleanly
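A hedged sketch of the likely shape of the fix (the function body and option handling are illustrative, not the actual js-ipfs code): treat a function in the options position as the callback, so the callback is never forwarded in the timeout/options slot:

```javascript
// Illustrative sketch only: shuffle arguments so that an omitted
// options object does not push the callback into the options slot.
function findprovs(key, opts, callback) {
  if (typeof opts === 'function') {
    callback = opts; // caller passed (key, callback)
    opts = {};
  }
  // the real implementation would now call
  // dht.findProviders(key, opts, callback)
  return { key, opts, callback };
}
```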
Updated 18/05/2018 15:56 3 Comments

Separate PlayerId and PlayerNum


Currently we are using the same number for identifying players and numbering them. For now this works out nicely, but with the player number rotations we do assume that player numbers are consecutive and start at 0. This is the case and will probably stay the case, but it is an implementation detail rather than desired behaviour. That is, we probably want to do away with PlayerIds as numbers and make them opaque tokens. The current role of PlayerId in the planetwars rules would then be taken by a transparent PlayerNum struct, which is analogous to the PlayerId we have now.

So, PlayerId is for identifying players when communicating with the framework, PlayerNum is something internal to planetwars. Are these names confusing or is this fine?

Updated 16/04/2018 10:37

Improve `fetch_remote_file()` to support post requests with other types of request body


The fetch_remote_file() function will currently accept a second parameter which is an array of post data. This data is converted into the form request format (key=val), though there are several web services that expect POST requests with JSON bodies and such.

It would be good to extend this function to support other types of request. There are a few enhancements that would be very nice to include:

  • If the post data parameter is a string, pass that as is as the post body.
  • Support other HTTP request methods such as PUT, PATCH, DELETE, etc.
  • Support setting the request content type.

If there are any other enhancements that could be made, please list them here and I’ll see what I can do.
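To make the proposal concrete, here is a hedged JavaScript illustration of the intended semantics (fetch_remote_file() itself lives in another codebase; buildRequest and its option names are invented for this sketch): a string body passes through untouched, while an object of post data keeps the current key=val form encoding, and the method and content type become configurable:

```javascript
// Illustrative sketch of the proposed request handling, not the actual
// fetch_remote_file() implementation.
function buildRequest(url, body, opts = {}) {
  const method = opts.method || 'POST'; // also PUT, PATCH, DELETE, ...
  const headers = {};
  let payload;
  if (typeof body === 'string') {
    // a string body is passed as-is, e.g. a pre-serialized JSON document
    payload = body;
    headers['Content-Type'] = opts.contentType || 'application/json';
  } else {
    // current behaviour: form-encode key/value post data
    payload = Object.entries(body)
      .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
      .join('&');
    headers['Content-Type'] = 'application/x-www-form-urlencoded';
  }
  return { url, method, headers, body: payload };
}
```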

Updated 05/05/2018 12:52 1 Comments

Arrays in HTTP API responses are not sorted


Somewhat annoying but minor issue.

When making API requests to ipfs-cluster, it seems that the response sometimes changes the order of arrays. So the first request returns {"peers": ["A", "B"]} and the second one could return {"peers": ["B", "A"]} at random. This is a bit annoying since it makes responses harder to cache: the ordering inside the arrays matters.

Sorting the arrays before letting the HTTP endpoint respond would solve this issue.

The endpoints I’ve found so far that don’t sort before responding:

    Endpoint        Attribute(s) not sorted
    /id             addresses, cluster_peers_addresses and ipfs.addresses
    /allocations    response is an unsorted array
    /peers          each peer carries the same response as /id, so needs sorting on the same attributes

I’m sure I missed others, but I’ve not used all API endpoints.
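The proposed fix can be sketched in a few lines (ipfs-cluster is written in Go; this JavaScript only illustrates the idea of normalizing every array before responding, so repeated requests are byte-identical and therefore cacheable):

```javascript
// Illustrative sketch: recursively sort every array in a response
// object before serializing it.
function normalizeResponse(resp) {
  if (Array.isArray(resp)) {
    return resp.slice().sort().map(normalizeResponse);
  }
  if (resp && typeof resp === 'object') {
    const out = {};
    for (const k of Object.keys(resp)) out[k] = normalizeResponse(resp[k]);
    return out;
  }
  return resp;
}
```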

Updated 05/03/2018 16:35 1 Comment

Throws error: multihash length inconsistent

  • Version: 0.28.0
  • Platform: Linux 4.13.0-36-generic #40-Ubuntu SMP Fri Feb 16 20:07:48 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Subsystem: Node.js v6.11.5

Type: Bug

Severity: Medium


Not sure if I’m using the API correctly, but this is what I do:

  1. Create a buffer from a string’s content.
  2. Add a file using the buffer to an IPFS node.
  3. Create a CID from the buffer content (with version == 1).
  4. Fetch the content using the API.

It throws the following exception:

Error: multihash length inconsistent: 0x122017babbbb5ec8d6ec709a5fd1b559d0126cf36486145f84f9c39260ae0d87ab7e
    at Object.decode (/home/harry/js-ipfs/node_modules/multihashes/src/index.js:99:11)
    at Object.validate (/home/harry/js-ipfs/node_modules/multihashes/src/index.js:210:11)
    at Function.validateCID (/home/harry/js-ipfs/node_modules/cids/src/index.js:254:8)
    at new CID (/home/harry/js-ipfs/node_modules/cids/src/index.js:104:9)
    at pathBaseAndRest (/home/harry/js-ipfs/node_modules/ipfs-unixfs-engine/src/exporter/index.js:30:15)
    at module.exports.err (/home/harry/js-ipfs/node_modules/ipfs-unixfs-engine/src/exporter/index.js:47:13)
    at _catPullStream (/home/harry/js-ipfs/src/core/components/files.js:142:7)
    at (/home/harry/js-ipfs/src/core/components/files.js:234:9)
    at (/home/harry/js-ipfs/node_modules/promisify-es6/index.js:32:27)
    at Timeout.setTimeout (/home/harry/js-ipfs/test/core/files-cat.spec.js:82:18)
    at ontimeout (timers.js:386:11)
    at tryOnTimeout (timers.js:250:5)
    at Timer.listOnTimeout (timers.js:214:5)

Steps to reproduce the error:

Code snippets:

    let buffer = Buffer.from(content)
    let mh = multihashing(buffer, 'sha2-256')
    let cid = new CID(1, 'dag-cbor', mh)
    ipfs.files.add(buffer, {}, (err, filesAdded) => {
      path = filesAdded[0].path
      console.log('Created: path = ' + path)
    })
    // ..., (err, data) => {
Updated 16/04/2018 16:06 6 Comments

multicodec needs to be validated


Accessing the cid.buffer property throws the following exception when version==1:

 Uncaught TypeError: First argument must be a string, Buffer, ArrayBuffer, Array, or array-like object.
  at fromObject (buffer.js:262:9)
  at Function.Buffer.from (buffer.js:101:10)
  at CID.get buffer [as buffer] (/home/harry/.../js-ipfs/node_modules/cids/src/index.js:122:18)

How to reproduce

  • Create a CID object with version==1.
  • Call cid.buffer.
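What “validated” could mean in practice, as a hedged sketch (the codec table below is a tiny illustrative subset, not the real multicodec table): reject unknown codec names up front, so the failure is a clear error instead of a TypeError deep inside Buffer.from():

```javascript
// Illustrative sketch only: validate the multicodec name before it is
// used to build the CID's buffer.
const KNOWN_CODECS = new Set(['dag-pb', 'dag-cbor', 'raw']); // tiny subset

function assertValidCodec(codec) {
  if (!KNOWN_CODECS.has(codec)) {
    throw new Error(`Unknown multicodec: ${codec}`);
  }
}
```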
Updated 19/03/2018 17:55 3 Comments

Remove default exports


Default exports were added to the spec to make for simple interop with CommonJS and similar module systems. But for now they introduce even more mess, especially with CommonJS. So I suggest just removing them everywhere.

Without default exports the same thing can be achieved easily with:

    import { DayPicker, DateUtils, LocaleUtils, ModifiersUtils } from 'react-day-picker';

or:

    const { DayPicker, DateUtils, LocaleUtils, ModifiersUtils } = require('react-day-picker');

Updated 07/05/2018 10:19 6 Comments

ENopen -> ENsaveinpfile reverses per-junction categories listed in [DEMANDS]


Load a model file (which has at least one junction with two demand categories). Save it using ENsaveinpfile. The demand categories for multi-category junctions will be reversed in order, compared to the original file. Repeated parsing/saving will reverse the order again.

This appears to be caused by the difference between how the categories are parsed (Demand linked-list is prepended with successive categories) and how they are written back to the inp file (list is simply traversed in sequence).

Why is this a problem? Because like it or not, the inp file format is eminently hackable - and in so hacking, it is useful to have a method of “normalizing” the format for (i.e.) text-based version control. One would expect that an arbitrary number of open/save operations would be essentially a no-op.

Possible fixes:

  • reverse the order of serialization by recursing during the linked-list traversal
  • modify how the linked list is constructed (append instead of prepend)
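The round-trip behaviour and both fixes are easy to demonstrate in miniature (a language-agnostic sketch written in JavaScript; the actual code is C):

```javascript
// Prepending on parse and then traversing in sequence on save reverses
// the category order on every round-trip; recursing to the end of the
// list before emitting preserves the original order.
function prepend(list, v) { return { v, next: list }; }
function traverse(list) { return list ? [list.v, ...traverse(list.next)] : []; }
function traverseReversed(list) { return list ? [...traverseReversed(list.next), list.v] : []; }

let demands = null;
for (const c of ['cat1', 'cat2']) demands = prepend(demands, c);
// traverse(demands)         → ['cat2', 'cat1']  (order flipped, as in the bug)
// traverseReversed(demands) → ['cat1', 'cat2']  (original order kept)
```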

Updated 09/04/2018 14:59 3 Comments

Better error message when not initialized

  • Version: js-ipfs version: 0.27.7
  • Platform: Linux frea 4.14.0-2-amd64 #1 SMP Debian 4.14.7-1 (2017-12-22) x86_64 GNU/Linux
  • Subsystem: CLI






jsipfs block put errors with an unhandled exception when IPFS wasn’t previously initialized. Instead it should print a nice error saying that no repository was initialized and that calling jsipfs init first might make sense. Example:

$ jsipfs block put
    this._repo.blocks.put(block, callback)

TypeError: Cannot read property 'put' of undefined
    at BlockService.put (/home/vmx/src/pl/js-ipfs/node_modules/ipfs-block-service/src/index.js:64:23)
    at waterfall (/home/vmx/src/pl/js-ipfs/src/core/components/block.js:51:43)
    at nextTask (/home/vmx/src/pl/js-ipfs/node_modules/async/waterfall.js:16:14)
    at next (/home/vmx/src/pl/js-ipfs/node_modules/async/waterfall.js:23:9)
    at /home/vmx/src/pl/js-ipfs/node_modules/async/internal/onlyOnce.js:12:16
    at waterfall (/home/vmx/src/pl/js-ipfs/src/core/components/block.js:31:20)
    at nextTask (/home/vmx/src/pl/js-ipfs/node_modules/async/waterfall.js:16:14)
    at exports.default (/home/vmx/src/pl/js-ipfs/node_modules/async/waterfall.js:26:5)
    at Function.put.promisify (/home/vmx/src/pl/js-ipfs/src/core/components/block.js:28:7)
    at Object.put (/home/vmx/src/pl/js-ipfs/node_modules/promisify-es6/index.js:32:27)

Steps to reproduce the error:

Make sure you don’t have a repository (i.e. rm -Rf ~/.jsipfs). Then run:

jsipfs block put <some-file>
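A hedged sketch of the kind of guard being asked for (the function and message are illustrative, not the actual js-ipfs internals): check the repo before touching this._repo.blocks and fail with an actionable message:

```javascript
// Illustrative guard only: fail early with a helpful message instead of
// "Cannot read property 'put' of undefined".
function assertRepoInitialized(repo) {
  if (!repo || !repo.blocks) {
    throw new Error("No initialized IPFS repo found. Please run 'jsipfs init' first.");
  }
}
```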
Updated 19/04/2018 14:16 6 Comments

Display a Landing Page on New Installs / Big Updates


As @diasdavid noted:

Also, @lidel pretty soon you can start recommending users to install an IPFS node through station so that they can use ipfs-companion with go-ipfs.

Exciting times! In parallel to js-ipfs effort, good old go-ipfs will become a lot more user-friendly thanks to ipfs-station’s refresh happening there right now.

While we could and probably will recommend it in store listing and on preferences screen, there is an opportunity here for better communication between extension and its users.

(I’m parking this issue here so we have a game plan when Station is ready)

Here’s an idea: display a landing page to greet new users

tl;dr: Detect a new install/upgrade via the browser.runtime.onInstalled API and OnInstalledReason, and display a landing page when certain conditions are met.

New Installs

  • Open a new tab with byte-size information and links about the distributed web. It is crucial to keep this short, sweet and engaging. Less is more.
  • Detect if they are already running IPFS API on default port and preconfigure extension (read and set local HTTP gateway address, like suggested in #309) to work out-of-the-box 🚀 ✨
  • If IPFS API is offline, inform the user that extension requires a working node to be fully functional and kindly suggest one-click installation of ipfs-station with go-ipfs (think “big green button that says install IPFS Station, and there is a yarn-like kitten asking user to click on it” 🙃 )


Updates

  • While we have the landing page logic in place, we could inform users about new features in a just-installed update (think “animated GIF demo of a new feature”).
  • Much lower priority than “on-new-install”. This should be non-invasive, so we probably want to do it manually, only on MAJOR and MINOR updates, and only when a new feature is worth taking the user’s time.
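The detection logic from the tl;dr can be sketched as a small pure function (the version-comparison rule is illustrative; in the extension, browser.runtime.onInstalled would supply the reason and previousVersion):

```javascript
// Illustrative sketch: show the landing page on new installs, and on
// updates only when the MAJOR or MINOR version actually changed.
function shouldShowLandingPage(reason, previousVersion, currentVersion) {
  if (reason === 'install') return true;
  if (reason !== 'update') return false;
  const [pMaj, pMin] = previousVersion.split('.').map(Number);
  const [cMaj, cMin] = currentVersion.split('.').map(Number);
  return cMaj > pMaj || (cMaj === pMaj && cMin > pMin);
}
```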
Updated 07/03/2018 18:10 9 Comments

Audit all commands for goroutine leaks


We’re leaking goroutines on canceled commands all over the place.

To do this audit, I recommend looking at all channel writes and I mean all.

A quick grep indicates that the following files may have issues (should, at least, be looked at):

  • [x] core/commands/dht.go – #4413
  • [x] core/commands/repo.go – #4413
  • [ ] core/commands/add.go
  • [ ] core/commands/dag/dag.go
  • [ ] core/commands/files/files.go
  • [ ] core/commands/commands.go
  • [ ] core/commands/filestore.go
  • [ ] core/commands/object/object.go
  • [ ] core/commands/pin.go
  • [ ] core/commands/ping.go
  • [ ] core/commands/unixfs/unixfs.go
  • [ ] core/commands/refs.go

Basically, any time we write to a channel that may block (i.e., doesn’t have a buffer at least as large as what we’re writing), we should select on the channel write and <-req.Context().Done().

Updated 14/05/2018 18:42 6 Comments

How to handle packaged modules with webpack


I’m opening this in here as this seems to be how a lot of ipfs modules are being packaged with this tool (I personally had this experience with the js-libp2p module). When using it on the client side with webpack there are basically three possibilities:

  1. Just do require('js-libp2p'). But this assumes that the source code has to be transpiled by webpack, even for this module. As exclude: /node_modules/ is a fairly popular (and sensible) setting in a lot of webpack configs, this module would have to be explicitly included. That assumes a certain configuration (basically the same as the config used to build this module). But why would I want to transpile it again when there is already a dist available? Which brings us to option 2.

  2. require('js-libp2p/dist'), which sadly doesn’t work, as the dist doesn’t export the generated var but just puts it into the global context (which doesn’t work when compiled with webpack).

  3. Include the file as a script tag and put it on the window. This might be great for experimenting or debugging but isn’t really great in big production apps.

So my suggestion would be to export the dist module wrapped in a UMD wrapper. This would not break anything (it’ll still be on the window when imported via a script tag) but would work with CommonJS and AMD module loaders. I’d happily provide a PR to integrate that.

With webpack it’s just a matter of setting the libraryTarget to the desired value here.
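For reference, a sketch of what that could look like in the module’s build config (the filename and global name below are assumptions, not the repo’s actual config):

```javascript
// webpack.config.js sketch: emit the dist bundle with a UMD wrapper so
// it keeps working from a <script> tag while also supporting CommonJS
// and AMD consumers.
const config = {
  output: {
    filename: 'index.min.js',
    library: 'Libp2p',       // global name for <script> consumers (illustrative)
    libraryTarget: 'umd',    // the libraryTarget setting mentioned above
  },
};

module.exports = config;
```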

Updated 10/04/2018 08:28 3 Comments

Replace mutt_mktemp() with mkstemp()


Ultimately all the ways in mutt to create a temporary file come down to mutt_mktemp_full() in muttlib.c. The varying part of the name that function creates, not counting totally predictable things like the PID, is a 64 bit random number. That is less randomness than I have in freaking shell scripts, by using the mktemp(1) program with a suffix of 12 Xs (roughly 6*12 = 72 bits), as the Turtle Book [1] recommends. Shouldn’t we use the libc interfaces for making temporary files, which are similar to mktemp(1) and thus allow for more randomness?

[1] Classic Shell Scripting by Robbins & Beebe, O'Reilly Media 2005. See pages 274-278.

Updated 26/03/2018 14:24 16 Comments

Tell users how to use `sonarwhal` when they try to scan `localhost`




There are a few errors in the database from people trying to scan localhost. We should tell them to use the CLI version instead.

Updated 09/05/2018 23:20 8 Comments

Docker image prone to ipfs daemon to become zombie process


I ran the ipfs/ipfs-cluster:v0.1.0 image for some time; the ipfs process was memory hungry and eventually got killed (by the OOM killer, I suppose), and is now a zombie:

      PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
        1     0 ipfs     S    34932   0%   3   0% ipfs-cluster-service --loglevel debug --debug
      624     0 root     S     1592   0%   1   0% sh
      630   624 root     R     1524   0%   3   0% top
       19     1 ipfs     Z        0   0%   2   0% [ipfs]

The way to avoid such behavior is to use supervisord or something like that, **and** add an option to run ONLY `ipfs-cluster-service`, via an `IPFS_API` variable for example (if it’s set, sed it into `service.json` and do not fork `ipfs daemon`).

Updated 08/03/2018 18:10 4 Comments

Sidebar: display multiple (but not all) path components.


I have mailboxes in a hierarchy on my IMAP server, e.g. (not the full list):

    lists
    lists/neomutt lists/neomutt/devel lists/neomutt/users
    lists/noaa lists/noaa/changes
    lists/waze lists/waze/beta
    lists/exim lists/exim/users lists/exim/announce lists/exim/dev
    lists/mailman lists/mailman/announce lists/mailman/users
    lists/openssl lists/openssl/announce lists/openssl/users
    lists/mutt lists/mutt/users
    lists/cone
    lists/lopsa-us-tx-austin
    lists/ietf lists/ietf/sieve
    lists/pgsql lists/pgsql/novice lists/pgsql/docs lists/pgsql/announce lists/pgsql/bugs lists/pgsql/hackers lists/pgsql/general lists/pgsql/committers lists/pgsql/sql lists/pgsql/advocacy lists/pgsql/jdbc lists/pgsql/odbc lists/pgsql/www lists/pgsql/jobs
    lists/SprintBuzz
    lists/spi lists/spi/private lists/spi/general
    lists/SpamAssassin lists/SpamAssassin/users lists/SpamAssassin/commits
    lists/pfsense lists/pfsense/users lists/pfsense/security
    lists/isoc lists/isoc/pubsoft lists/isoc/memberpubpol
    lists/surbl lists/surbl/discuss
    lists/php lists/php/announce
    lists/freebsd lists/freebsd/ports-developers lists/freebsd/doc-svn lists/freebsd/developers lists/freebsd/ports-commiters lists/freebsd/jobs lists/freebsd/rc lists/freebsd/mobile lists/freebsd/x11 lists/freebsd/ports-bugs lists/freebsd/amd64 lists/freebsd/advocacy lists/freebsd/virtualization lists/freebsd/ports-announce lists/freebsd/wireless lists/freebsd/arch lists/freebsd/scsi lists/freebsd/arm lists/freebsd/net lists/freebsd/fs lists/freebsd/proliant lists/freebsd/stable lists/freebsd/cvs-all lists/freebsd/announce lists/freebsd/current lists/freebsd/security lists/freebsd/java lists/freebsd/usb lists/freebsd/cloud lists/freebsd/ports lists/freebsd/acpi lists/freebsd/hackers

I have the sidebar set to:

    set sidebar_visible=yes
    set sidebar_sort_method=alpha
    set sidebar_width=30
    set sidebar_folder_indent=yes
    set sidebar_short_path=yes
    set sidebar_indent_string=".."
    set sidebar_new_mail_only
    sidebar_whitelist INBOX
    mono sidebar_highlight reverse
    color sidebar_highlight yellow red
    color progress white red

    # ctrl-n, ctrl-p to select next, prev folder
    # ctrl-o to open selected folder
    bind index \CP sidebar-prev
    bind index \CN sidebar-next
    bind index \CO sidebar-open
    bind index \CU sidebar-open

    # b toggles sidebar visibility
    macro index b '<enter-command>toggle sidebar_visible<enter>'
    macro pager b '<enter-command>toggle sidebar_visible<enter>'

When the sidebar displays, it shows only the last component of the path (users, for example).

I’m wondering if there is some way to get it to display the last 2 components.

Updated 13/05/2018 20:11 10 Comments

Make `strict-transport-security` rule take into consideration TLD-level HSTS



… The use of TLD-level HSTS allows such namespaces to be secure by default. Registrants receive guaranteed protection for themselves and their users simply by choosing a secure TLD for their website and configuring an SSL certificate, without having to add individual domains or subdomains to the HSTS preload list. Moreover, since it typically takes months between adding a domain name to the list and browser upgrades reaching a majority of users, using an already-secured TLD provides immediate protection rather than eventual protection. Adding an entire TLD to the HSTS preload list is also more efficient, as it secures all domains under that TLD without the overhead of having to include all those domains individually.

We hope to make some of these secure TLDs available for registration soon, and would like to see TLD-wide HSTS become the security standard for new TLDs.

Updated 22/03/2018 21:07 2 Comments

Keyword for link status in report file is different than the documentation


The EPANET user manual states that the [REPORT] section may include a Position keyword in order to get the link status added to the report file (page 162; also in the Wiki). However, this keyword is not supported in the code; the State keyword should be used instead.

I think we should keep the code as is and change the documentation accordingly.

Updated 19/03/2018 14:46

Harden http.Serve() with sensible Timeouts



Version information: master


Type: Enhancement


Priority: P4


Currently we do:

But it is better to initialize a Server object and set sensible timeouts. Example:

    srv := &http.Server{
        ReadTimeout:  5 * time.Second,
        WriteTimeout: 10 * time.Second,
        IdleTimeout:  120 * time.Second,
        Handler:      serveMux,
    }

(IdleTimeout is only in Go 1.8.)


Updated 15/05/2018 18:31 4 Comments

README: what it is, what it does


Since this module is going to be used so widely across IPFS, I expect many current & prospective contributors to be funneled to this repo at one point or another. As such, I’d be really interested in seeing the README be very clear about

  1. what is dignified.js? and
  2. what problems does it solve?

Right now it does a very good job of explaining its subcommands and setting it up, but having Background and Description sections that explain a bit more could save @dignifiedquire the work of explaining things many more times via IRC or elsewhere.

Updated 06/03/2018 11:12 5 Comments

Move the globe to its own page and repo


The globe is nice, but it impacts battery life on laptops very badly, so I suggest extracting it into its own repo and publishing it on its own. This way it’s still usable, but not everybody is forced to use it when they open the webui.

@jbenet we talked about this on the apps hangout so please let us know if you are okay with this.

Updated 17/03/2018 16:11 7 Comments

sharness test command coverage


Making this issue to track all the CLI commands that need to be tested, and the conditions under which they should be tested.

Module Online Test Offline Test
object t0051 t0051
ls t0045 t0045
cat t0040 t0040
dht t0170 N/A
bitswap t0220 N/A
block t0050
daemon t0030 N/A
init N/A t0020
add t0040 t0040
config t0021 t0021
version t0010
ping N/A
diag t0065, (missing: t0151) t0151
mount t0030 N/A
name t0110 t0100
pin t0080,t0081,t0085
get t0090
refs t0080
repo gc t0080
id t0010
bootstrap t0120 t0120
swarm t0140 N/A
commands t0010

Critically missing:

  • [ ] ipfs ping

Most of the missing tests are just cases where we test online or offline but not both.

Updated 19/03/2018 23:55 8 Comments

The `dag` and `files` distinction


One complicating piece for the user is the distinction between ipfs dag and ipfs unix. Right now ipfs {add, cat, ls} conflate the two. And there may be a future where different datastructures can be mounted to expose things differently.

Overall, we need to go through the codebase and:

  • [ ] clearly distinguish merkledag and unixfs uses
  • [ ] when we make the tour, clearly talk about this
  • [ ] optimize flow with usability in mind, but not at the cost of expressivity (what you can do) or clarity (is it easy to understand what is going on)

NOTE: the following is speculative. These mount examples are a lot of work and not in our near future. They’re just an example of how the different models work.

note well what each of these commands does!!

    > ipfs block mount blocks/
    > cat blocks/QmNtpA5TBNqHrKf3cLQ1AiUKXiE4JmUodbG5gXrajg8wdv
    " Sfl ∂ÎJ8J_gÃH5è"p·¥E}ÜBR–˶™–bar+
    " Ä€úU3:?∂,∫€X3T∑ùrtÌ(Fl|≠˝üõbazr+
    " ïôbz¢≥UglàÃ∂€`fl˘d8-ÚàÊ°”o°É©foo

    # like `ipfs object get`
    > ipfs dag mount dag/
    > ls dag/QmNtpA5TBNqHrKf3cLQ1AiUKXiE4JmUodbG5gXrajg8wdv
    > cat dag/QmNtpA5TBNqHrKf3cLQ1AiUKXiE4JmUodbG5gXrajg8wdv/.ipfs-object
    {
      "Links": [
        {
          "Name": "bar",
          "Hash": "QmTz3oc4gdpRMKP2sdGUPZTAGRngqjsi99BPoztyP53JMM",
          "Size": 12
        },
        {
          "Name": "baz",
          "Hash": "QmX1ebVUtfY11ZCpVmqyE5mDoN62SpLd8eLPpg5GGV1ABt",
          "Size": 114
        },
        {
          "Name": "foo",
          "Hash": "QmYNmQKp6SuaVrpgWRsPTgCQCnpxUYGq76YEKBXuj2N4H6",
          "Size": 12
        }
      ],
      "Data": "CAE="
    }
    > cat dag/QmNtpA5TBNqHrKf3cLQ1AiUKXiE4JmUodbG5gXrajg8wdv/.ipfs-data

    # like `ipfs {cat, ls}` currently work
    > ipfs unix mount unix/
    > cat unix/QmNtpA5TBNqHrKf3cLQ1AiUKXiE4JmUodbG5gXrajg8wdv
    > ls unix/QmNtpA5TBNqHrKf3cLQ1AiUKXiE4JmUodbG5gXrajg8wdv

    > ipfs-bitcoin mount bitcoin/
    > ls bitcoin/
    > ls bitcoin/HEAD
    > ls bitcoin/HEAD/transaction/0/inputs
    > ls bitcoin/HEAD/transaction/0/outputs
    > cat bitcoin/HEAD/transaction/0/full
    input 1LaxoTrQy51LnB289VmoSAgN6J6UrJbfL9
    output 1QFbx3bKA8LABAGEaSe7EiP9JCxe2j4fN7
    output 1taxinvBLwDAb1tjyTYzhcyb1fNKfivAB
    output 1FTgzPJCbpCWYfF6VxPdmCMPUDBfygut2h
    output 1DffZcvcQue3gn15G9tTshu2Y9YkzEZE2f
    output 1HCFDzKx94uwtnvjJEXBZP12dseiNDK6if
Updated 29/03/2018 23:13 10 Comments
