Contribute to Open Source. Search issue labels to find the right project for you!

Running cluster with the wrong configuration path gives misleading error

[16:22:47] ~ $ ipfs-cluster-service -c c2
16:23:09.941 ERROR    service: could not obtain execution lock.  If no other process
is running, remove /home/hector/c2/cluster.lock, or make sure that the config folder is
writable for the user running ipfs-cluster.  Run with -d for more information
about the error lock.go:37
error acquiring execution lock: could not obtain execution lock

The locking attempt should probably say something like “folder does not exist” and ask the user to check that the cluster is correctly initialized in the given location.

Updated 13/03/2018 09:40 2 Comments

Using a multiaddr with DNS fails to connect to IPFS node



Below is the code I’m using to connect to the Infura IPFS node. From what I’ve read this is an appropriate use of multiaddr, but perhaps I’m wrong and would appreciate understanding how to correctly turn this into a multiaddr.

var ipfsAPI = require('ipfs-api');
var Multiaddr = require('multiaddr');

var maString = '/dns4/'; // Is this equivalent to
var ma = Multiaddr(maString);
if (!Multiaddr.isMultiaddr(ma)) {
  console.log('Not a valid multiaddr');
}

// or connect with multiaddr
var ipfs = ipfsAPI(maString);

var data = Buffer.from('Multiaddrs are cool.', 'utf-8');
ipfs.files.add(data, {}, console.log);

js-multiaddr seems to support dns4 and https but maybe I’ve misunderstood.

Thanks for any help guys!


Updated 16/03/2018 11:08 2 Comments

ipfs-cluster-ctl pin/add pin/rm should have a --wait option


Just like we provide a --no-status option, there should be a --wait option which doesn’t return until the pin has reached PINNED status or has been fully unpinned.

It should do this by regularly querying status <cid> on the pin and waiting for all PINNING to either be PINNED or PIN_ERROR.

Before exiting, it should print the final status.

I think this WaitForPin/Unpin function should be provided as a utility/method in rest/client, and re-used by ipfs-cluster-ctl. Open for proposals, but func client.WaitFor(targetStatus, checkInterval) <-chan Status is one option.

Updated 15/03/2018 00:29 4 Comments



@vmx @dryajov ^^

@ya7ya the correct solution is to do an actual check of the components. It just happens that can’t be trusted thanks to asset tooling like uglify.

Updated 09/03/2018 00:47

Arrays in HTTP API responses are not sorted


Somewhat annoying but minor issue.

When making API requests to ipfs-cluster, it seems that the response sometimes changes the order of the arrays. So the first request returns {"peers": ["A", "B"]} and the second one could return {"peers": ["B", "A"]} randomly. This makes responses harder to cache, since the ordering in arrays is important.

Sorting the arrays before letting the HTTP endpoint respond would solve this issue.
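A minimal sketch of the suggested fix. The struct and field names here are illustrative, not the actual ipfs-cluster types; the point is simply to sort each slice before encoding, so repeated requests are byte-identical and cacheable:

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

type idResponse struct {
	Addresses []string `json:"addresses"`
}

// sortedResponse sorts the slice before marshaling, giving a deterministic
// body regardless of internal map/set iteration order.
func sortedResponse(addrs []string) []byte {
	sort.Strings(addrs)
	out, _ := json.Marshal(idResponse{Addresses: addrs})
	return out
}

func main() {
	fmt.Println(string(sortedResponse([]string{"B", "A"})))
	fmt.Println(string(sortedResponse([]string{"A", "B"})))
	// Both calls print {"addresses":["A","B"]}.
}
```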

The endpoints I’ve found so far not properly sorting before responding:

Endpoint       Attribute(s) not sorted
/id            addresses, cluster_peers_addresses and ipfs.addresses
/allocations   response is an unsorted array
/peers         each peer has the same response as /id, so needs sorting on the same attributes

I’m sure I missed others, but I’ve not used all API endpoints.

Updated 05/03/2018 16:35 1 Comments

throws error: multihash length inconsistent

  • Version: 0.28.0
  • Platform: Linux 4.13.0-36-generic #40-Ubuntu SMP Fri Feb 16 20:07:48 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Subsystem: Node.js v6.11.5

Type: Bug

Severity: Medium


Not sure if I’m using the API correctly, but this is what I do:

  1. Create a buffer from string content.
  2. Add a file using the buffer to an IPFS node.
  3. Create a CID from the buffer content (with version==1).
  4. Fetch the content using the API.

It throws the following exception:

Error: multihash length inconsistent: 0x122017babbbb5ec8d6ec709a5fd1b559d0126cf36486145f84f9c39260ae0d87ab7e
    at Object.decode (/home/harry/js-ipfs/node_modules/multihashes/src/index.js:99:11)
    at Object.validate (/home/harry/js-ipfs/node_modules/multihashes/src/index.js:210:11)
    at Function.validateCID (/home/harry/js-ipfs/node_modules/cids/src/index.js:254:8)
    at new CID (/home/harry/js-ipfs/node_modules/cids/src/index.js:104:9)
    at pathBaseAndRest (/home/harry/js-ipfs/node_modules/ipfs-unixfs-engine/src/exporter/index.js:30:15)
    at module.exports.err (/home/harry/js-ipfs/node_modules/ipfs-unixfs-engine/src/exporter/index.js:47:13)
    at _catPullStream (/home/harry/js-ipfs/src/core/components/files.js:142:7)
    at (/home/harry/js-ipfs/src/core/components/files.js:234:9)
    at (/home/harry/js-ipfs/node_modules/promisify-es6/index.js:32:27)
    at Timeout.setTimeout (/home/harry/js-ipfs/test/core/files-cat.spec.js:82:18)
    at ontimeout (timers.js:386:11)
    at tryOnTimeout (timers.js:250:5)
    at Timer.listOnTimeout (timers.js:214:5)

Steps to reproduce the error:

Code snippets:

  // assumed imports for this snippet
  const CID = require('cids');
  const multihashing = require('multihashing');

  let buffer = Buffer.from(content);
  let mh = multihashing(buffer, 'sha2-256');
  let cid = new CID(1, 'dag-cbor', mh);
  ipfs.files.add(buffer, {}, (err, filesAdded) => {
    path = filesAdded[0].path;
    console.log("Created: path = " + path);
    // ..., (err, data) => {
  });
Updated 17/03/2018 04:01 5 Comments

multicodec needs to be validated


Accessing the cid.buffer property throws the following exception when version==1:

 Uncaught TypeError: First argument must be a string, Buffer, ArrayBuffer, Array, or array-like object.
  at fromObject (buffer.js:262:9)
  at Function.Buffer.from (buffer.js:101:10)
  at CID.get buffer [as buffer] (/home/harry/.../js-ipfs/node_modules/cids/src/index.js:122:18)

How to reproduce

  • Create a CID object with version==1.
  • Call cid.buffer.
Updated 19/03/2018 17:55 3 Comments

Update all wiki pages, diagrams and READMEs


Most of our docs are out of date.

In case anyone would like to assist here: feel free to poke the ‘responsible’ people to find out how things work now! These people are myself and @ajuvercr for the backend, and @wschella and @Robbe7730 for the client. While this might not be an ideal starting issue, it might be very good to get an understanding of MOZAIC as a whole, and all help would be greatly appreciated.

Updated 26/02/2018 08:12 2 Comments

Aegir should skip empty commits


The “chore: update contributors” commit often ends up empty if there have been no new contributors. Instead of creating an empty commit, aegir should simply skip it.


Updated 19/02/2018 11:45

ENopen -> ENsaveinpfile reverses per-junction categories listed in [DEMANDS]


Load a model file (which has at least one junction with two demand categories). Save it using ENsaveinpfile. The demand categories for multi-category junctions will be reversed in order, compared to the original file. Repeated parsing/saving will reverse the order again.

This appears to be caused by the difference between how the categories are parsed (Demand linked-list is prepended with successive categories) and how they are written back to the inp file (list is simply traversed in sequence).

Why is this a problem? Because like it or not, the inp file format is eminently hackable - and in so hacking, it is useful to have a method of “normalizing” the format for (e.g.) text-based version control. One would expect that an arbitrary number of open/save operations would be essentially a no-op.

Possible fixes:

  • reverse the order of serialization by functional recursion of the linked-list traversal
  • modify how the linked-list is constructed (change to appending instead of prepending)
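A tiny illustration of why the order flips (sketched in Go, not EPANET's actual C structures): building a singly linked list by prepending reverses insertion order, while appending preserves it.

```go
package main

import "fmt"

type demand struct {
	category string
	next     *demand
}

// prepend pushes the new category in front of the head, as the inp parser does.
func prepend(head *demand, cat string) *demand {
	return &demand{category: cat, next: head}
}

// appendDemand walks to the tail and attaches the new category there.
func appendDemand(head *demand, cat string) *demand {
	n := &demand{category: cat}
	if head == nil {
		return n
	}
	cur := head
	for cur.next != nil {
		cur = cur.next
	}
	cur.next = n
	return head
}

// categories traverses the list in sequence, as the inp writer does.
func categories(head *demand) []string {
	var out []string
	for ; head != nil; head = head.next {
		out = append(out, head.category)
	}
	return out
}

func main() {
	var pre, app *demand
	for _, c := range []string{"base", "peak"} {
		pre = prepend(pre, c)
		app = appendDemand(app, c)
	}
	fmt.Println(categories(pre)) // [peak base] - reversed on every round-trip
	fmt.Println(categories(app)) // [base peak] - insertion order preserved
}
```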

Updated 19/03/2018 14:40 2 Comments

"vagrant up --no-provision" fails in openstack f27 template


vagrant up --no-provision fails with the error below in template Fedora-Cloud-Base-27-1.6 in OpenStack. Error while activating network: Call to virNetworkCreate failed: failed to run '/usr/sbin/dnsmasq --version': : No such file or directory

Full log:

Updated 08/02/2018 18:52 1 Comments

Recursive pinning can lose existing direct pin



Version information: 0.4.14-dev


Type: bug




This bit looks like it’ll lose the existing direct pin if the attempt to add the recursive pin fails. The direct pin will have been removed, and the expected recursive pin won’t have succeeded:

directPin.Remove() should probably happen after recursePin.Add()

Updated 26/02/2018 17:42 6 Comments

Support variable substitution in notmuch URIs


I want to set my virtual mailboxes based on the account folder variable I set for each account in neomutt. Something like below:

# Account 1
set realname="My Real Name"
set folder="/home/user/Mail/account1"

virtual-mailboxes Inbox "notmuch://?query=path:$folder/** and tag:inbox"

As you can see I am trying to get the virtual mailbox to use the path filtering based on the $folder configuration variable. Unfortunately it looks like variables are not being substituted in this case.

Updated 30/01/2018 12:21

Ugly formatting for Angular unit tests


Prettier 1.10.2, Playground link

```sh
--arrow-parens
```

Input:

```jsx
import { TestBed, async } from '@angular/core/testing';
import { AppComponent } from './app.component';

describe('AppComponent', () => {
  beforeEach(async(() => {
    TestBed.configureTestingModule({ declarations: [AppComponent] }).compileComponents();
  }));
  it('should create the app', async(() => {
    const fixture = TestBed.createComponent(AppComponent);
    const app = fixture.debugElement.componentInstance;
    expect(app).toBeTruthy();
  }));
  it(`should have as title 'app'`, async(() => {
    const fixture = TestBed.createComponent(AppComponent);
    const app = fixture.debugElement.componentInstance;
    expect(app.title).toEqual('My First Angular App');
  }));
  it("should render title in a h1 tag", async(() => {
    const fixture = TestBed.createComponent(AppComponent);
    fixture.detectChanges();
    const compiled = fixture.debugElement.nativeElement;
    expect(compiled.querySelector('h1').textContent).toContain('My First Angular App');
  }));
});
```

Output:

```jsx
import { TestBed, async } from "@angular/core/testing";
import { AppComponent } from "./app.component";

describe("AppComponent", () => {
  beforeEach(
    async(() => {
      TestBed.configureTestingModule({ declarations: [AppComponent] }).compileComponents();
    })
  );
  it(
    "should create the app",
    async(() => {
      const fixture = TestBed.createComponent(AppComponent);
      const app = fixture.debugElement.componentInstance;
      expect(app).toBeTruthy();
    })
  );
  it(
    `should have as title 'app'`,
    async(() => {
      const fixture = TestBed.createComponent(AppComponent);
      const app = fixture.debugElement.componentInstance;
      expect(app.title).toEqual("My First Angular App");
    })
  );
  it(
    "should render title in a h1 tag",
    async(() => {
      const fixture = TestBed.createComponent(AppComponent);
      fixture.detectChanges();
      const compiled = fixture.debugElement.nativeElement;
      expect(compiled.querySelector("h1").textContent).toContain(
        "My First Angular App"
      );
    })
  );
});
```


Expected behavior: I would expect functions wrapped in Angular’s async wrapper to be formatted the same way that native async arrow functions are in unit tests.

In this case, the test names and body should be on one line.

Updated 09/03/2018 14:18 2 Comments

Better error message when not initialized

  • Version: js-ipfs version: 0.27.7
  • Platform: Linux frea 4.14.0-2-amd64 #1 SMP Debian 4.14.7-1 (2017-12-22) x86_64 GNU/Linux
  • Subsystem: CLI

jsipfs block put errors with an unhandled exception when IPFS wasn’t previously initialized. Instead, it should print a nice error explaining that no repository was initialized and that running jsipfs init first might make sense. Example:

$ jsipfs block put
    this._repo.blocks.put(block, callback)

TypeError: Cannot read property 'put' of undefined
    at BlockService.put (/home/vmx/src/pl/js-ipfs/node_modules/ipfs-block-service/src/index.js:64:23)
    at waterfall (/home/vmx/src/pl/js-ipfs/src/core/components/block.js:51:43)
    at nextTask (/home/vmx/src/pl/js-ipfs/node_modules/async/waterfall.js:16:14)
    at next (/home/vmx/src/pl/js-ipfs/node_modules/async/waterfall.js:23:9)
    at /home/vmx/src/pl/js-ipfs/node_modules/async/internal/onlyOnce.js:12:16
    at waterfall (/home/vmx/src/pl/js-ipfs/src/core/components/block.js:31:20)
    at nextTask (/home/vmx/src/pl/js-ipfs/node_modules/async/waterfall.js:16:14)
    at exports.default (/home/vmx/src/pl/js-ipfs/node_modules/async/waterfall.js:26:5)
    at Function.put.promisify (/home/vmx/src/pl/js-ipfs/src/core/components/block.js:28:7)
    at Object.put (/home/vmx/src/pl/js-ipfs/node_modules/promisify-es6/index.js:32:27)

Steps to reproduce the error:

Make sure you don’t have a repository (i.e. rm -Rf ~/.jsipfs). Then run:

jsipfs block put <some-file>
Updated 21/03/2018 15:32 6 Comments

LONG_STRING too short for Lua interface


Calling, arg) uses an unreasonably short buffer

  • Expected behavior I’m trying to use the new Lua interface to generate a list of virtual mailboxes. I generate the arguments for virtual-mailboxes and then call the command once with a long string.

  • Actual behavior This works, but without any warning or message it only creates about half of the mailboxes. This is due to the limited buffer length for in Lua:

LONG_STRING defined here:

  • Steps to reproduce E.g. run the following Lua function:

```lua
function virtual_mailboxes_first()
  local maildir = mutt.get("folder")
  local subdirs = io.popen("find " .. maildir .. "/ -type d -name cur -printf '%h\n'")
  local maildirs = {}
  for subdir in subdirs:lines() do
    local name = subdir:sub(maildir:len() + 2)
    table.insert(maildirs, name)
  end
  subdirs:close()

  function custom_sort(a, b)
    local preffered = {"Inbox", "Flagged", "Drafts", "Sent", "Bin", "Spam", "Archive"}
    local tmp = {}
    for index, value in ipairs(preffered) do tmp[value] = index end
    preffered = tmp

    if preffered[a] then
      if preffered[b] then
        return preffered[a] < preffered[b]
      end
      return true
    end
    if preffered[b] then
      return false
    end
    return a < b
  end

  table.sort(maildirs, custom_sort)

  local result = ""
  for _, value in ipairs(maildirs) do
    result = result .. string.format('"%s" "notmuch://?query=folder:\\"%s\\"" \\\n', value, value)
  end
  print(result:len())"virtual-mailboxes", result)
  mutt.set("sidebar_sort_method", "unsorted")
end
```

The argument is in my case 3742 characters long.

  • Used program versions NeoMutt 20171215

  • Operating System and its version System: Linux 4.14.0-pf9 (x86_64) ncurses: ncurses 6.0.20170902 (compiled with 6.0) libidn: 1.33 (compiled with 1.33) hcache backends: gdbm

  • Suggestions

    • increase buffer size
    • output a warning or error message if the argument is too long

Will gladly submit a patch if we can agree on a buffer length 😃

PS. yes I am aware that I can call virtual-mailboxes inside the loop.

Updated 31/01/2018 15:22 2 Comments

[browser] <change-dir>.. Resolve Parent Directory



Using <change-dir>.. does not resolve the parent directory. Example: it uses /some/folder/../../another/folder instead of /another/folder.

It is not user friendly.

It might cause adverse behavior. (?)

Steps to reproduce

  • First get into the browser.
    • Here is one way to get into the browser. Open neomutt. From a folder with >0 messages, press ‘s’ to invoke <save-message> and then press ‘?’ to enter the browser.
    • Another way is by attaching a file from the compose menu. Open neomutt. Use the default shortcut ‘m’ to compose a mail. After exiting your editor, neomutt will be in the compose menu. Use the default shortcut ‘a’ to attach a file, then press ‘?’ to enter the browser.

From the browser, use the default shortcut ‘c’ to invoke <change-dir>. Then, press ..<enter> to access the parent directory. The browser updates to the parent directory, but the directory used by <change-dir> is not resolved.

  • Example (cwd: /some/folder)

With default shortcuts, you would press c..<enter>. You enter the parent directory successfully, but press ‘c’ again to see that the new path is unresolved.

  • Expected Directory Output /some/
  • Actual Directory Output /some/folder/../

  • Used program versions 20171215

  • Operating System and its version Linux 4.14.8-gentoo-r1+ (x86_64)


macro browser p "<change-dir>..<enter>" "Goto Parent Directory"

Updated 16/02/2018 16:00 6 Comments

Add wrap-with-directory support (HTTP API and Core)


Related to

Right now, the wrap-with-directory option only works in the CLI; support in the HTTP API and Core is missing.

I already have a working solution that uses the wrap option in the js-ipfs-unixfs-engine Importer API.

Question: is it necessary to add support for this option in interface-ipfs-core beforehand? (or maybe at least add a tracking issue)

I’m looking forward to opening a PR 👍

Updated 16/03/2018 22:08 2 Comments

Paginate changelog


The changelog page is getting pretty long. We should implement pagination.

Also pondering if we should add a filter/dropdown to show changes from a specific month/year but not sure if that really makes sense at the moment.

I believe we’ve already got the component code for pagination so this should be straightforward.

Updated 18/01/2018 21:42

the example readme of "echo" is still empty



  • Version:
  • Platform:
  • Subsystem:



Severity: Very Low


the example readme of “echo” is still empty

Steps to reproduce the error:


Updated 05/02/2018 16:21 1 Comments

Add User Guide LP and Contributor Guide LP to their respective ToC sidebar navigation


Right now the only way to get back to the User and Contributor guide landing pages is through the main navigation or bread crumbs. We should add it to the sidebar with the rest of the section navigation.

It will be taken care of on mobile/smaller screens as I re-do the mobile experience for the sidebar navigation. We will need to add it to the main/larger screen view, because I’m planning on using a single dropdown in the mobile experience, different from what we currently have.

Updated 02/01/2018 21:53

track character abilities



lich uses infomon.lic and we need a similar system. This will likely exist on the Callbacks system, so that the output is easy to suppress.

Many of these data components exist in <mono> tags

What we need to track:

  • cmans
  • shield mans
  • spells (the initial login only provides the Profession spell list or the first minor)
  • stats
  • society

Updated 08/01/2018 18:12 1 Comments

Display a Landing Page on New Installs / Big Updates


As @diasdavid noted in

Also, @lidel pretty soon you can start recommending users to install an IPFS node through station so that they can use ipfs-companion with go-ipfs.

Exciting times! In parallel to js-ipfs effort, good old go-ipfs will become a lot more user-friendly thanks to ipfs-station’s refresh happening there right now.

While we could and probably will recommend it in store listing and on preferences screen, there is an opportunity here for better communication between extension and its users.

(I’m parking this issue here so we have a game plan when Station is ready)

Here’s an idea: display a landing page to greet new users

tl;dr Detect new install/upgrade via browser.runtime.onInstalled API and OnInstalledReason and display a landing page when certain conditions are met.

New Installs

  • Open a new tab with byte-size information and links about distributed web It is crucial to keep this short, sweet and engaging. Less is more.
  • Detect if they are already running IPFS API on default port and preconfigure extension (read and set local HTTP gateway address, like suggested in #309) to work out-of-the-box 🚀 ✨
  • If IPFS API is offline, inform the user that extension requires a working node to be fully functional and kindly suggest one-click installation of ipfs-station with go-ipfs (think “big green button that says install IPFS Station, and there is a yarn-like kitten asking user to click on it” 🙃 )


  • While we have landing page logic in place, we could inform users about new features in a just-installed update (think “animated GIF demo of a new feature”)
  • Much lower priority than “on-new-install”. This should be non-invasive, so we probably want to do it manually, only on MAJOR and MINOR updates, and only when a new feature is worth taking the user’s time.
Updated 07/03/2018 18:10 9 Comments

Audit all commands for goroutine leaks


We’re leaking goroutines on canceled commands all over the place.

To do this audit, I recommend looking at all channel writes and I mean all.

A quick grep indicates that the following files may have issues (should, at least, be looked at):

  • [x] core/commands/dht.go – #4413
  • [x] core/commands/repo.go – #4413
  • [ ] core/commands/add.go
  • [ ] core/commands/dag/dag.go
  • [ ] core/commands/files/files.go
  • [ ] core/commands/commands.go
  • [ ] core/commands/filestore.go
  • [ ] core/commands/object/object.go
  • [ ] core/commands/pin.go
  • [ ] core/commands/ping.go
  • [ ] core/commands/unixfs/unixfs.go
  • [ ] core/commands/refs.go

Basically, any time we write to a channel that may block (i.e., doesn’t have a buffer at least as large as what we’re writing), we should select on the channel write and <-req.Context().Done().

Updated 17/01/2018 00:02 3 Comments

How to handle packaged modules with webpack


I’m opening this in here as this seems to be how a lot of ipfs modules are being packaged with this tool (I personally had this experience with the js-libp2p module). When using it on the client side with webpack there’s basically three possibilities:

1) Just do require('js-libp2p'). But this assumes that the source code has to be transpiled by webpack, even for this module. As exclude: /node_modules/ is a fairly popular (and sensible) setting in a lot of webpack configs, this module would have to be explicitly included. That assumes a certain configuration (basically the same as the config used to build this module). But why would I want to transpile it again when there is already a dist available? Which brings us to option 2) require('js-libp2p/dist'), which sadly doesn’t work, as the dist doesn’t export the generated var but just puts it into the global context (which doesn’t work when compiled with webpack). That just leaves us with option 3) including the file as a script tag and putting it on the window. This might be great for experimenting or debugging but isn’t really great in big production apps.

So my suggestion would be to export the dist module wrapped in a UMD wrapper (like this one: This would not break anything (as it’ll still be on the window, when imported in a script tag) but would work with commonjs and AMD module compilers. I’d happily provide a PR to integrate that.

With webpack it’s just a matter of setting the libraryTarget to the desired value here.

Updated 25/01/2018 22:12 2 Comments

Replace mutt_mktemp() with mkstemp()


Ultimately all the ways in mutt to create a temporary file come down to mutt_mktemp_full() in muttlib.c. The varying part of the name that function creates, not counting totally predictable things like the PID, is a 64 bit random number. That is less randomness than I have in freaking shell scripts, by using the mktemp(1) program with a suffix of 12 Xs (roughly 6*12 = 72 bits), as the Turtle Book [1] recommends. Shouldn’t we use the libc interfaces for making temporary files, which are similar to mktemp(1) and thus allow for more randomness?

[1] Classic Shell Scripting by Robbins & Beebe, O'Reilly Media 2005. See pages 274-278.

Updated 16/01/2018 19:45 15 Comments

Faulty 'truncate' option in files_write()


The following code fails:

```python
import io
import ipfsapi

api = ipfsapi.connect('localhost', 5001)

longtext = 'one two three'

res = api.files_write('/file.txt', io.BytesIO(str(longtext)), create=True)
assert api.files_read('/file.txt') == longtext

shorttext = 'this'

res = api.files_write('/file.txt', io.BytesIO(str(shorttext)), truncate=True)
assert api.files_read('/file.txt') == shorttext
```

The second assertion fails with 'yes two three' == 'yes'.

This error seems to be specific to the API implementation, as the command-line equivalent works just fine:

```bash
$ echo 'one two three' | ipfs files write /file.txt --create
$ echo 'yes' | ipfs files write /file.txt --truncate
$ ipfs files read /file.txt
yes
```

Updated 25/01/2018 07:13 2 Comments

Add CocoaPods support


Context 🕵️‍♀️

Support installing the tool with CocoaPods. Other tools like SwiftLint support CocoaPods.

What 🌱

The binary gets installed by CocoaPods and developers can easily use the binary from a project build phase.

Proposal 🎉

Add a .podspec and update the release process to generate the CocoaPods binary.

Updated 31/12/2017 21:54 3 Comments

Remove inline styling from js to accommodate standard CSP policies


This line of code violates standard CSP policies due to the inline style being set:

```js
this.passedElement.setAttribute('style', 'display:none;');
```

What developers can do instead is specify the hiddenState class with one that has a display: none; rule when creating a Choices input, as there is an operation that occurs a few lines before:

```js
// Hide passed input
this.passedElement.classList.add(
  this.config.classNames.input,
  this.config.classNames.hiddenState,
);
```

Is there a case that requires this styling to be set here that I’m failing to consider?

Updated 04/01/2018 14:02 2 Comments

Translate user interface to your preferred language!


Just go to and start translating!

If your language is not present yet, create a new issue requesting it :+1:

Same if anyone wants to become a proofreader for a specific language:

Difference between translators and proofreaders:

  • translators can suggest translations for empty or translated strings and vote on existing ones
  • proofreaders can translate and suggest (of course) but will lose their voting feature to get a more powerful “Proofreader” button that allows them to approve or reject translations

Don’t worry if GitHub does not reflect translations added at Crowdin: translations are merged manually before every release.

It is a good idea to opt in to email notifications about any new strings in the future:

Thanks again for your translations!

Updated 07/03/2018 18:11

Docker image is prone to the ipfs daemon becoming a zombie process


I ran the ipfs/ipfs-cluster:v0.1.0 image for some time; the ipfs process was memory hungry and eventually got killed (by the OOM killer, I suppose), and now it’s a zombie:

```
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
    1     0 ipfs     S    34932   0%   3   0% ipfs-cluster-service --loglevel debug --debug
  624     0 root     S     1592   0%   1   0% sh
  630   624 root     R     1524   0%   3   0% top
   19     1 ipfs     Z        0   0%   2   0% [ipfs]
```

The way to avoid such behavior is to use either supervisord or something like that, and add an option to run only ipfs-cluster-service via an IPFS_API variable, for example (if it’s set, sed it into service.json and do not fork ipfs daemon).
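One common way to address the zombie-reaping part is to run a minimal init as PID 1. The fragment below is a hypothetical sketch (assumed Alpine base and the tini init, not the shipped ipfs/ipfs-cluster Dockerfile):

```dockerfile
# Hypothetical sketch, not the actual ipfs/ipfs-cluster image:
# tini runs as PID 1 and reaps zombie children such as a crashed ipfs daemon.
FROM alpine:3.7
RUN apk add --no-cache tini
# ... install ipfs and ipfs-cluster-service here ...
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["ipfs-cluster-service", "--loglevel", "debug"]
```

Alternatively, `docker run --init` achieves the same reaping without changing the image.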

Updated 08/03/2018 18:10 4 Comments

Make `strict-transport-security` rule take into consideration TLD-level HSTS



… The use of TLD-level HSTS allows such namespaces to be secure by default. Registrants receive guaranteed protection for themselves and their users simply by choosing a secure TLD for their website and configuring an SSL certificate, without having to add individual domains or subdomains to the HSTS preload list. Moreover, since it typically takes months between adding a domain name to the list and browser upgrades reaching a majority of users, using an already-secured TLD provides immediate protection rather than eventual protection. Adding an entire TLD to the HSTS preload list is also more efficient, as it secures all domains under that TLD without the overhead of having to include all those domains individually.

We hope to make some of these secure TLDs available for registration soon, and would like to see TLD-wide HSTS become the security standard for new TLDs.

Updated 22/03/2018 21:07 2 Comments

Implement the `jsipfs ping` command



We want to support the same command that go-ipfs supports: jsipfs ping.

From go-ipfs:

```
» ipfs ping --help
USAGE
  ipfs ping <peer ID>... - Send echo request packets to IPFS hosts.

SYNOPSIS
  ipfs ping [--count=<count> | -n] [--] <peer ID>...

ARGUMENTS

  <peer ID>... - ID of peer to be pinged.

OPTIONS

  -n, --count int - Number of ping messages to send. Default: 10.

DESCRIPTION

  'ipfs ping' is a tool to test sending data to other nodes. It finds nodes
  via the routing system, sends pings, waits for pongs, and prints out round-
  trip latency information.
```

Updated 21/02/2018 09:42 8 Comments

Can we add one more option?


Right now Choices has an option to prepend something to the value of added options. That is great, but it does not distinguish between programmatically added options and manually added items. In our use case we need to prepend something to the value only for manually added items. So the request is to add one more option, e.g. prependUserAddedValue: null, which will prepend something to the value only for manually added items.

The use case is demonstrated here:

@jshjohnson let me know if this is something that you’ll consider merging, so I can come up with a PR

Updated 12/01/2018 22:23 5 Comments

Table of contents as `dl`?


I’m wondering: does it make sense to use dl, dt, dd in lieu of div & p? The semantics might be better, and it might provide opportunities to turn it all into a tree menu in the future.

Also recommend Title Case for the labels.

<dl class="module module--secondary table-of-contents" role="navigation">
    <dt class="toc-section-title--active">Collectors</dt>
    <dd>
        <ul class="toc-subsection-title">
            <li><a href="/docs/developer-guide/collectors/how-to-develop-a-collector.html" class="toc-subsection-title--active">How to develop a collector</a></li>
        </ul>
    </dd>
    <dt class="toc-section-title--active">Rules</dt>
    <dd>
        <ul class="toc-subsection-title">
            <li><a href="/docs/developer-guide/rules/how-to-test-rules.html" class="toc-subsection-title">How to test rules</a></li>
        </ul>
    </dd>
    <dt class="toc-section-title--active">Events</dt>
    <dd>
        <ul class="toc-subsection-title">
            <li><a href="/docs/developer-guide/events/list-of-events.html" class="toc-subsection-title">List of events emitted by a collector</a></li>
        </ul>
    </dd>
</dl>
Updated 26/01/2018 23:37

Keyword for link status in report file is different than the documentation


The EPANET user manual states that the [REPORT] section may include a Position keyword in order to get the link status added to the report file (page 162, also in the Wiki). However, this keyword is not supported in the code and the State keyword should be used.

I think we should keep the code as is and change the documentation accordingly.

Updated 19/03/2018 14:46

Harden http.Serve() with sensible Timeouts



Version information: master


Type: Enhancement


Priority: P4


Currently we do:

But shows that it is better to initialize a Server object and set sensible timeouts:


Example:

```go
srv := &http.Server{
	ReadTimeout:  5 * time.Second,
	WriteTimeout: 10 * time.Second,
	IdleTimeout:  120 * time.Second,
	Handler:      serveMux,
}
```

(IdleTimeout is only in Go 1.8)


Updated 22/02/2018 12:54 2 Comments

README: what it is, what it does


Since this module is going to be used so widely across IPFS, I expect many current & prospective contributors to be funneled to this repo at one point or another. As such, I’d be really interested in seeing the README be very clear about

  1. what is dignified.js? and
  2. what problems does it solve?

Right now it does a very good job of explaining its subcommands and setting it up, but having Background and Description sections that explain a bit more could save @dignifiedquire the work of explaining things many more times via IRC or elsewhere.

Updated 06/03/2018 11:12 5 Comments

Move the globe to its own page and repo


The globe is nice, but it impacts battery life on laptops very badly, so I suggest extracting it into its own repo and publishing it on its own. This way it’s still usable, but not everybody is forced to use it when they open the webui.

@jbenet we talked about this on the apps hangout so please let us know if you are okay with this.

Updated 17/03/2018 16:11 7 Comments

sharness test command coverage


Making this issue to track all CLI commands that need to be tested, and under which conditions.

Module      Online Test               Offline Test
object      t0051                     t0051
ls          t0045                     t0045
cat         t0040                     t0040
dht         t0170                     N/A
bitswap     t0220                     N/A
block       t0050
daemon      t0030                     N/A
init        N/A                       t0020
add         t0040                     t0040
config      t0021                     t0021
version     t0010
ping        N/A
diag        t0065 (missing: t0151)    t0151
mount       t0030                     N/A
name        t0110                     t0100
pin         t0080, t0081, t0085
get         t0090
refs        t0080
repo gc     t0080
id          t0010
bootstrap   t0120                     t0120
swarm       t0140                     N/A
commands    t0010

Critically missing:

  • [ ] ipfs ping

Most of the missing tests are just cases where we test online or offline but not both.

Updated 19/03/2018 23:55 8 Comments
