Contribute to Open Source. Search issue labels to find the right project for you!

Create 'cli-build-widget'


We should consider creating a cli-build-widget repo/package. This package could do things such as:

  • create custom element files for a widget (if not found), and possibly update them if need be
  • re-run the css typings generation for the widget
Updated 27/03/2017 12:33

Consider renaming this repo and the published package to 'cli-build-app'


It has been the source of some confusion that the repo is called cli-build but the package is called cli-build-webpack.

I think we should isolate ourselves from a specific implementation (in this case webpack) and rename the repo to cli-build-app and use this name for the published package.

As an aside, we may, in the future, want to provide a way to build widgets rather than a whole app; if we created a cli-build-widget, the proposed rename would be in line with this.

Updated 27/03/2017 12:29

Let "ipfs repo stat" include StorageMax as set in the configuration (in bytes)



Version information: master


Type: Feature


Priority: P5


I’m not sure this is a good idea but maybe I’m wrong or there’s a better approach.

The underlying problem is to know how many more bytes an ipfs node can store.

ipfs repo stat provides RepoSize and ipfs config show provides StorageMax, so it's just a matter of doing the math. If StorageMax were included in repo stat, it would:

  • Save 1 api call
  • Save parsing of the configuration
  • Save converting human-readable configuration values to bytes (extra dependency)

But as I say, I’m not sure the repo stat information is the place for a GC-related configuration option.
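The arithmetic being saved is small; here is a sketch of the client-side version of it. The unit parsing is an assumption on my part (SI decimal for KB/MB/GB, binary for KiB/MiB/GiB, humanize-style), not necessarily what go-ipfs does:

```python
# Sketch of the client-side math the issue wants to avoid.
# Unit semantics are an assumption: SI decimal for KB/MB/GB,
# binary for KiB/MiB/GiB (humanize-style parsing).
UNITS = {"B": 1, "KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12,
         "KIB": 2**10, "MIB": 2**20, "GIB": 2**30, "TIB": 2**40}

def to_bytes(human: str) -> int:
    """Convert a human-readable size like '10GB' to a byte count."""
    s = human.strip().upper()
    for suffix in sorted(UNITS, key=len, reverse=True):
        if s.endswith(suffix):
            return int(float(s[:-len(suffix)]) * UNITS[suffix])
    return int(s)  # already a plain byte count

def bytes_free(storage_max: str, repo_size: int) -> int:
    """Remaining capacity: StorageMax (from config) minus RepoSize (from repo stat)."""
    return to_bytes(storage_max) - repo_size
```

If repo stat reported StorageMax directly, this whole helper, the extra API call, and the extra parsing dependency would all disappear.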


Updated 27/03/2017 12:24

Tensor Wrapper


I am thinking about introducing some kind of tensor wrapper for the core classes that would be aware of shape names. For example, in the case of the SentenceEncoder input, where both dimensions are None, we could state that the first dimension is the batch size and the second is the sentence length, etc. It could be used for more detailed shape assertions and, hopefully, better configuration debugging and debugging of newly added components.

What do you think about such a thing? Would it be useful? If so, would it be worth doing, or is it too laborious and pedantic? Wouldn't it add a redundant conceptual barrier for potential contributors?
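For concreteness, a minimal sketch of what such a wrapper could look like; all class and method names here are hypothetical, not existing core classes:

```python
# Hypothetical sketch of a shape-aware tensor wrapper; names and API
# are illustrative only.
class NamedTensor:
    def __init__(self, tensor_shape, dim_names):
        # dim_names gives a semantic label per dimension; None in
        # tensor_shape means "unknown until runtime" (e.g. batch size).
        assert len(tensor_shape) == len(dim_names)
        self.shape = tuple(tensor_shape)
        self.dim_names = tuple(dim_names)

    def assert_shape(self, **named_dims):
        """Check known dimensions by name; None matches anything."""
        for name, expected in named_dims.items():
            actual = self.shape[self.dim_names.index(name)]
            if expected is not None and actual is not None:
                assert actual == expected, (
                    f"{name}: expected {expected}, got {actual}")

# SentenceEncoder-style input: both dimensions unknown, but named.
inputs = NamedTensor((None, None), ("batch", "sentence_length"))
inputs.assert_shape(batch=None)  # passes: unknown dims match anything
```

An assertion failure would then name the offending dimension, which is exactly the kind of message that helps when debugging a newly added component.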

Updated 27/03/2017 09:02

Code template closure


Closures should be compiled in subcode mode (as with the branching operator):


finally (n int)
    [ console writeLine:"Finally works!"  ].  // closure compiled as lazy expression.


finally (n int)
    [ console writeLine:"Finally works!".  ]. // closure compiled correctly
Updated 27/03/2017 07:53

Deprecate WebContext.setSessionStore


Since it makes no sense for the session store associated with a WebContext instance to change during the lifetime of the WebContext, there should not be a setter for it. All it does is create API noise, and the only sensible implementation is to throw an UnsupportedOperationException.

It’s reasonable (and actually should be enforced) that the SessionStore should be set during the construction of the WebContext instance and not be changed during the lifetime of the instance. So we should deprecate this method in 2.0 and remove it in 3.0.

I’m happy to hear any use-cases where we think we should change the SessionStore during the lifetime of the WebContext and be persuaded that I’m wrong.
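The shape being argued for, sketched in Python for illustration (pac4j itself is Java, and these names are stand-ins, not the real classes):

```python
# Illustrative sketch: the session store is fixed at construction and
# exposed read-only, so no setter (and no UnsupportedOperationException
# stub) is needed.
class WebContext:
    def __init__(self, session_store):
        # Set once, for the lifetime of the instance.
        self._session_store = session_store

    @property
    def session_store(self):
        return self._session_store
    # deliberately no setter: attempting to assign raises
```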

Updated 27/03/2017 09:32 2 Comments

Outgoing middleware design broken


Currently the outgoing middleware has all the parts it needs internally, but Process.SendUserMessage has the signature (PID pid, object message, PID sender), which gives us no way to send outgoing headers. And if we try to send a MessageEnvelope, things get double-wrapped when there is a sender, since the LocalProcess checks for a sender and wraps the current message in another MessageEnvelope.


We should change the signature of SendUserMessage to just (PID pid, object message) and do the envelope wrapping in Request instead.
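A sketch of the wrap-once behavior, in Python for illustration (Proto.Actor is C#; the names below are assumptions, not the real API):

```python
# Illustrative sketch: wrap exactly once, merging into an existing
# envelope instead of re-wrapping, which avoids the double-wrapping
# described above.
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class MessageEnvelope:
    message: Any
    sender: Optional[str] = None
    headers: dict = field(default_factory=dict)

def to_envelope(message, sender=None, headers=None):
    """What Request could do before calling SendUserMessage(pid, message)."""
    if isinstance(message, MessageEnvelope):
        # Merge: update the sender, keep message and headers intact.
        message.sender = sender if sender is not None else message.sender
        return message
    return MessageEnvelope(message, sender, headers or {})

env = to_envelope("hello", sender="pid-1", headers={"trace": "t1"})
env2 = to_envelope(env, sender="pid-2")
assert env2.message == "hello"  # not double-wrapped
```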

Updated 27/03/2017 07:16

Overwrite behavior


Currently I simply overwrite the buffer and nothing else. It is done this way to avoid memory leaks and other kinds of issues that might appear if I changed the object's buffer size, but it leads to the '\0x0\0x0…' string. With this in mind, here are some ideas about the current situation:

  • Make an overwrite-and-get-new-string, so we get a correct "empty" string: "". This might add an extra decrement of the object's reference count.
  • Keep the '\0x0…' format as a kind-of-debug option, to double-check the data.
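The two options can be sketched with a bytearray standing in for the object buffer (Python, purely illustrative):

```python
# Current behavior: overwrite in place, keeping the old size, which
# yields the '\0x0\0x0...' string the issue describes.
buf = bytearray(b"secret")
for i in range(len(buf)):
    buf[i] = 0

# Proposed overwrite-and-get-new-string: still scrub the old buffer,
# but hand back a correct "empty" string instead.
def overwrite_and_get_new(b: bytearray) -> bytes:
    for i in range(len(b)):
        b[i] = 0
    return b""  # the caller sees "", not a run of NUL bytes
```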

Updated 27/03/2017 05:32

Generalize ideas and target features


This is an umbrella issue to list features this project should aim to implement at first.

Authenticate with GitHub

  • [ ] Authenticate with GitHub, save tokens to ~/.git-web.json


Issues

  • [ ] List all open issues on a repo
    • [ ] Or those assigned to your user
    • [ ] Or those assigned to any specific user
  • [ ] List all open issues across all repos on your user
  • [ ] List all open issues across a specified user/organization

Pull requests

  • [ ] List all pull requests on a repo
    • [ ] Or those assigned to your user
    • [ ] Or those assigned to any specific user
    • [ ] Or those with reviews assigned to your user
  • [ ] List all open pull requests across all repos on your user
  • [ ] List all open pull requests across a specified user/organization

Working copy

  • [ ] Implement tricks from Oh, shit! Git!
    • [ ] In particular, “I accidentally committed something to master that should have been on a brand new branch!”
    • [ ] And “I accidentally committed to the wrong branch!”
Updated 27/03/2017 03:53

Contributors listing.


I'm thinking about creating a way to list contributors (translators and reviewers).

  • :1st_place_medal: create a single file at the root of the translation repo listing contributors for each language
  • :2nd_place_medal: create a file for each translation, following the idea above
  • :3rd_place_medal: add it at the end of each section

React to this comment with one of the emojis above to "vote"… and if you have any suggestions to add, share them with us :smile:

Updated 27/03/2017 02:51

dva 2.0


With some free time on the project, I've combined the feedback received so far with my own ideas and listed the considerations for dva@2.0 below. Discussion welcome.

  • Stay as compatible with dva 1.0 as possible; keep breaking changes simple and rule-based, and provide a CodeMod for one-click upgrades
  • A standalone data-flow solution, not hard-bound to React (or any other view library) or to a router library, but easy to combine with existing bindings, #530
  • Built-in, sensible performance optimizations, e.g. Reselect-style memoization, with no extra user code required
  • Couple reducers and selectors more tightly and make them easier to write together: bound together, easier to write, better to compose
  • Friendlier error messages, #436 #416
  • More intuitive handling of code splitting
  • More elegant handling of effect callbacks in the view, #175
  • A more elegant HMR approach, #469
  • Consider model extension and reuse, e.g. dva-model-extend
  • Code style: rewrite the plugin implementation, split up modules, etc.
Updated 27/03/2017 08:26 9 Comments

Suggestion to keep the behavior of ; and , keys in normal mode.



I just tried space-vim and love it so far. One behavior that's quite unintuitive is that the ; and , keys are remapped in normal mode.

Personally (and I would guess most other Vimmers too), I frequently use f/F/t/T for navigation. The ; and , keys are essential for those movements, and remapping both of them makes space-vim not very Vim-like.

I have unmapped those mappings for myself, but I thought this configuration might be unintuitive for other Vimmers as well.

Have you considered keeping the behavior of ; and , in normal mode?


Updated 27/03/2017 08:44 3 Comments

Roslyn Coding Guidelines


The current coding guidelines are C#-centric, with no or very little guidance for contributions to projects within the .NET Foundation that use VB.NET. This lack of clear guidelines has caused, and will continue to cause, friction between differing opinions on the stylistic aspects of code and contributions, rather than on the functional/semantic behavior of the code.

I'm working on implementing a VB.NET coding guideline. (Here)

Updated 26/03/2017 22:36 1 Comments

Evaluate Graph Database for Persistence


Find out if a graph database like neo4j is better suited to store coderadar’s data.

In the current relational database, each commit is associated via a join table with each file that was part of that commit, which results in a very large number of join-table entries (~70 million entries for a month of commits in a project with ~1 million LOC and ~30 committers). The large number of entries makes querying the metric values very slow, even with indices.
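A back-of-envelope check of that blowup; the per-file and per-month figures below are my assumptions, chosen only to be consistent with the ~70 million figure above:

```python
# Rough arithmetic behind the join-table blowup: one join-table entry
# per (commit, file) pair. Figures are illustrative assumptions.
files_in_repo = 20_000      # ~1M LOC at ~50 LOC per file
commits_per_month = 3_500   # ~30 committers over a month
join_entries = files_in_repo * commits_per_month
print(join_entries)  # ~70 million, matching the order of magnitude reported
```

A graph database avoids materializing every such pair as a row, which is why it is worth evaluating here.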

Updated 26/03/2017 21:01 1 Comments

Grand Vision


Headmaster is a bot whose purpose is to help manage Elixir School. As part of this role, Headmaster is expected to perform a number of activities when actions are taken on the repository:

  • When a Pull Request is opened, auto-assign reviewers based on the language.
    • This requires Headmaster maintain a list of languages and their translators.
  • When a Pull Request/Issue is opened, attempt to assign the appropriate labels.
    • If it's a non-English lesson, we can assign "translation" to it. If it touches many lessons, it's likely a "fix", etc.
  • Merge Pull Requests when approved by a contributor.
  • Headmaster should tweet all changes to the repository and give the contributor credit.
    • This will require Headmaster maintain a list of contributors and their Twitter handles.
    • Headmaster should request a contributor’s Twitter handle if it’s unknown.
    • Message templates could be translated to tweet in the contributor’s native language

Once the entirety of the scope has been fleshed out, we’ll create individual tasks to be completed.

Updated 26/03/2017 17:54 1 Comments

How can I integrate another OpenGL rendering library into Godot?


Operating system or device - Godot version: Godot 3.0 with GLES3

Issue description:

I'm trying to integrate NanoVG as a Control to draw vector graphics, like this:

#include "nano_canvas.h"
#include <thirdparty/glad/glad/glad.h>
#include "nanovg/nanovg_gl.h"

NanoCanvas::NanoCanvas() : Control() {
    m_pNVGContext = nvgCreateGLES3(NVG_ANTIALIAS);
}

void NanoCanvas::_notification(int p_notification) {
    switch (p_notification) {
        case NOTIFICATION_DRAW: {
            nvgRect(m_pNVGContext, 100, 100, 120, 30);
            nvgFillColor(m_pNVGContext, nvgRGBA(255, 192, 0, 255));
        } break;
    }
}
But I get render errors like the following, repeated many times (all ID 20, Severity: High, raised at drivers/gles3/rasterizer_gles3.cpp:128):

```
ERROR: _gl_debug_print: GL ERROR: GL_INVALID_OPERATION in glVertexAttribPointer(no array object bound)
Error 00000502 after convex fill
ERROR: _gl_debug_print: GL ERROR: GL_INVALID_OPERATION in glDrawArrays(no VAO bound)
ERROR: _gl_debug_print: GL ERROR: GL_INVALID_OPERATION in glUniform(program not linked)
ERROR: _gl_debug_print: GL ERROR: GL_INVALID_OPERATION in glUniformMatrix(program not linked)
```


I would like to know whether it is possible to integrate other rendering libraries, and how to do so.

Updated 26/03/2017 17:04

Working out the architecture


We need to put our heads together and come up with an architecture for handling a client request in the client application (parsing the data packet, authorization), passing the results to the system core, and storing keys. The question requires serious work.

Updated 26/03/2017 19:04 2 Comments

Checking for WebSockets constructor name fails under uglification


Currently, to add the WebSockets transport, index.js checks for the constructor name being 'WebSockets'. This fails under uglification unless 'WebSockets' is made a reserved word in the uglify config, since constructor names are one of the things uglify mangles. I recommend that either this is documented or a different solution is devised; there is already a TODO in the code suggesting that one be found. For example, we could standardise some kind of boolean flag on a transport class that tells libp2p to include it even if there are no listening addresses. That wouldn't be WebSocket-specific, and wouldn't require checking the name against a string.

Updated 26/03/2017 18:14 1 Comments

Introduce syntax for generic type argument constraints that accepts any implementation of a particular generic interface, regardless of the interface's type parameters, and makes those parameters usable in the resulting generic definition.


The task

Introduce syntax for generic type parameter constraints that accepts any implementation of a particular generic interface, regardless of the specific types used in the implementation (without requiring them to be specified), while making those types available for reference in the generic type definition. In other words, implicitly import the type parameters of generic interfaces used as constraints into the scope of the generic type definition.


What I mean is to make it possible to define, for example, a generic type ClassB<TA> that lets the TA type parameter be assigned any type implementing a generic interface IInterfaceA<TB> in whatever way the interface definition allows, leaving the actual type used as TA to be specified only at the instantiation site, without including TB as a parameter in the signature of ClassB itself. This avoids limitations that are unnecessary for keeping strong typing unambiguous and reduces definition redundancy, while still letting the class logic be fully aware of the types actually in use and able to access (unless otherwise restricted) the members defined with those types.

For example, something like the following. The example is kept as simple as possible to illustrate the concept in self-explanatory C#-like pseudo-code; it is not meant to demonstrate the practical benefit of the feature, nor to be compared with current C# compiler implementations, but it is syntactically and semantically correct:

public interface IInterfaceA<TB> {

    TB TheProperty {get; set;}
}

public class ClassB<TA> where TA : IInterfaceA<TB> {

    TA TheProperty {get; set;}

    TB ThePropertyProperty => TheProperty.TheProperty;
}

Given this example, it is worth emphasizing that the particular type assigned to TA at instantiation time may be a non-generic class: a completely defined implementation of IInterfaceA<TB> that accepts no type arguments at all. I believe this remark explains why a less verbose alternative to the ClassB<TA, TB> where TA : IInterfaceA<TB> syntax mentioned below can be useful at all.

Potential problems

As @vyrp has suggested, an ambiguity can arise if one class implements the same generic interface (the one we require) twice, with different type parameters.

Existing alternative

For now, I am using an alternative definition similar to the following as a workaround for the same task I would like to use the above-described feature for:

public class ClassB<TA, TB> where TA : IInterfaceA<TB> {

    TA TheProperty {get; set;}

    TB ThePropertyProperty => TheProperty.TheProperty;
}


This gets the job done, in a way, but I would like to humbly offer for the team's consideration what could be a more elegant syntax for the same semantics.

Updated 26/03/2017 22:34 1 Comments

chore: state management


Currently we assume shared mutable state by passing objects into components that share them (e.g. all person-related components).

It is difficult to make sure that all components are updated when a state has changed (edit person name, add new interaction with due date, etc).

A better approach is to use services that maintain local state using RxJS Observables (e.g. interaction.service.ts).

Another option is to use the Redux pattern, which would also make hybrid apps easier to write.
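The service-with-local-state pattern can be sketched as a minimal store (Python for illustration; the real implementation would be an Angular service holding an RxJS BehaviorSubject, and all names here are hypothetical):

```python
# Minimal store sketch: one owner of the state, subscribers are
# replayed the current value on subscribe (BehaviorSubject-style).
class Store:
    def __init__(self, value):
        self._value = value
        self._listeners = []

    def subscribe(self, fn):
        self._listeners.append(fn)
        fn(self._value)  # replay current state to the new subscriber
        return lambda: self._listeners.remove(fn)  # unsubscribe handle

    def next(self, value):
        self._value = value
        for fn in list(self._listeners):
            fn(value)

# Every "component" subscribing to a shared person store sees edits
# immediately, instead of holding its own mutable copy.
person_name = Store("Ada")
seen = []
person_name.subscribe(seen.append)  # seen == ["Ada"]
person_name.next("Grace")           # seen == ["Ada", "Grace"]
```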

Updated 26/03/2017 05:30 1 Comments

Completion improvement

  1. Note for contributors: add the item kind to the :repl-completions command. We also need to eliminate some speed overhead.

Currently it will only suggest bare strings for auto-completion, without any extra information such as the item kind mentioned above.

  2. Support auto-completion for all words in the open editor, like JavaScript here.
Updated 27/03/2017 10:25 1 Comments

[universal-redux] Client side data fetching


The static http method on a component to fetch data for server renders is pretty nifty to use. However, such methods are not invoked once the initial render has finished and routes change on the client side. An out-of-the-box implementation of such a feature would be handy.

Something along these lines, perhaps?

Do let me know if this makes sense to you and I’d be happy to submit a PR.

Updated 26/03/2017 20:40 1 Comments





I am trying to use next.js together with ant-design-mobile.


The components listed below only render correctly after applying hacks. Originally there were only the first 6; with this commit, the number has grown to 10.




Environment (required)

  • antd-mobile version: 1.0.7
  • Browser (or react-native) and its version:
  • Operating environment (e.g. OS name) and its version:

What did you do? Please provide steps to reproduce your problem.


  • [ ] Menu
  • [ ] ImagePicker
  • [ ] Switch
  • [ ] Grid
  • [ ] Modal
  • [ ] RefreshControl
  • [ ] Checkbox
  • [ ] Picker
  • [ ] Radio
  • [ ] List

What did you expect?



What happened?


Global variables not found: window, document, navigator

Reproducible online demo


Updated 26/03/2017 15:20 11 Comments

[Feature request] Injectable Contract result


I see a lot of TRB used as follows:

step :check_funds!

def check_funds!(options, model:, **)
  options["contract.default"].errors(:base).add ..
end

Maybe an injectable, contract-independent result object would be handy, one that gets passed through the entire operation and is then injected into the contract when needed.
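A sketch of such a result object, in Python for illustration (Trailblazer is Ruby, and all names here are hypothetical, not its API):

```python
# Illustrative contract-independent result object: steps record errors
# on it without knowing about any contract; it can be injected into the
# contract later, when (and if) one is needed.
from collections import defaultdict

class ContractResult:
    def __init__(self):
        self.errors = defaultdict(list)

    def add(self, key, message):
        self.errors[key].append(message)
        return self  # allow chaining

    @property
    def success(self):
        return not self.errors

# e.g. what a check_funds!-style step would do:
result = ContractResult()
result.add("base", "insufficient funds")
```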

Updated 27/03/2017 01:51 2 Comments

Prelude and jQuery


I don't know if this is so much an issue as a question/concern.

Prelude replaces the core WP jQuery. A lot of people see that as a huge no-no, e.g.:

I see that the wrapper for using the Google-hosted library version checks for admin, but that causes me to pause and ask, "why, then?" If JS must be written for two versions of jQuery, why not let the default library load?

I’d love to hear your thoughts on this.

Updated 27/03/2017 00:14 3 Comments

Report Creation


We never decided where in the app reports would be created. This is the place to have that discussion.

Current ideas are as follows:

  • Create reports from the activity page. The advantage here is that reports are related to the functionality in the activity page.
  • As suggested by @StephanieKeck, we could also create reports on their own separate page. The reasoning is that one of the main things users would be doing on this app is creating reports, so we would want them to be able to access this functionality easily.
Updated 25/03/2017 20:26 2 Comments

Some thoughts towards rendering support


This relates to many of the issues I see for RPCS3 on GitHub.

Although I think it's great to support multiple back ends, I believe that to see real progress in performance there should be a focus on a specific renderer, similar to the decision made by the developers of Star Citizen. It seems more feasible to support whatever gives the best performance in the long run, much as integrated graphics are not recommended versus Nvidia/AMD cards. Vulkan supports both Windows 10 and Linux, while DX12 is Windows-only; time- and effort-wise, I think it makes sense to put more effort into finding bugs on Vulkan.

Updated 27/03/2017 01:05 8 Comments

Change the name before v2.0.0 is released?


In the past, the Firefox add-on was named IPFS Gateway Redirect.

That name is now an understatement, because in addition to redirect functionality, v2.0.0alpha2 features IPFS node status monitoring, experimental dnslink support, web+ipfs:// protocol support, and quick upload options for local and web content.

I feel it may be a good time to pick a visually shorter and broader name.

Please vote using emoji reaction below or suggest something more elegant in comments below (one per comment, to make it easy to vote with 👍 reactions) 🙃

👍 → change name to IPFS Companion 👎 → keep the old name IPFS Gateway Redirect

Updated 26/03/2017 19:32 1 Comments

HTTP API enhancements for multi-tenant deployment



  • OS: n/a
  • Erlang/OTP: n/a
  • EMQ: n/a


We deploy EMQ in a multi-tenant environment and are now in need of APIs that can be filtered by username. I understand that we have a filter based on client ID; however, our tenants are separated by username.

Changes to the following endpoints make sense for a multi-tenant deployment:

  • /api/stats
  • /api/clients
  • /api/session
  • /api/topics
  • /api/routes
  • /api/subscriptions
Updated 27/03/2017 01:08

Clustering Express Instances (not sure)


Type of issue: (feature suggestion, bug?)

Feature suggestion


I don't really know if this fits here, as I am currently trying to learn JavaScript myself. I've read that you can make use of multiple CPU cores by clustering your Express instances. I don't know whether that's still a good thing to do or whether it's outdated information, but if it's still a valid technique, why not include it?

Updated 26/03/2017 09:37 1 Comments

(Proposal) Extension properties and indexers


Normally, IsXyz should be a boolean property. As "extension properties" are not allowed, this is implemented as extension methods, which breaks the appearance a bit.

I propose to introduce extension properties:

```C#
// code originally from Roslyn/TypeSymbolExtensions.cs
// currently implemented as methods
public static bool IsIntrinsicType[this TypeSymbol type]
{
    get => type.SpecialType.IsIntrinsicType;
}

public static bool IsPartial[this TypeSymbol type]
{
    get
    {
        var nt = type as SourceNamedTypeSymbol;
        return (object)nt != null && nt.IsPartial;
    }
}

public static bool IsPointerType[this TypeSymbol type]
{
    get => type is PointerTypeSymbol;
}
```

The method-characterising parentheses are just replaced by brackets. I think this will not break the overall appearance of properties and indexers, because for indexers the index is also passed as a parameter within brackets, and it will also allow for extension indexers:

```C#
public int this[this ExtendedType type, int index] { get => {NOP}; set => {NOP}; }
```

Updated 26/03/2017 22:34 1 Comments

TODOs for ARC2017



State | TODO | Detail | Creator | Assignees
--- | --- | --- | --- | ---
done | Object segmentation of known objects (40 items + background) | #2028 | @wkentaro | @wkentaro
ongoing | Build the new shelf | #2017 | @knorth55 | @knorth55
ongoing | Build the new gripper (Gripper-V5) | #2036 | @pazeshun | @pazeshun
na | Grasp planning with Gripper-V5 (pick) | #2038 | @pazeshun | @pazeshun
na | Grasp planning with Gripper-V5 (stow) | #2043 | @knorth55 | @knorth55

Not Critical

TODO | Detail | Creator | Assignees
--- | --- | --- | ---
Build shelves that aid recognition and motion, such as transparent shelves | | @k-okada |
Exploratory motion with a gripper equipped with proximity sensors | proximity sensor | @pazeshun |
Segmentation of known objects in cluttered environments | #2028 | @wkentaro |
Develop a method to make new objects recognizable quickly (~45 minutes) | #2037 | @wkentaro |

2017-03-25T18:30 Updated by @wkentaro
2017-03-25T20:28 Updated by @pazeshun
2017-03-26T11:59 Updated by @knorth55
2017-03-26T13:36 Updated by @knorth55
2017-03-27T03:11 Updated by @wkentaro

Updated 26/03/2017 18:11 5 Comments



Milestone v4.0.0

  • [ ] Drop gulp dependency
  • [x] Drop CoffeeScript support
  • [ ] ES 2015/2017, either use Node v4 base or babel build
  • [ ] Cross platform CI (Linux and Windows)
  • [ ] Drop support for Node v0.12 or below
  • [ ] Better documentation
    • [ ] Contribution Docs
    • [ ] Section in
    • [ ] Explain available QueryGenerator functions like bulkInsert etc and how to properly use them
    • [ ] FAQs
  • [ ] Proper error codes
  • [ ] Association support
  • [ ] Optional
Updated 25/03/2017 09:14

Context-bound nameof() operator


@comdiv commented on Wed Nov 25 2015

For now:

// it's legal code
public const string S = nameof(S);

public void Method() {
    // this is not
    var _s = nameof(_s);
}

In the second case it fails with "Cannot use local variable '_s' before it is declared".

I understand that the problem is scope and renaming - the right side only knows the outer scope.

My suggestion: add context-bound nameof() operators - for example nameof(var), nameof(void) and nameof(class). I checked: for now, all of these forms are compile errors, so they can be used.

namespace A {
    class X {
        protected string fieldname = nameof(var);  // should be "fieldname"
        protected string declname = nameof(class); // should be "A.X"
        protected string tname = nameof(this);     // "A.X" here
        protected string bname = nameof(base);     // "System.Object" here
        virtual void Print() {
            var _x = nameof(var); // should be "_x"
            var y = nameof(void); // should be "Print"
            Console.WriteLine($"{fieldname} {declname} {tname} {bname} {_x} {y}");
        }
    }
    class Z : X {}
}

new X().Print();
// cout: fieldname A.X A.X System.Object _x Print
new Z().Print();
// cout: fieldname A.X A.Z A.X _x Print

from my point of view:

keywords and their implementation:

nameof(class) - always the name of the DECLARING class
    substituted in place with a string literal

nameof(this) - name of the current object instance's class
    in a static context - same as nameof(class); otherwise -
    add `private static @____this_name = nameof(class)`
    to every class where 1) nameof(this) appears in a compiled body, or 2) the base class has such a
    field (meaning nameof(this) may appear in an ancestor's methods),
    and then replace `nameof(this)` with `@____this_name`

nameof(base) - name of the base class of the current one -
    same as `nameof(this)` but referencing the parent class

nameof(void) - name of the closest Method in scope -
    simply look up the closest Method declaration. So it will work in a lambda too:
        public void X() {
            var t = new[] {1, 2, 3}.Select(i => nameof(void) + i); // still X1, X2, X3
        }
    An alternative to nameof(void) could be nameof() - with empty braces.
    But in a property it should reference the Property name:
        public string X {
            get { return nameof(void); } // "X"
        }
    so nameof() looks like better syntax than nameof(void)

nameof(var) - name of the left side in initializers (variables, fields, properties)
    look up the closest variable initializer and take its name part;
    if it's not a variable initializer, look up the enclosing Property or Field initializer and take the name from it

Profit: it's better for refactoring without tools, since we don't need to replace names in several places; it allows referencing the left-side variable; and it's good for logs and exception formatting - the code becomes more readable and avoids the usual copy-paste errors that are common in such code:

void Op1() {
    log.Debug($"start {nameof(Op1)}");
    log.Debug($"finish {nameof(Op1)}");
}
void Op2() {
    log.Debug($"start {nameof(Op2)}");
    log.Debug($"finish {nameof(Op1)}"); // uuupss
}

But if nameof(void) is added:

void Op1() {
    log.Debug($"start {nameof(void)}");
    log.Debug($"finish {nameof(void)}");
}
void Op2() {
    log.Debug($"start {nameof(void)}");
    log.Debug($"finish {nameof(void)}");
}

It's good for code generation - we don't need to keep track of all the names when generating logging/serialization logic. It's also good for custom serialize/deserialize code, where nameof(var) can be used.

It’s not breaking change.

@alrz commented on Wed Nov 25 2015

C# Design Meeting Notes for Mar 4, 2015

This works the same as with any other construct, i.e.: not. This is not a special case for nameof, and it doesn’t seem worth special casing to allow it.


@comdiv commented on Wed Nov 25 2015

nameof is an operator meant to kill duplication, ambiguity and magic strings - I think it should be extended.

@m0sa commented on Thu Feb 04 2016

+1 would love to see this

@jrmoreno1 commented on Mon Feb 08 2016

I found this by looking to see if someone had already reported a need for nameof(void) - I have several projects where that would make adding properties much easier: the property references a resource that is supposed to have the same name (a config file or resources), so I could use the same code for all of them, just changing the property name, and it would work. While nameof(property) is better than "property", it is still a bit fragile because of copy/paste/edit (or rather, not editing).

@comdiv commented on Wed Feb 10 2016

+1 resource idea is very good, I think - it can be even with localization support.

@jrmoreno1 commented on Wed Jul 20 2016

An additional usage that might be beneficial (encountered just today) is referencing a parent class or namespace – but I can’t think of a good syntax for it.

Update: on thinking about it, a good syntax might be:

Console.WriteLine(NameOf(void, Namespace));
Console.WriteLine(NameOf(void, Class));
Console.WriteLine(NameOf(void, Method));
Console.WriteLine(NameOf(void, Property));
Console.WriteLine(NameOf(void, Parent)); // For nested class
Console.WriteLine(NameOf(void, Base)); // For class that this class inherits from

Possibly even Console.WriteLine(NameOf(void, Type));

Updated 26/03/2017 04:22 1 Comments

Proposal: Implicit Interfaces


@gregsdennis commented on Tue Apr 21 2015

The Problem

Define an interface in such a way that its entire purpose is to represent a set of other interfaces.

Suppose you have two interfaces:

interface IA { void SomeMethod(); }

interface IB { void SomeOtherMethod(); }

Now suppose you want to create a property on an object into which you can place an object of either type. Since they have no common ancestry, you would have three options (that I’ve seen; you may devise your own):

- Declare the property as object and then test/cast in order to access the functionality:

class MyImpl : IA, IB { public void SomeMethod() { ... } public void SomeOtherMethod() { ... } }

class MyClass { public object MyProp { get; set; } }

class MyApp {
    static void Main(string[] args) {
        var myClass = new MyClass { MyProp = new MyImpl() };
        ((IA)myClass.MyProp).SomeMethod();
        ((IB)myClass.MyProp).SomeOtherMethod();
    }
}

- Create a third interface defined as the combination of the two:

interface ICombined : IA, IB {}

class MyImpl : ICombined { public void SomeMethod() { ... } public void SomeOtherMethod() { ... } }

class MyClass { public ICombined MyProp { get; set; } }

class MyApp {
    static void Main(string[] args) {
        var myClass = new MyClass { MyProp = new MyImpl() };
        myClass.MyProp.SomeMethod();
        myClass.MyProp.SomeOtherMethod();
    }
}

- Create a proxy type which exposes a single field via independent properties:

class MyProxy<T, TA, TB> where T : TA, TB {
    private object _value;

    public TA AsA { get { return (TA)_value; } }
    public TB AsB { get { return (TB)_value; } }

    public MyProxy(T value) { _value = value; }
}

class MyClass { public MyProxy<MyImpl, IA, IB> MyProp { get; set; } }

class MyImpl : IA, IB { public void SomeMethod() { ... } public void SomeOtherMethod() { ... } }

class MyApp {
    static void Main(string[] args) {
        var myClass = new MyClass { MyProp = new MyProxy<MyImpl, IA, IB>(new MyImpl()) };
        myClass.MyProp.AsA.SomeMethod();
        myClass.MyProp.AsB.SomeOtherMethod();
    }
}

The second option is generally the preferred one; however, it’s not always doable. What if, instead of IA and IB, we use IComparable and IConvertible?

interface ICombined : IComparable, IConvertible {}

class MyImpl : ICombined {
    // IComparable implementation
    public int CompareTo(object obj) { ... }

    // IConvertible implementation
    public int ToInt32(IFormatProvider provider) { ... }
}

class MyClass { public ICombined MyProp { get; set; } }

class MyApp {
    static void Main(string[] args) {
        var myClass = new MyClass { MyProp = new MyImpl() };
        var comparison = myClass.MyProp.CompareTo(new object());
        var newInt = myClass.MyProp.ToInt32(null);
    }
}

This only works for classes which specifically implement the ICombined interface. You would not be able to assign types like int, double, and string, each of which implements both IComparable and IConvertible.

The Solution

We introduce a new usage of the implicit keyword for interfaces:

implicit interface ICombined : IComparable, IConvertible {}

This tells both the compiler and the runtime that any class which implements both IComparable and IConvertible can be interpreted as implementing ICombined.

The remainder of the code could stay the same, but now, in addition to explicit implementations of ICombined, we could also assign any type which implements both IComparable and IConvertible, including int, double, and string:

class MyApp {
    static void Main(string[] args) {
        var myClass = new MyClass { MyProp = 6 };
        var comparison = myClass.MyProp.CompareTo(new object());
        var newInt = myClass.MyProp.ToInt32(null);

        myClass.MyProp = "example";
        var newComparison = myClass.MyProp.CompareTo(new object());
    }
}

Additionally, you could use this new interface to define a collection of such objects:

var list = new List<ICombined> { 6, "string" };

Defining an interface this way makes it retroactive. That is, types which implement all the base interfaces of the implicit interface are also said to implement the implicit one.

The Rules

1. An implicit interface may combine any number of interfaces.
2. An implicit interface may not define any additional functionality. That is, it must be empty.

That’s really it.

Finally, the runtime will have to do some type checking, which it should already do for the is and as keywords. It wouldn’t need to know all implicit interfaces that a type implements; it would just need to check as requested:

var implemented = 6 is ICombined;

This basically asks, “Does the type of 6, which is int, implement ICombined?” To determine that, it sees that ICombined is an implicit interface, so it asks, “Does it implement all of the interfaces implemented by ICombined?” So it’s equivalent to writing:

var implemented = 6 is IConvertible && 6 is IComparable;

Simple field and property assignments would be compiler-verifiable.



@HaloFour commented on Tue Apr 21 2015

The compiler might be able to sneakily handle that within the code that it compiled itself but it couldn’t affect how other assemblies might treat that type. To make the CLR recognize int as an ICombined would require changes to the runtime.

You might want to submit this as a feature request to the CoreCLR team.

@dsaf commented on Wed Apr 22 2015

Is it related to intersection types?

@gregsdennis commented on Wed Apr 22 2015

@dsaf, not really, but I can see the proximity to that idea.

The concept behind this is closer to type verification by implementation. I remember reading about a feature that some languages have where the class is defined not by some concrete Type concept, but rather by the functionality that the class provides. By this, a class can be considered to implement an interface simply by implementing the contract it states (whereas C# requires you to explicitly state you’re implementing the interface, even if the contract happens to match some other interface).

This could be considered a partial implementation of that feature, while still keeping to the ideals which already exist in C#.

@gregsdennis commented on Wed Apr 22 2015

@HaloFour, I was going to say that the CLR wouldn’t have to change, citing that

var implemented = 6 is ICombined;

could be converted by the compiler into

var implemented = 6 is IConvertible && 6 is IComparable;

before translating to IL.

However, now that I think about it further, it would still need to have a type for any variable which holds an ICombined value. Otherwise, it would have to use object and inject a lot of casts to switch between the various interfaces when it needs to process the value. That could lead to other issues.

Regarding use in other assemblies, those assemblies would have to reference this one to get the type definition of ICombined. How would this be a problem for those assemblies?

@HaloFour commented on Wed Apr 22 2015

You could handle this via a helper method that does some reflection (obviously caching the results). I actually had some incubator project some years back to provide a duck-typing helper method that would construct a concrete proxy class in a dynamic assembly, implementing the interface members and pointing them to compatible members of the target type. It worked nicely, but it had the same issues: the proxy type isn’t the same as the target type and you can’t convert between them. The compiler would have the same issue. If it emitted a synthetic type, the instance would no longer be the original type. You couldn’t pass that ICombined to another method and cast it to an int.

@bondsbw commented on Wed Apr 22 2015


An implicit interface may not define any additional functionality. That is, it must be empty.

In that case it might make more sense to remove the braces:

implicit interface ICombined : IComparable, IConvertible;

@gafter commented on Fri Nov 20 2015

I wonder if intersection types would be a better fit, since there is little reason to give a name to the combination.

@benaadams commented on Sat Nov 28 2015

Have lots of these types that require naming for ease of use.

@Thaina commented on Wed Mar 09 2016

I would prefer that we could alias and cast or check for a group of types rather than making a new implicit interface.

For solving your problem it would be better if we just:

using ICombined = (IConvertible,IComparable); 
if(obj is (IConvertible,IComparable))
   return obj as (IConvertible,IComparable);
var list = new List<(IConvertible,IComparable)>();

Syntax is (type,type) or maybe (type & type) and (type | type)

But it would require CLR support

@gregsdennis commented on Wed Mar 09 2016

@Thaina, I think your idea is good. I like the idea of declaring the shorthand in the using statement.

I think the (type | type) syntax would explain the concept better in code. It is literally “this or that type.”

@HaloFour commented on Wed Mar 09 2016

The (A, B) syntax is proposed for tuples. I like (A & B) since the intersected type would have to implement both interfaces. The concept of (A | B) is interesting, a type that implements either interface and the consumer can only access shared members. Java does have limited typing like this in exception handlers, e.g. catch (IOException | SQLException exception) { }.

I wonder how much of either concept the compiler could implement without CLR support. There are tricks that the compiler could do within a method, but it would have to be pretty limited.

@gregsdennis commented on Wed Mar 09 2016

@HaloFour, you are correct: the desired syntax for this should be (A & B). (Early morning; brain not booted.)

@aluanhaddad commented on Thu May 12 2016

Intersection types would add a great deal of expressiveness to the language. I really want to see this implemented and I’m really happy to see renewed interest. This would allow for much more powerful and expressive consumption of generic APIs and allow for powerful, typesafe ad hoc cross axial classification of types. I think the & syntax is a good choice and has worked out very well in languages like TypeScript.

As for the question of declaring a name for some intersection type, I think the using syntax would provide that intuitively.

@Thaina commented on Sun Jun 05 2016

I have seen many proposals about intersection/union types in where clauses. And I think they could all be related to traits too.

Should we have some thread or tag to sum up all these kinds of proposals as generic enhancements or type constraints?

@aluanhaddad commented on Fri Jun 24 2016

@Thaina the problem is that while

where T: IComparable, IEquatable

works very well already for specifying intersecting type constraints, it is impossible to call such a method (without resorting to dynamic) without casting to a type implementing both interfaces. Such a definition may not be available, and even if it is, it is still not ideal. I would welcome a thread gathering ideas for these proposals, but it is about much more than specifying generic constraints. Basically, union types would be very useful in generic constraints. Intersection types would be very useful for consuming generic methods and for pattern matching.
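As a stopgap, the "does this value implement all of these interfaces" check that an implicit or intersection type would automate can be sketched with reflection today (the helper name below is illustrative, not an existing API):

```cs
using System;

static class Implements
{
    // True if the value's runtime type implements every interface listed.
    public static bool All(object value, params Type[] interfaces)
    {
        if (value == null) return false;
        foreach (var i in interfaces)
            if (!i.IsInstanceOfType(value)) return false;
        return true;
    }
}

// Implements.All(6, typeof(IComparable), typeof(IConvertible)) == true
```

This gives the runtime check but none of the static typing: the compiler still cannot let you call members of both interfaces through a single variable.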

Updated 25/03/2017 01:26 3 Comments

MinPy next step prototype


Need input from you guys! Especially people from the MXNet team. We have been discussing this for a while already, so the background might not be clear to you.

Our goal

Integrate JIT and autograd with MXNet/MinPy python interface. Autograd integration is already on the way. We are proposing a uniform interface for both JIT and autograd.

For JIT, we cache user operations and evaluate them lazily (only when the user requests a print or asnumpy). By doing this, we can optimize the computing sequence and cache it for future use. It functions as a layer between Python/user code and NNVM/engine code.

As an example, a user might have code in a tight loop. The graph structure generated in the loop is the same between iterations. In the first iteration, we optimize this computing sequence so that in future rounds we may use the optimized computing sequence to do calculation on different data. We need a way to detect and cache graph structure. That’s the intention of this proposal.

for _ in range(iterations):
    with minpy.jit():
        # code in loop

The boundary of JIT is defined by the minpy.jit context and strict evaluations (print or asnumpy). Operations between boundaries are sent as a whole to NNVM for graph optimization. Computing sequences are cached so each distinct structure is optimized only once.

For example, a user might write:

with minpy.jit():
    a = a + 3
    a = a * 4
    a = a / 100

In this case, three element-wise operations could be merged into one. The first time we encounter this code, we send the computing sequence + * / to NNVM for optimization. The second time, we look up our cache and run the optimized computing sequence instead.

There are many more corner cases, including those where JIT interacts with autograd. Please refer to this gist for a proof-of-concept written in Python.

Implementation proposal

We intend to write the code in the C API directly, alongside the NDArray functions and methods.

The header file is here. We need to intercept MXImperativeInvoke. Instead of calling the underlying functions directly, we place them in our sequence buffer. By storing the function and its arguments, we ensure the involved arrays are properly referenced and not freed prematurely. At a later stage, when a JIT boundary is encountered, we flush the sequence buffer and push it to the engine/NNVM. We achieve lazy evaluation in this way. A similar approach goes for autograd operations. When gradient sequences are calculated, we push them into the JIT queue so they can also be optimized. A sample (not complete) implementation is here.
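The buffer-then-flush idea, with each distinct op sequence "optimized" only once, can be sketched in pure Python (all names here are illustrative, not the actual MinPy/NNVM API):

```python
# Minimal sketch of JIT-style op buffering: ops are recorded, and each
# distinct op sequence ("graph structure") is optimized only once.
class JitBuffer:
    def __init__(self):
        self.buffer = []   # pending (op_name, args) records
        self.cache = {}    # structure signature -> compiled sequence

    def record(self, op_name, *args):
        self.buffer.append((op_name, args))

    def flush(self):
        # The signature ignores operand data, capturing only graph structure.
        signature = tuple(op for op, _ in self.buffer)
        if signature not in self.cache:
            # Stand-in for sending the sequence to the optimizer (e.g. NNVM).
            self.cache[signature] = list(self.buffer)
        compiled = self.cache[signature]
        self.buffer = []
        return compiled

buf = JitBuffer()
for _ in range(2):          # the same structure in both iterations
    buf.record("add", 3)
    buf.record("mul", 4)
    buf.record("div", 100)
    buf.flush()

print(len(buf.cache))       # 1: the structure was optimized only once
```

A real implementation would additionally hold references to the argument arrays so they are not freed before the flush, as described above.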

@mli @tqchen @piiswrong @jermainewang @HrWangChengdu @ZihengJiang @lryta @sneakerkg @zzhang-cn

Updated 27/03/2017 04:46 4 Comments

Proposal: Allow Lambda as Argument to an Attribute


@sirisian commented on Sat May 02 2015

Essentially allow lambdas as arguments to attribute constructors.

Code example below. See specifically the [Converter(s => s == "one" ? 1 : 0)] line and public ConverterAttribute(Func<string, int> converter) in the constructor.

Possible use cases I’ve seen from years ago are converters for serialization libraries and alternative ways to define validator functions.

using System;
using System.Linq;

namespace AttributeTest {
    public class ConverterAttribute : System.Attribute {
        public Func<string, int> Convert { get; }
        public ConverterAttribute(Func<string, int> converter) {
            this.Convert = converter;
        }
    }

    class BaseItem {
        public int GetTotal() {
            return this.GetType().GetProperties()
                .Select(propertyInfo => ((ConverterAttribute)propertyInfo.GetCustomAttributes(typeof(ConverterAttribute), true).First()).Convert((string)propertyInfo.GetValue(this)))
                .Aggregate(0, (input, value) => value + input);
        }
    }

    class DerivedItem : BaseItem {
        [Converter(s => s == "one" ? 1 : 0)]
        public string Foo { get; set; }
    }

    class Program {
        static void Main(string[] args) {
            var convertedItem = new DerivedItem();
            convertedItem.Foo = "one";
        }
    }
}

@HaloFour commented on Sat May 02 2015

C# is limited by the CLR in the types of the parameters that can be used for custom attributes. Those values need to be easily and predictably serialized/deserialized as they are embedded as a BLOB directly in the assembly metadata and reconstructed by the run time. The compiler could probably emit anything it wanted in the BLOB but the CLR itself can only understand and deserialize the integral types, bool, char, string, Type (serialized as a string of the type name) and arrays of those types.

Given that, how would a lambda be embedded? They can’t really be serialized, even if the limitation on the CLR could be lifted. Only thing I could think of is that the compiler generates a synthetic public type with a public static method and then serializes the method runtime handle. The attribute itself would have to be defined in a way that the property is really of long so that the CLR would deserialize the value of the method handle which would be converted to a RuntimeMethodHandle and MethodBase.GetMethodFromHandle called to obtain the MethodBase which is then converted into a delegate, but how to make that seamless sounds messy.

@MaulingMonkey commented on Wed Mar 15 2017

“Only thing I could think of is that the compiler generates a synthetic public type with a public static method”

This is basically how lambdas already work. For:

using System;
using System.Collections.Generic;
using System.Reflection;

static class Program {
    public static void Main(string[] args) {
        var l = new List<int>() { 1, 2, 3 };
        l.RemoveAll(i => i % 2 == 0);
        foreach (var i in l) Console.WriteLine(i);
    }
}

Roughly the following is generated:

```cs
private class <>c
{
    public <>c() {}
    internal bool <Main>b__0_0(int i) { return i % 2 == 0; }

    public static <>c <>9 = new <>c();
    public static Predicate<int> <>9__0_0 = new Predicate<int>(<>9.<Main>b__0_0);
}
```

Multiple lambdas may be collapsed into a single class (depending on whether they share the same captured state? - none in this case…)

System.Runtime.Serialization.Formatters.Binary.BinaryFormatter can currently (de)serialize delegates - presumably by persisting the object instance and method name or similar, although this is potentially rather brittle. A less general purpose method that might work here (by virtue of being unable to capture local method state when constructing attributes, and thus always being able to generate a static delegate instance to reference) would be to simply persist the name of the static field (“<>c.<>9__0_0”).
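A workaround available today, rather than serializing the lambda itself, is to pass a Type plus a static method name to the attribute (both are blob-legal attribute argument types) and resolve the delegate via reflection in the attribute constructor. A hedged sketch, with illustrative names:

```cs
using System;
using System.Reflection;

public class ConverterAttribute : Attribute
{
    public Func<string, int> Convert { get; }

    // CLR attribute blobs can store Type and string, so the converter is
    // referenced by its declaring type and static method name.
    public ConverterAttribute(Type host, string methodName)
    {
        var method = host.GetMethod(methodName, BindingFlags.Public | BindingFlags.Static);
        Convert = (Func<string, int>)method.CreateDelegate(typeof(Func<string, int>));
    }
}

static class Converters
{
    public static int OneToInt(string s) => s == "one" ? 1 : 0;
}

class DerivedItem
{
    [Converter(typeof(Converters), nameof(Converters.OneToInt))]
    public string Foo { get; set; }
}
```

This loses the inline-lambda syntax the proposal asks for, but it survives the metadata round trip without any CLR changes.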

Updated 25/03/2017 00:11

Feature request: Better way of treating null as empty enumerable in foreach loops


@rickardp commented on Wed Nov 04 2015

Consider the following loop:

IEnumerable<SomeType> myEnumerable = ...

foreach (var item in myEnumerable ?? Enumerable.Empty<SomeType>()) {

which will default to an empty enumerable in case myEnumerable above is null. This becomes quite awkward when SomeType is a rather complex type (like Dictionary<string, Func<Tuple<string, int>>>), or even impossible if the type is anonymous.

One way this can be solved right now is to create an extension method, e.g.

public static IEnumerable<T> EmptyIfNull<T>(this IEnumerable<T> enumerable) {
   return enumerable ?? Enumerable.Empty<T>();
}

which is not bad at all actually (though still not built into LINQ AFAIK), but I think that it would be better still if there was a way similar to the null-propagation operators (?. and ?[), e.g. (hopefully with a better syntax)

foreach?(var item in myEnumerable) 

I understand that constructing the syntax for a solution to this problem can be tricky, but I think it is at least worth considering since I believe the case is quite common (I find it a lot more common than the null-propagation indexer operator in my code).

Why use null at all and not empty enumerables everywhere? One example where I find the scenario common is in serialization of objects, where empty collections are omitted and should therefore semantically be identical to null. Since default(IEnumerable<T>) is null, this means that null will be returned when the key is not found. This would also add symmetry with the null-propagating index operators.

@dsaf commented on Thu Nov 05 2015

The question mark should probably be next to enumerable rather than the keyword? Although there might be some ambiguity I guess…

foreach (var item in myEnumerable?)

@vladd commented on Fri Nov 06 2015

Well, in my opinion the empty enumerable should be distinguishable from a null enumerable object, so the two should not behave in the same way. In particular, the whole LINQ world (.Select, .Where, .Concat etc.) throws on null, but works correctly with an empty sequence. Making enumerating null a special case would create inconsistency between the language and the standard library.

(If you often encounter nulls returned from some API, meaning empty sequences, maybe it’s the API that should be corrected?)

@rickardp commented on Sat Nov 07 2015

@vladd I think your point is very valid, and in this way C# differs from, for example, Objective-C where a nil array is essentially the same as an empty array.

The reason I think this would be helpful is not in dealing with broken APIs but rather interaction with dynamic languages and objects, or generics where default(T) is returned, e.g. when a key is not found. To give a concrete example, consider some JSON parser generated from C# classes, parsing configuration data.

public class Configuration 
{
    public List<string> LogFiles { get; set; }
}

Then, to handle the case where the user might have omitted the LogFiles key entirely, or provided an empty array, you are doing something like this:

if (config.LogFiles != null)
    foreach (var logFile in config.LogFiles)

I tend to see this pattern a lot (in my code and others') especially when it comes to dealing with data serialization and interaction with external programs.

For LINQ and index operators we already have null propagation that helps with null checking. I think it is still worth considering some kind of symmetry with foreach since that is such a useful construct.

@bbarry commented on Sat Nov 07 2015

I don’t know if it is particularly useful but you can always do:

myEnumerable?.ToList().ForEach(item => ...); // .ToList() may be unnecessary


myEnumerable?.AsParallel().ForAll(item => ...);

@aluanhaddad commented on Fri Nov 13 2015

@rickardp This is interesting. Most of the time, returning null for collection types likely indicates a badly designed API. The serialization point however, is worth examining in more detail. I also run into this issue/question regularly. I think the correct way to handle it is often to customize the serialization/deserialization to impose the semantics you desire. If you are writing the serialized classes, you can initialize collection properties to empty.

That said, the question of whether empty and null enumerables should be treated as semantically equivalent is just as relevant in the context of serialization as it is in any other. That an object has been serialized does not imply that these semantics should change. For example, as someone who does both frontend and backend development on a daily basis, I would rather receive an empty array than nothing when I am deserializing JSON in JavaScript.

@rickardp commented on Sat Nov 14 2015

@aluanhaddad Good points! Though I often find myself in a situation where this design is out of my control. Some code may be generated, or some tradeoffs might have been done for performance reasons (e.g. large data sets on mobile devices), and in any case that piece of code might not be maintained by me. IMHO one of the greatest strengths with C# is its ability to deal with real-world situations while still keeping your own code clean and maintainable.

Even as we are embracing the semantic difference between null and empty (e.g. where null means “not specified”), the sensible default behavior might still be to not traverse the collection when it is null in many (not all!) situations. After all, I suppose this is why the null-propagating operators were so much requested in the first place.

@aluanhaddad commented on Sat Nov 14 2015

While the ?. operator is wonderful, I worry that providing syntactic sugar specifically for handling possibly-null enumerables will encourage people to return null instead of empty, and thus also encourage people to use the new null-safe enumeration syntax everywhere.

I think that the combination of the ?. operator and #5032, if adopted in its current form, will effectively make nullability the safe and syntactically supported way to represent optional values. If we imagine a C# where values are (assumed to be) non-nullable by default, then nullability and the ?. operator effectively become C#’s Maybe/Option monad. However, while this makes sense for scalar values, I don’t think it makes sense to use it for optional aggregate values because we can think of a nullable type as a conceptual Option<T>. Then, if we think of Option<T> as essentially a trivial collection which contains either 0 or 1 elements, it suggests that aggregate values do not likely need to be wrapped further. You can implement Option<T> as a library, providing an extension method to wrap possible null values, and then write queries over the optionals, but I digress.

At any rate, your extension method

public static IEnumerable<T> EmptyIfNull<T>(this IEnumerable<T> enumerable) {
   return enumerable ?? Enumerable.Empty<T>();
}

actually handles the uncertainty quite well without the need for additional syntax.

@yaakov-h commented on Sat Nov 21 2015

Considering how often something like this is needed, what’s the value of returning null? i.e., what’s the semantic difference between null (there is no collection of items) and an empty enumerable (there are no items in the collection)? In what case would a function or property that returns both be valid?

@paulomorgado commented on Mon Nov 30 2015

@yaakov-h, the difference is the difference between an empty box (empty collection) and no box (null).

If you have no box, you can assume you have no items. But that’s a particular interpretation of what you having no box means. There might be a box but hasn’t been handed to you yet. You cannot tell the color of a box if you have no box. Nor its size/capacity. You can fill an empty box, but you can’t fill no box.

@NickCraver commented on Sun May 29 2016

The confusion I see here is new devs hitting null reference exceptions. It’s just not expected; they don’t know that .GetEnumerator() is called underneath, and the compiler implementation detail they’re hitting isn’t very obvious. I don’t want to allocate a new collection of X every time just to not throw, and the extension method can only be applied to generic collections, not those of older IEnumerable (non-generic) types.

In almost all null cases, I simply don’t want to loop if there’s nothing to loop over. Today, that’s a wrapper:

if (myCollection != null) 
    foreach (var i in myCollection) { }

It’d be nice to have a cleaner syntax for this all around, as we have tons of these if statements. I strongly disagree this is automatically evidence of a bad API design, it often happens completely inside a method and with serialization as noted above. On the serialization front: we use protocol buffers which makes no distinction between empty and null, so you get a null.

I think we can do something here to save a lot of wrapper code, though where the best place for a ? (or alternative) is, I’m not picky about. Eliminating an extra 3 lines of code around many foreach statements would be nice, though. To be fair, I don’t think this will help the new developer case much at all. They’re still going to write a foreach and get a null ref before figuring it out and learning alternative approaches. I can’t think of any new-syntax way to help, it still has to be discovered.

@gafter commented on Sun May 29 2016

I don’t want to allocate a new collection of X every time just to not throw, and the extension method can only be applied to generic collections, not those of older IEnumerable (non-generic) types.

It sounds like Enumerable.Empty<object>() is exactly what you need. It returns something that implements the older non-generic interface IEnumerable too, and it doesn’t allocate something on each call.
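As a quick check of the no-allocation claim: Enumerable.Empty<T> is documented to cache its empty sequence per element type, so on current .NET implementations repeated calls return the same instance:

```cs
using System;
using System.Linq;

class Demo
{
    static void Main()
    {
        var a = Enumerable.Empty<object>();
        var b = Enumerable.Empty<object>();
        // Same cached instance: no per-call allocation.
        Console.WriteLine(ReferenceEquals(a, b));
        // IEnumerable<object> also implements the non-generic IEnumerable.
        Console.WriteLine(a is System.Collections.IEnumerable);
    }
}
```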

@paulomorgado commented on Sun May 29 2016

@gafter, I think @NickCraver is saying that the foreach statement should, instead of calling myCollection.GetEnumerator(), do something functionally equivalent to myCollection == null ? Enumerable.Empty<TItem>().GetEnumerator() : myCollection.GetEnumerator().

But that would be a breaking change.

@NickCraver commented on Mon May 30 2016

@paulomorgado Oh no - I’m saying foreach? (or other syntax) should behave that way. I am in no way arguing for a breaking change - that simply won’t happen. Do I think foreach should have behaved this way? Yeah, probably…but that ship has sailed, we can’t change it now.

@HaloFour commented on Mon May 30 2016

public static class EnumerableExtensions {
    public static IEnumerable<T> EmptyIfNull<T>(this IEnumerable<T> enumerable) {
        return (enumerable != null) ? enumerable : Enumerable.Empty<T>();
    }
}
@paulomorgado commented on Mon May 30 2016

@NickCraver, that was my understanding, but didn’t want to open the can of worms of ? on statements. What’s next? while?? lock??

@HaloFour, I still operate under the premise that extension methods should look and behave like instance methods.

@NickCraver commented on Mon May 30 2016

@paulomorgado none of those constructs suffer from this issue, nor does using, switch, etc. - I don’t buy the slippery slope argument here. I can’t think of even one other block this applies to. This is directly and specifically a result of the compiler implementation of foreach calling .GetEnumerator() underneath and unexpectedly throwing a null reference. The argument is simply for more terse handling of such a common case.

@aluanhaddad commented on Mon May 30 2016

the extension method can only be applied to generic collections, not those of older IEnumerable (non-generic) types.

You can define extension methods for non generic types.

The confusion here I see is for new devs hitting null reference exceptions here. It’s just not expected, they don’t know that .GetEnumerator() is called underneath and it’s a compiler implementation detail that they’re hitting which isn’t very obvious.

I think the problem is that the error message is expressed in terms of the compiler implementation, not that there is an error. Imagine if the error message read “Unable to enumerate null”, wouldn’t that clear things up?

If methods ignore the nullness of the enumerables they receive, merging the concept of null with empty, they are likely to propagate null in their return values. I think that would be harmful. Everyone would use the hypothetical new nullsafe foreach everywhere.

@NickCraver commented on Mon May 30 2016

You can define extension methods for non generic types.

Yes, if we want to define hundreds of extension methods. I was specifically talking about the generic extension methods proposed earlier, the only reasonable thing that may be added in almost all cases.

I agree the error message isn’t obvious, and clearing it up would help, but it still doesn’t address the usability and verbosity issue of wrapping everything in if (enumerable != null) { } today. It has little to do with propagating null, you choose whether to do that today and you’d choose whether to do it with a new syntax. I don’t buy the argument of it being a bad thing because it more easily enables you to do what you were going to do anyway. By using the new syntax, intent is clear. The same way it’s clear (and explicitly opted-into) with ?. in C# 6.

I don’t believe this enables any better or worse behavior than today’s verbose method. It simply allows developers to do what they want with less syntax, exactly as ?. does already.
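For reference, a generic extension-method workaround along the lines discussed above is a one-liner (`OrEmpty` is an illustrative name here, not a BCL method):

```cs
public static class EnumerableExtensions
{
    // Returns the sequence itself, or a cached empty sequence when it is null.
    public static IEnumerable<T> OrEmpty<T>(this IEnumerable<T> source)
        => source ?? Enumerable.Empty<T>();
}

// usage: no null check needed at the call site
// foreach (var value in values.OrEmpty()) { ... }
```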

I think that would be harmful. Everyone would use the hypothetical new nullsafe foreach everywhere.

Two things here: 1. This assumes that it’s harmful. Some people prefer to return null. We certainly do at Stack Overflow, we don’t want the allocation of an empty thing just to do nothing with it. 2. That’s a huge assumption with no backing. People don’t use ?. everywhere either - not even close. I think this is a very bad assumption to base any decisions on.

@HaloFour commented on Mon May 30 2016


I still operate under the premise that extension methods should look and behave like instance methods.

That’s your decision, but my code is perfectly legal. It’s pretty convenient to be able to invoke extension methods off null specifically to handle easy/fluent checks like that. Note that throwing NullReferenceException when trying to invoke an instance method on null is specifically a C# limitation as a part of the language, the CLR is perfectly happy to allow it via the call opcode.
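A small demonstration of that difference (`IsNullOrShort` is an arbitrary example name): extension methods compile to static invocations, so a null receiver is perfectly legal, while an instance method on the same null reference throws:

```cs
static class StringExtensions
{
    // An extension method receives the "receiver" as an ordinary argument.
    public static bool IsNullOrShort(this string s) => s == null || s.Length < 3;
}

string s = null;
bool ok = s.IsNullOrShort(); // true - compiled as a static call, null is fine
// s.ToString();             // compiled as callvirt - throws NullReferenceException
```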

@yaakov-h commented on Mon May 30 2016

Note that throwing NullReferenceException when trying to invoke an instance method on null is specifically a C# limitation

Please explain which IL opcodes the C# compiler emits that causes this.

My understanding is that NullReferenceException is the CLR’s reinterpretation of a low-address access violation, and is not part of C#.

On 31 May 2016, at 8:27 AM, HaloFour wrote:

Note that throwing NullReferenceException when trying to invoke an instance method on null is specifically a C# limitation

@HaloFour commented on Mon May 30 2016


C# always uses callvirt when invoking instance methods, even if the method isn’t virtual. That opcode has the side effect of always throwing NullReferenceException if the instance is null, which satisfies C#’s language specification requirement.

If C# were to instead use the call opcode for instance methods then the CLR would allow them to be called and the this pointer would be null. You’d lose virtual dispatch on virtual methods in those cases, though.

@aluanhaddad commented on Mon May 30 2016

@NickCraver First of all I did not mean to come off as dismissing the value of returning null from a method which returns a collection, if it means improved performance in a critical path, then that is a perfectly valid reason to use it. My point is that I do not want to see the convenience of this foreach? style construct encourage conflation of null with empty.

With regard to performance, I think it is valid but, as @gafter pointed out, methods that return System.Collections.IEnumerable can delegate to System.Linq.Enumerable.Empty<Object>, so there is no allocation penalty.

With regard to extension methods, I’d be curious to see how many collection types you would need to define an extension method for. Hundreds does sound very painful, but that is a lot of very specific types to be returning, especially if they are mostly used with foreach. Again, I don’t know your use case.

That’s a huge assumption with no backing. People don’t use ?. everywhere either - not even close. I think this is a very bad assumption to base any decisions on.

Well, all of my evidence is anecdotal, and thus should be disregarded, but I know many programmers who advocate defensive programming at the point of use. In other words, they are not confident in the API contract or it is insufficiently specified. In fact, it’s been suggested to me that I should check IEnumerables for null even when returned by internal types which I initialize with Enumerable.Empty.

Anyway, if your code base is full of

if(values != null) 
    foreach (var value in values) { ... }    

I would assume you want to replace them all with foreach? (or whatever syntax it would be). That would certainly mean propagating use of the new syntax. Additionally, as you bring up the learning curve for new programmers, they will now have to ask themselves, why are there 2 foreach constructs? Which one do I use? And I predict, again without evidence, that the answer would often be “the safe one”. Which is safe for some definition of safe. And it would likely be the best choice when working in a code base full of collection returning methods that propagate nulls.

Edit: fixed formatting.

@NickCraver commented on Mon May 30 2016

@aluanhaddad How do you see this as any different than ?.Max() or ?.Where(), etc.? All of these debates applied to the safe navigation (?.) operator, and the community saw benefit over any perceived downsides. Have we seen devs go crazy with that and use it everywhere? I certainly haven’t, on the contrary: from what I’ve seen it’s very rarely used and almost always where appropriate. In fact the use cases I’ve seen like deserialized structures are very much in line with the use cases in this issue.

Would I use foreach? in the current if case? Absolutely. It’s a shorter way to get the same effect. It’s the same as ?. instead of nested ifs, { get; set; } instead of a full property with a declared backer, etc. I don’t see this as perpetrating anything “wrong”. Everything it does exists today, we just have to write more code to do it. Many additions over the past few versions of C# are exactly that: new syntax or constructs to make common cases more terse.

As for IEnumerable, there are tons of types in the BCL that are IEnumerable and not a generic version. Each would need its own extension method. A lot of old business code started before generics also follows this pattern of returning a specific collection type, but luckily I don’t have that problem anymore. I have at past jobs, though.

I think the “programmers will do X” argument is moot in either case. If the argument is “they’ll just use foreach?”, then wouldn’t that same group just be adding an if wrapper today? That’s exactly what I’d do. Why can’t we make life easier? That’s what all new language features aim to do.

@aluanhaddad commented on Mon May 30 2016

I don’t see this as perpetrating anything “wrong”. Everything it does exists today, we just have to write more code to do it.

@NickCraver I certainly did not mean to come off as accusatory. I apologize.

I do think in general that it’s a poor practice to return null for values of type IEnumerable, but there are certainly exceptions, and I do not mean to judge. What I would like to avoid however, is the introduction of a new foreach construct.

@NickCraver commented on Tue May 31 2016

For IEnumerable? Sure, it’s easy to make the argument for not returning null, and there are zero-allocation ways to achieve this. However, very often our return types are List<T>, Dictionary<TKey,TValue>, etc. - not the base IEnumerable<T>. Do you consider null invalid in those cases? We’d certainly rather not new up a collection just to throw it away, especially inside the same assembly and not on any public API. The same applies to null lists from deserialization, as discussed earlier.

Also what about types that aren’t returned at all? In razor views for example you’re rendering the output and we have to put these if constructs wrapping everywhere as well. I think the “null return” is a narrow view of the much wider range of use cases. All of these arguments against (and reasoning for) mirror ?., which the community and C# team approved and serves as a tremendous time and code saver today.

To be clear: I’m not advocating specifically for foreach?, I’ve seen foreach(var i in? myEnumerable) and foreach(var i in myEnumerable?) proposed as well. Or, something else. I’m not at all picky about the syntax; there are many ways to improve over the extra 3 lines we require today.

@markrendle commented on Tue May 31 2016

foreach (var i in myEnumerable?) would seem to fit very nicely with the semantics of the existing null-safe operator, since it is applied to the variable rather than a keyword.

@aluanhaddad commented on Tue May 31 2016

However, very often our return types are List<T>, Dictionary<TKey,TValue>, etc. - not the base IEnumerable<T>. Do you consider null invalid in those cases?

I generally return these types through interfaces like IReadOnlyList<T>. If you are doing that, you can return a static field like the one returned by Enumerable.Empty<T> for the extension method on that type. If the collections are exposed via mutable interfaces, then you will need to check for null regardless if you want to make use of that interface. However, if you are only reading from them, why not enjoy the benefits, like IReadOnlyList<T>’s covariance, and generally easier to reason about code, that you can get from an immutable interface?

@markrendle I do find that the syntax foreach (var i in myEnumerable?) makes this more palatable. It is acting on the value, not appearing to introduce a special null aware keyword.

@NickCraver commented on Tue May 31 2016

@aluanhaddad The “why?” for most of those is you’re making a lot of assumptions about a codebase based on personal experience, which we all do. We only know what we’ve seen. However, you must realize many people don’t use those interfaces, don’t want to cast as those interfaces, and all of that complicates code in other ways.

If the collections are exposed via mutable interfaces, then you will need to check for null regardless if you want to make use of that interface.

No…I wouldn’t. That’s the entire point. I want to say “if null, skip it”, that’s what this entire issue is about. Also don’t assume read-only behavior, that’s a bad assumption. Language features and constructs cannot make assumptions like this, they have to handle the widest variety of cases and throw warning or errors for all the rest.

In general most comments here ignore performance. They miss the point of not wanting to call .GetEnumerator() at all if there’s nothing there. Using an empty collection is a loss in performance. You’ll not convince me to use empty collections everywhere - that’s just not how we do things here, and many places I’ve been. It’s not the fastest code you can write, a null check is. The assumption of using empty collections everywhere also assumes you always control both the input and output of every method. This is very rarely the case everywhere.

Why not use IReadOnlyList<T>? Because now I’ve arbitrarily (and drastically) restricted the methods available down to satisfy some requirement that was arbitrarily imposed. It also totally rules out the read/write case, of which we have many. The answer here should not be “rewrite all of your perfectly valid code and create more allocations and restrictions”. We can do better. This issue is about doing better.

@ghost commented on Tue May 31 2016

in? coalesces the pretty syntax + user intent, and it’s more C#-y than the other proposed options.

Updated 25/03/2017 00:10

Create profile page


Showable AND changeable/removable:
* Username
* Password
* Email
* Statistics (+ clear button)

Only showable (more or less^^):
* Registration date? (Would require extension of the user database)
* Statistics
* Account removal button

Updated 24/03/2017 21:27

Discussion: Code Generator Catalog


@alrz commented on Fri Dec 30 2016

Here is an incomplete list of potential use cases for code generators to explore ideas and discussion. I believe this will help to shape generator APIs and language features around it.

1. NPC implementation (huh)

partial class Person
{
  public string FirstName { get; set; }
  public string LastName { get; set; }
}

// generated
partial class Person : INotifyPropertyChanged
{
  private string _firstName;
  public replace string FirstName
  {
    get => _firstName;
    set
    {
      if (_firstName != value)
      {
        _firstName = value;
        OnPropertyChanged(nameof(FirstName));
      }
    }
  }
}
Also, if a property depends on others, we could also raise additional events by inspecting its accessor.

```cs
[NPC]
public string FullName => this.FirstName + ", " + this.LastName;

// generated
public replace string FirstName
{
    get => _firstName;
    set
    {
        if (_firstName != value)
        {
            _firstName = value;
            OnPropertyChanged(nameof(FirstName));
            OnPropertyChanged(nameof(FullName));
        }
    }
}
```

Note, #850 can lead to simpler code generation, i.e. it does not need to figure out a name for the backing field:

```cs
public replace string FirstName
{
    string field;
    get => field;
    set
    {
        if (field != value)
        {
            field = value;
            OnPropertyChanged(nameof(FirstName));
            OnPropertyChanged(nameof(FullName));
        }
    }
}
```

Or the type (#8364):

```cs
public replace string FirstName
{
    get;
    set
    {
        if (field != value)
        {
            field = value;
            OnPropertyChanged(nameof(FirstName));
            OnPropertyChanged(nameof(FullName));
        }
    }
}
```

Note: If none of the `replace`d members call `original`, the original declaration should be removed, as the backing fields are no longer being used.

2. Dependency Injection

Dependency containers commonly depend on reflection to create objects. This could be moved to compile time via code generators. In this scenario, there should be a way to parametrize the code generator to switch between implementations, e.g. mocks. (The attributes here do not belong to MEF.)

```cs
[Export]
class Service1 : IService1
{
    [Import]
    private IService2 Service2 { get; }
}
```

Note that this will be only useful to manage object lifecycle. If implementations come from outside of assembly boundaries we should probably fallback to reflection under the hood.

3. Caching / Lazy Initialization

public string Property => ComputeProperty();

// generated
private string _property;
public replace string Property => _property ?? (_property = original);

4. Memoization

public string F(string arg) { ... }

// mind you, a simple demonstration without thread-safety
private readonly Dictionary<string, string> _fCache = new();
public replace string F(string arg)
  => _fCache.TryGetValue(arg, out var result) ? result : _fCache[arg] = original(arg);

5. Dependency Properties

partial class Foo : DependencyObject
  [PropertyMetadata(defaultValue: string.Empty)]
  public string Name { get; set; }
  public int Size { get; }

// generated
partial class Foo 
  public static readonly DependencyProperty NameProperty =
    DependencyProperty.Register(nameof(Name), typeof(string), typeof(Foo), new(string.Empty));

  public replace string Name
    get => (string)GetValue(NameProperty);
    set => SetValue(NameProperty, value);

  internal static readonly DependencyPropertyKey SizePropertyKey =
        DependencyProperty.RegisterReadOnly(nameof(Size), typeof(int), typeof(Foo));

  public static readonly DependencyProperty SizeProperty = SizePropertyKey.DependencyProperty;

  public replace int Size
    get => (int)GetValue(SizeProperty);
    internal set => SetValue(SizePropertyKey, value);

Note, it would be nice to be able to inspect the property initializer and use it in the generated code, e.g.

```cs
public string Name { get; set; } = string.Empty;
```

6. ORMs

Currently ORMs use runtime code generation (NH) or proxies (EF) to enable change tracking and lazy loading in POCOs. They could ship with a code generator to move this procedure to compile time.

```cs
class BlogPost
{
    public string Title { get; set; }
    public string Body { get; set; }
    public List<Comment> Comments { get; }
}
```

7. Mixins

It’s possible to implement member delegation as an analyzer, but if we want to delegate all members, a code generator is probably preferable.

```cs
partial class Class
{
    [Mixin] private readonly ISomething _something = new Something();
}

// generated
partial class Class : ISomething
{
    public void DoSomething() => _something.DoSomething();
}
```

8. Double Dispatch

This could be used to implement visitor pattern or a simple double dispatch:

public object F(T x) { .. }
public object F(U x) { .. }
public extern object F(Base p);

// generated
public object F(Base p)
{
    switch (p)
    {
        case T x: return F(x);
        case U x: return F(x);
        default: throw ExceptionHelper.UnexpectedValue(p);
    }
}

You could use a similar technique to generate semi-virtual extension methods. Since this is generated by a generator, you are free to handle the failure case differently.

Generators should be able to produce diagnostics if target members are malformed, e.g. a method instead of a property.

9. Duck Typing

interface IDuck
  void Quack();

class A { public void Quack() {} }

void F(IDuck d) { }

F(new A().Wrap());

// generated
static class Extensions
  class WrapperA : IDuck
    private readonly A _obj;
    public WrapperA(A obj) => _obj = obj;
    public void Quack() => _obj.Quack();
  public static IDuck Wrap(this A obj) => new WrapperA(obj);

Though, #11159 + #258 = #8127 can greatly improve this scenario in terms of perf and easier code gen.

Note: An assembly attribute could be used to annotate exterior types:

```cs
[assembly: Duck(typeof(IDuck), typeof(AnotherAssembly.B))]
```

10. Type providers for xml, json, csv, sql, etc

F# type providers can take a string as parameter to bootstrap code generation. This requires code generators to accept a parameter which is not possible in #5561.

11. Variadic generics

There are types with variable arities including Func, Action and ValueTuple. It is a common scenario where we want to have a bunch of similar methods that are only different in number of generic types.

12. Basic implementation

Some interfaces like IEquatable, IComparable, etc., given “key properties”, could be implemented via generators. This can also be implemented as an analyzer, but with generators it would be totally transparent. ToString overrides also belong to this category.
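As a sketch of what such a generator might emit for the Person type from item 1, with FirstName and LastName as the key properties (the shape below is illustrative, not a specification):

```cs
partial class Person : IEquatable<Person>
{
    public bool Equals(Person other)
        => other != null
        && FirstName == other.FirstName
        && LastName == other.LastName;

    public override bool Equals(object obj) => Equals(obj as Person);

    public override int GetHashCode()
        => ((FirstName?.GetHashCode() ?? 0) * 397) ^ (LastName?.GetHashCode() ?? 0);

    public override string ToString() => $"Person({FirstName}, {LastName})";
}
```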

13. Serialization

Serialization to various formats like json, can be provided at compile-time without using reflection.

@HaloFour commented on Fri Dec 30 2016

I get where you’re going with extern but that seems like it would add a big layer of complexity, given the member then must be replaced but the replacement must not call original. With that as a possibility, the concerns of ordering become much more important.

@alrz commented on Sat Dec 31 2016

@HaloFour Only if you don’t mind the additional backing field per property (or having to return default for methods.) The latter wouldn’t be that much trouble (if you don’t have out parameters) but the former would definitely affect memory usage. Note that extern is just a non-ambiguous alternative to #9178 which was originally mentioned in

@HaloFour commented on Fri Dec 30 2016


The partial keyword might also work there given that’s sort of how it is used today, although with numerous behavioral differences and the compiler eliding calls if never implemented.

Either way, I don’t disagree with the concept, just mentioning the additional complexity it creates. Order of member replacement is already a known concern.

@alrz commented on Sat Dec 31 2016

The compiler should remove the original declaration if none of replaced members called original. This could address the problem with additional backing field but I think extern is still useful on methods.

/cc @HaloFour I’ve updated OP to reflect this.

@bbarry commented on Sat Dec 31 2016

One could implement IDisposable via a generator:

public partial class Resource : IDisposable
{
    private IntPtr nativeResource;
    private AnotherResource managedResource;
}


Generating async (code lifted from, idea courtesy @roji)

void SendClosureAlert()
    _buf[0] = (byte)ContentType.Alert;
    Utils.WriteUInt16(_buf, 1, (ushort)_connState.TlsVersion);
    _buf[5 + _connState.IvLen] = (byte)AlertLevel.Warning;
    _buf[5 + _connState.IvLen + 1] = (byte)AlertDescription.CloseNotify;
    int endPos = Encrypt(0, 2);
    _baseStream.Write(_buf, 0, endPos);

async Task SendClosureAlertAsync(CancellationToken cancellationToken)
    _buf[0] = (byte)ContentType.Alert;
    Utils.WriteUInt16(_buf, 1, (ushort)_connState.TlsVersion);
    _buf[5 + _connState.IvLen] = (byte)AlertLevel.Warning;
    _buf[5 + _connState.IvLen + 1] = (byte)AlertDescription.CloseNotify;
    int endPos = Encrypt(0, 2);
    await _baseStream.WriteAsync(_buf, 0, endPos, cancellationToken);
    await _baseStream.FlushAsync(cancellationToken);

Allocation free yield (ignore the overflow issue)

public partial class MyFib
    public partial struct FibIter {}
    [Iterator(typeof(FibIter), "Fib", BindingFlags.Public)]
    private IEnumerable<int> GenFib()
        int j = 0, i = 1;
            yield return i;
            j += i;
            yield return j;
            i += j;

// generated
public partial class MyFib
    public FibIter Fib()
        return default(FibIter);
    public partial struct FibIter: IEnumerable<int>, IEnumerator<int>
        private int _1, _2, j, i;
        public FibIter GetEnumerator() => this;
        public int Current => _2;
        public bool MoveNext()
            switch (_1)
            case 0:
                this._1 = -1;
                this.j = 0;
                this.i = 1;
            case 1:
                this._1 = -1;
                this.j += this.i;
                this._2 = this.j;
                this._1 = 2;
                return true;
            case 2:
                this._1 = -1;
                this.i += this.j;
                return false;
            this._2 = this.i;
            this._1 = 1;
            return true;
        object IEnumerator.Current => _2;
        void IDisposable.Dispose() {}
        void IEnumerator.Reset()=> throw new NotSupportedException();
        IEnumerator<int> IEnumerable<int>.GetEnumerator() => this;
        IEnumerator IEnumerable.GetEnumerator() => this;

A very similar thing could be done to implement async differently and save allocations, for example

@alrz commented on Sat Dec 31 2016

@bbarry Worth mentioning: #10449.

@eyalsk commented on Sat Dec 31 2016

@alrz you can add #15282 to the list. :)

@ufcpp commented on Sat Dec 31 2016


This is a temporary solution until Records are released, but I have a generator (using an Analyzer and Code Fix) for Record-like purposes:

Type aliases/Strong typedef

struct Name { string value; }
struct Distance { double value; }

// generated

partial struct Name
    public string Value => value;
    public Name(string value) { this.value = value; }
    // equality, cast operator, etc.

partial struct Distance
    public double Value => value;
    public Distance(double value) { this.value = value; }
    // equality, cast operator, etc.


For mixin use cases, I want the ability to pass the this pointer to the mixin.

public class Sample
    NotifyPropertyChangedMixin _npc;

// generated
public class Sample : INotifyPropertyChanged
    NotifyPropertyChangedMixin _npc;

    public event PropertyChangedEventHandler PropertyChanged { add { _npc.PropertyChanged += value; } remove { _npc.PropertyChanged -= value; } }

    protected void OnPropertyChanged(PropertyChangedEventArgs args) => _npc.OnPropertyChanged(this, args);

    protected void OnPropertyChanged([CallerMemberName] string propertyName = null) => _npc.OnPropertyChanged(this, propertyName);

    protected void SetProperty<T>(ref T storage, T value, PropertyChangedEventArgs args) => _npc.SetProperty(this, ref storage, value, args);

    protected void SetProperty<T>(ref T storage, T value, [CallerMemberName] string propertyName = null) => _npc.SetProperty(this, ref storage, value, propertyName);

@asdfgasdfsafgsdfa commented on Mon Jan 02 2017

Personally I would love having a generator write serialization functions for my objects. That should easily be possible as well, right?

How would one go about debugging such generated code, though? I think it would be very important to be able to preview, inspect, and debug (including edit-and-continue) all generated code. Would the additional code be generated into “nested files” (like T4 generated results or code generated by the WinForms designer)?

Manipulating code-text directly might not be the best approach though, so we would end up manipulating the Ast?

So this feature would most likely be some kind of more tightly integrated T4 system (or a variant using the AST)? What alternatives are there that don’t prevent inspection and live debugging of the code?

This could also be utilized in some obfuscators maybe… To that end users might want to debug the obfuscated versions in two ways: Viewing and stepping through the code as if it were unmodified (even though the code might be heavily obfuscated by some post processing code generator) and on the other hand someone might want to view the code as it was really compiled, maybe to find some bug in the obfuscation…


@alrz commented on Mon Jan 02 2017

@asdfgasdfsafgsdfa You can refer to the original proposal here: It would not manipulate AST or anything, it is a “code generator” so it can only add code to the compilation unit. Debugging works directly on the generated code.

@asdfgasdfsafgsdfa commented on Tue Jan 03 2017

@alrz I’m not sure I completely understand. In the original discussion people mentioned that dealing with strings alone would be somewhat of a hassle. Same with replacing vs. adding code.

I would rather see replacement instead of (or in addition to) addition of code and AST modification instead of manipulating code-strings. Should I focus on #5561 then? This issue ( #16160 ) doesn’t close or supersede the discussion over at #5561 right? It is just an alternative approach to solve a similar problem, right?

Oh and another quick question: Has there been any official opinion on those questions? I saw it on a “strong interest” list somewhere, but as I understand the actual implementation and usage is still pretty unclear and heavily discussed. Have any of the people that will decide in the end mentioned their own opinion on those more detailed questions?

@fanoI commented on Tue Jan 24 2017

One other possible use of Generators could be for bitfield structs: this implementation used reflection and so was 10'000 times slower than using bitmasks (!), but using Generators the conversion functions ToUInt64() and ToBinaryString() could be generated at compile time:

@eyalsk commented on Mon Jan 30 2017

Another thing that I thought about is using it in the following manner:

```cs
[Calculator(Method.Fib, 40)]
const int fib = 0;

[Calculator(Method.CubeVolume, 2)]
const int cube = 0;
```

@gulshan commented on Mon Jan 30 2017

F# community is discussing a type provider to generate types from types. Felt kind of similar. This comment has some probable usage-

@axel-habermaier commented on Wed Feb 01 2017

Generate PInvoke code: code generators could also be used to generate PInvoke code from C/C++ header files.

@asdfgasdfsafgsdfa commented on Wed Feb 01 2017

  • Having a code generator that does two-way databinding from my settings class to my custom UI controls (when you have to replace the existing databinding solutions).

  • A generator that provides your own implementation of [SyncVar] (from Unity3D) either in Unity, or in programs that have nothing to do with Unity.

@gulshan commented on Tue Feb 14 2017

I want to add lightweight (local-only) Actor models like nAct and this one to the list. They use async method calls instead of message passing. A normal worker class attributed as [Actor] could be code-generated to have basic thread management (attached to a specific thread), where the methods marked as [Actor.Message] would be put inside an async Task automatically.

Updated 25/03/2017 00:07

Higher Kinded Polymorphism / Generics on Generics


@diab0l commented on Thu Apr 23 2015


Haskell has monads
Monads are like bacon
C# needs bacon


I have a mad craving for higher kinded polymorphism in C#. And I have no blog to call my own, so here’s an essay on this stuff.

That’s a fancy name with a lot of crazy type theory and papers behind it, but if you have an understanding of how generic types work in C#, the concept is not terribly hard to understand.

What C# with a taste of bacon would look like

public static T<A> To<T, A>(this IEnumerable<A> xs)
    where T : <>, new(), ICollection<>
{
    var ta = new T<A>();
    foreach (var x in xs) {
        ta.Add(x);
    }
    return ta;
}

var data = Enumerable.Range(0, 20);

var set = data.To<HashSet<>, int>(); // sorcery!
var linkedList = data.To<LinkedList<>, int>();
var list = data.To<List<>, int>();

What is going on here?

where T : <>,           // 1
          new(),        // 2
          ICollection<> // 3
  1. T is constrained to be a generic type definition with one type parameter.
    • In CLR lingo: T is a generic type of arity 1
    • In type theory lingo: T is a type constructor of kind * -> *
  2. Also it should have a default constructor
  3. And it should implement ICollection<> (meaning for each type X, T<X> implements ICollection<X>)

Using this you could convert an IEnumerable<T> to any other type that is a collection. Even all the ones you have never thought about, including the weird ones from libraries which do not get a whole lot of support because nobody uses them (except of course you).

Like HashSet (such a weird collection) or LinkedList (you would need to be crazy). They are fine citizens of System.Collections.Generic and very useful, yet they don’t get the love they deserve, because they are not as famous as List<> and implementing To...() methods in Linq for all of them would be a painful task.

However, with higher kinded polymorphism, they could all get a general concise implementation.
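For contrast, the closest you can get in today’s C# is to pass the closed collection type as an ordinary type parameter. This compiles, but the caller must spell out both type arguments and there is no way to express “T is a type constructor” (a sketch, with illustrative names):

```cs
// Works in current C#: TCollection is a closed type like HashSet<int>,
// not the type constructor HashSet<>.
public static TCollection To<TCollection, T>(this IEnumerable<T> xs)
    where TCollection : ICollection<T>, new()
{
    var result = new TCollection();
    foreach (var x in xs)
        result.Add(x);
    return result;
}

// usage: both type arguments must be written out
var set = Enumerable.Range(0, 20).To<HashSet<int>, int>();
```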

Where’s the rest of that bacon?

That general conversion function is just the tip of the iceberg. You can do all kinds of crazy transformations with higher kinded polymorphism. And since it allows for such lovely abstractions you only have to implement them once.

Or better yet: Somebody else implements them in a NuGet package and you don’t need to care. Composition over coding.

Haskell? Is that a dog’s name?

That NuGet package could be called something neat and concise like, say, Prelude. Yes, what a fine name! There are some other party poopers, but the lack of Higher Kinded Polymorphism is the biggest road block in stealing all the juicy bacon from the Haskell community.

You know how Linq is awesome? Do you also know that Linq is (kind of) an implementation of the List Monad? Well, there are lots more Monads in Prelude and most of them are awesome. And currently a bitch to implement in C#.

Plus, HKP would allow us to abstract on the concept of Monads! And on concepts like Tuples (never heard of them), Functors (not what you’re thinking), Arrows, Categories and all the crazy math that comes with them.

I’ve put together a gist of how wonderful this would be in combination with some other features for implementing Maybe.

I don’t know what you’re talking about, but it sounds expensive

Let’s first look at a summary of the benefits that HKP would bring to C#:

1. This feature would make C# the most relevant functional programming language with sub-typing and strong static type checking
2. A natural extension to the work done by the Linq team
3. A natural extension to generics
4. Much more natural porting of Haskell code (think about it: years upon years of research on bacon finally in a consumable form)
5. A real implementation of the Maybe monad (about time)
6. More expressiveness without sacrificing static typing
7. HKPs allow a kind of meta-programming without the nonsense. Whoever used C++ before knows how easily template programming becomes a nightmare. On the other hand, Haskell programmers are delighted by their juicy type system.
8. Have I mentioned bacon?

Now let’s talk about work:

- Before anything else, a proper implementation would require CLR support
  - That would be quite involved, but not impossible. Also I believe it can be implemented as a strict extension to the current metadata without breaking existing libraries
  - As a consequence, the F# crowd would benefit from this. I bet they are overjoyed to hear about bacon
  - As another consequence, implementing Haskell on top of .Net would be much less of a pain
- C# Syntax
  - In C# we can already refer to generic type definitions in typeof() as in typeof(IDictionary<,>)
  - I think the where T : <,> clause seems neat, but is probably not powerful enough (no way to constrain variance on T’s parameters or ‘swap’ parameters for implementations)
  - maybe more like where T<,> : <U, X>, IDictionary<X, U>
  - or rather void Foo<A<B, C>, D>() where A<B, C> : IDictionary<C, B> where B : class
  - Well, the syntax needs work.
- Type checking
  - it’s not work if it’s fun

But I’m a vegetarian

Think of Bacon as an abstract higher kinded type class with non-carnivorous implementations

@VSadov commented on Thu Apr 23 2015

There is an obvious issue with CLR not supporting higher-kinded types. Some languages (Scala, F*) seem to be able to get around such limitations of underlying runtimes, but I do not know how well that works in practice.

@MadsTorgersen if not already there, higher kinded generics may need to be added to the list of possible future directions. Perhaps next to the Meta-programming :-).

@diab0l commented on Fri Apr 24 2015

From what I’ve seen on UserVoice and SO, there are encodings to do this stuff in F# using its inlining system.

But these encodings tend to be ugly, complex and compromising. That definitely doesn’t sound like something I want for C#.

It’s also worth noting that the feature request has been rejected for F#, since it would require CLR support, its costs outweigh its benefits, etc. However, that was before Core was open sourced and all this crazy open development started.

@GSPP commented on Wed Apr 29 2015

@diab0l can you give more examples for what this would be good for? I always wondered what practical things you can do with higher kinded types.

For C# 6 they are not going to touch the CLR but there are other potential changes queued up (e.g. structural types and default implementations for interface methods). Maybe C# 7 can batch all of them into a new incompatible runtime.

@isaacabraham commented on Sat May 02 2015

@diab0l I don’t think it’s been rejected from F# because of lack of CLR support - I believe it could be done in a type-erased style - it’s simply not high enough up the feature list on uservoice.

I’ve also been hearing people in the F# community asking for GADTs first.

@diab0l commented on Sun May 03 2015

@isaacabraham Of course it could be implemented type-erased style, but that’s an unpleasant can of worms.

If it were implemented like C++ templates, it would be a source level feature, meaning you can’t have higher kinded polymorphic methods in assemblies. I think we can reach consensus that such a feature would only pollute the language.

If it were implemented like Java generics, it would lead to all sorts of nonsense. For example you couldn’t do typeof(M<>) inside the hkpm, since the type has been erased.

So to implement this feature in a reusable way, there would at least need to be a way for the C# compiler to:

1. actually compile the hkpm to an assembly
2. encode the extra information about the type (via an attribute perhaps)
3. decode that extra information so the method from the assembly can actually be used
4. use some very clever IL hacks to make it “just work” for the implementing method and at the call site

Now since it has to happen at assembly level, that would essentially be a language-agnostic extension to the CLR’s common type system. And since the extra information would need to be encoded, it wouldn’t be type erased at all, it would be reified, albeit in an ugly hack.

If we are going to extend the CLR’s common type system with some cool feature anyway, then I would suggest we do it right: by adding it as a first-level feature to the standard and implementing it right instead of via (clever or not) ugly hacks.

@diab0l commented on Sun May 03 2015

@GSPP To be frank: I can’t give you a whole lot of examples. Until recently I just haven’t given it any thought.

However, what I can tell you is that, as an abstraction feature which abstracts over abstract data structures, it would primarily allow us to write some neat libraries. Also, in Haskell it’s used quite naturally as part of everyday programming for everything, so it allows for a different style of coding.

The best example I can currently come up with is abstracting over the Linq Zoo. Consider you write a function which only relies on the shared semantics of Linq-to-Objects (IEnumerable<>), Database Linq (IQueryable<>), PLinq (IParallelEnumerable<>), Reactive extensions (IObservable<>), wherever IQobservable<> lives and whichever other crazy Linq-style APIs everybody whips up.

The only difference in the shared methods of these APIs (Select(), First(), …) is a) the data type they operate on (IEnumerable<>, IQueryable<>, …) and b) whether they straight up take Func<> or Expression<Func<>> as arguments.

We cannot currently abstract over a) or b) :(

Currently, as a library author who intends to write a method which works on any of these APIs, you would have to write your function once for each API (without casting to IEnumerable<>, which forces a query to lose its semantics, for example), even if your function does not care whether it queries an IEnumerable<>, an IParallelEnumerable<> or even an IQueryable<>.

With the introduction of HKPMs, and a retrofitted static interface ILinq<> for Linq implementations, you could write your awesome function once and it would work not only on these 5 Linq implementations, but on every future implementation as well.

@diab0l commented on Sun May 03 2015

Please do not understand me wrong. I am well aware that implementing this feature is going to be a big undertaking and not going to happen anytime soon, if at all. At least not until the CLR ports have become stable and established.

Also, it’s not clear whether higher kinded polymorphism would be a good fit for C#. Maybe there’s some better way to raise the bar on abstraction.

What I think is clear is that C# and the CLR have incorporated features from functional languages with great success and the trend for using C# as a functional language is strong.

Having all this in mind, I think it’s worthwhile to explore how C# would benefit from first-level support for Higher Kinded Polymorphism and what such a syntax would be like.

Also, I think that, along with related issues, it raises the question: should the common type system, as a one-size-fits-all solution, be all that common?

@asd-and-Rizzo commented on Mon May 04 2015

There is the example for HKT in C#

@MI3Guy commented on Mon May 11 2015

Another advantage would be the ability to use value type collections (e.g. ImmutableArray<T>) without boxing them or writing a method that uses that specific collection.

@DrPizza commented on Mon Jun 20 2016

Also, it’s not clear whether higher kinded polymorphism would be a good fit for C#.

Template template parameters are a good fit for C++, so it’s natural enough that generic generic parameters would be a good fit for C#. Being able to make Linq methods independent of concrete classes seems nice enough.

@gafter commented on Fri Sep 04 2015

@TonyValenti Please suggest that in a new issue.

@aluanhaddad commented on Wed Sep 16 2015

I would love to see this addition if some future version of the framework enables it.

@oscarvarto commented on Mon Jun 20 2016

+1 I am also craving for some bacon!

@mooman219 commented on Thu Jun 23 2016

+1 Ran into an issue where I needed this today

@aluanhaddad commented on Thu Jun 23 2016

It’s too bad this is one of those things that definitely is going to require CLR support. But then again maybe it’s a good opportunity for the CLR to evolve.

@Pauan commented on Fri Sep 23 2016

Please excuse the F#, but here is an example of where higher-kinded polymorphism would help out a lot.

There is a function called map which is used quite a lot in F#:

map : ('a -> 'b) -> list<'a> -> list<'b>
map : ('a -> 'b) -> array<'a> -> array<'b>
map : ('a -> 'b) -> seq<'a> -> seq<'b>
map : ('a -> 'b) -> option<'a> -> option<'b>
map : ('a -> 'b) -> Event<'a> -> Event<'b>
map : ('a -> 'b) -> Async<'a> -> Async<'b>

Its behavior is fairly simple. It allows you to take a “container” (like a list, array, dictionary, etc.) and transform every element inside of the container:

List.map (fun x -> x + 10) [1; 2; 3; 4]

The end result of the above code is [11; 12; 13; 14]. In other words, for every element in the list, we added 10 to it.

As you can see, map is used for many different types. It would be nice to be able to write functions that can work on any type as long as that type has a map function.

Because F# has interfaces (just like C#), you might try this:

type IMap<'T> =
  abstract member Map : ('a -> 'b) -> 'T<'a> -> 'T<'b>

// This is just for convenience: it is easier to use and auto-casts to the IMap interface
let map fn (a : IMap<'T>) =
  a.Map fn a

And then you could implement the IMap interface on any class or discriminated union:

type Option<'a> =
  | None
  | Some of 'a

  interface IMap<Option> with
    member this.Map(fn, a) =
      match a with
      | None -> None
      | Some a -> Some (fn a)

You can then use the map function on anything which implements the IMap interface. And you can write generic code which uses the map function:

let mapadd a b =
  map (fun x -> x + b) a
// Examples of using it:
mapadd (Some 1) 5   // the end result is (Some 6)
mapadd [1; 2; 3] 5  // the end result is [6; 7; 8]

The mapadd function will work on any type which implements IMap. This is marvelous: without interfaces, we would need to write the mapadd function multiple times: once per type. In other words, we would need List.mapadd, Array.mapadd, Option.mapadd, Async.mapadd, etc.

But with interfaces, we can write it once and reuse it for many types! And of course static typing is fully preserved: you get a compile-time error if you make any mistakes, such as calling mapadd on a type which does not implement IMap.

The mapadd function is very simple, but this also works with more complex functions: you can write a complex function which works with anything which implements IMap, rather than needing to copy-paste the complex code for each type.

Unfortunately, this does not work, because .NET lacks higher-kinded polymorphism:

error FS0712: Type parameter cannot be used as type constructor

In other words, you cannot specify the type 'T<'a> where 'T is a type parameter (like in the IMap interface).

This also applies to other functions as well, like bind, filter, flatten, fold, iter, etc.

Quite a lot of the list and array functions would benefit from this. In fact, any “container” (list, seq, Async, Option, etc.) can benefit a lot from higher-kinded polymorphism. Of course there’s plenty of other examples where this is useful (monads, arrows, etc.) but those examples tend to be familiar only to functional programmers.

Unfortunately I do not know C# well enough to give any examples of where higher-ordered polymorphism would be useful in C#, but I hope that map is familiar enough that C# programmers can see how this would be useful.

So, in short: higher-kinded polymorphism is simply a more powerful form of generics. It is useful for precisely the same reason why generics and interfaces are useful: it allows us to write code which can be reused, rather than reimplemented over and over again.

P.S. If somebody with more C# experience than me could translate the above F# code into equivalent C# code, that may help others with understanding what higher-kinded polymorphism is, how to use it, and what it’s good for.
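Since the thread keeps pointing at Haskell as the language where this works today, here is a minimal runnable Haskell sketch of the same mapadd function (the mapAdd name is illustrative): because f ranges over type constructors of kind * -> *, the function is written once and works for lists, Maybe, and any other Functor.

```haskell
-- 'f' is a type constructor variable constrained to Functor, which is
-- exactly the higher-kinded abstraction the IMap interface above
-- cannot express in F#/C# today.
mapAdd :: (Functor f, Num a) => f a -> a -> f a
mapAdd xs b = fmap (+ b) xs

main :: IO ()
main = do
  print (mapAdd (Just 1) 5)   -- Just 6
  print (mapAdd [1, 2, 3] 5)  -- [6,7,8]
```

No per-container reimplementation is needed; the compiler picks the Functor instance from the container type at each call site.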

@orthoxerox commented on Fri Sep 23 2016

This repo demonstrates a quite interesting approach to typeclasses in c#:

@Alxandr commented on Tue Sep 27 2016

Actually, both C# and F# has higher kinded polymorphism to an extent. It’s what allows computational expressions in F#, and LINQ in C# to work. For instance, when you in C# do

from item in list
select item + 2

this gets converted into something like

list.Select(item => item + 2)

or in F#: List.map (fun item -> item + 2) list

This is done through some compile-time constraints that we are unfortunately unable to extend within the language. Basically what we’re asking for is the ability to create interfaces like this:

interface ILinqable<TSelf<..>, A> {
  TSelf<B> Select<B>(Func<A, B> func);
}

class List<T> : ILinqable<List<..>, T> {
  // need to implement Select
}

@orthoxerox commented on Tue Sep 27 2016

@Alxandr not really, LINQ query expressions are a purely syntactic convention.

@aluanhaddad commented on Tue Sep 27 2016

@orthoxerox yes they are but the result is higher kinded typing for a very limited subset of operations. Consider:

static class EnumerableExtensions
{
    public static List<R> Select<T, R>(this List<T> list, Func<T, R> selector) =>
        list.AsEnumerable().Select(selector).ToList();

    public static List<T> Where<T>(this List<T> list, Func<T, bool> predicate) =>
        list.AsEnumerable().Where(predicate).ToList();

    public static HashSet<R> Select<T, R>(this HashSet<T> set, Func<T, R> selector) =>
        new HashSet<R>(set.AsEnumerable().Select(selector));

    public static HashSet<T> Where<T>(this HashSet<T> set, Func<T, bool> predicate) =>
        new HashSet<T>(set.AsEnumerable().Where(predicate));
}

var numbers = new List<int> { 0, 1, 2, 3, 4 };

List<string> values = 
    from n in numbers
    where n % 2 == 0
    select $"{n} squared is {n * n}";

var distinctNumbers = new HashSet<int> { 0, 1, 2, 3, 4 };

HashSet<int> distinctSquares = 
    from n in distinctNumbers
    select n * n;

@aluanhaddad commented on Tue Sep 27 2016

The problem is that it has to be implemented for every collection type in order to be transparent. In Scala, operations like map and filter take an implicit parameter which is used as a factory to create new collections, choosing the correct collection type based on the source.

@OzieGamma commented on Tue Sep 27 2016


Indeed, overloading lets you use functions as if you had higher-kinded polymorphism.

But you still have to write all those overloads. That’s what we’d like to avoid.

@isaacabraham commented on Tue Sep 27 2016

This isn’t even overloading. There is the ability to “hijack” the LINQ keywords if your types have methods with certain names and signatures - same as foreach really.

So in that way I suppose there’s some similarity but in my limited understanding of HKTs, it’s not the same - you don’t have the reuse that they give you.

@aluanhaddad commented on Tue Sep 27 2016

I am not suggesting equivalence. I am suggesting that it is possible to slightly abstract over the type that is itself parameterized by using LINQ method patterns. I was not proposing this as an alternative to higher kinded types as it clearly is not. If Rx is not hijacking LINQ keywords, I hardly think this is, but it is certainly surprising and I would avoid this pattern.

@Pauan commented on Tue Sep 27 2016

@Alxandr It’s true that LINQ/computation expressions allow you to use the same syntax on multiple different types, but it is a hardcoded syntax trick.

Here is an example in C# where LINQ will not help you:

This is the reason why higher kinded types are useful. I know you’re already aware of this, I’m mentioning this for the sake of other people who think that LINQ is “good enough”.

It’s also possible to hackily add in support for overloaded functions in F# by abusing inline functions and statically resolved type parameters:

This is essentially using the same trick that LINQ is using: any type which happens to implement a static method with the right name and right type will work.

But unlike interfaces: - All of the functions need to be inline, including any functions which use inline functions (this can cause code bloat and long compilation times). - The method names can collide (you can’t define two different “interfaces” with the same method name). - This trick only works in F#, it doesn’t work in C#.

So we still need higher-kinded types.

@Alxandr commented on Wed Sep 28 2016

@Pauan I am very well aware of this. I just tried to give a really simple explanation for people who do not know what we want. LINQ is, as you said, a hardcoded compiler trick. Inline in F# is a bit more of a flexible compiler trick. We would like the CLR to support higher-kinded generics. Although you could make higher-kindedness a compiler-only feature, it would be better if the CLR supported it.

@diab0l commented on Wed Sep 28 2016

I would like to add some more clarification, especially since what I originally wrote may not be clear enough for people unfamiliar with higher kinded types. So here’s some theory. Somebody correct me if I’m wrong.

Proper types

Proper types are those inhabited by values which are not types themselves. Examples are int, bool, string and any other value (struct) or reference type (class). Generic types closed over all their type arguments (such as List<int> or KeyValuePair<string, int>) are also proper types.

Kinds

Kinds are a formalization of type-level functions. There’s * which reads as type, which is no coincidence, because all proper types are of kind *. There’s also the operator -> which allows us to create kinds like * -> *, which is a type function taking a proper type as parameter and returning a proper type as result. (Examples: List<> and Set<>)

Such type functions are also called type constructors, since you give them some parameters and they ‘construct’ a type for you. In other words, applying the proper type int to the type constructor List<> yields the proper type List<int>.

Generic types

So, * -> * is the kind of generic types with arity 1 (List<>, Set<>, etc.), * -> * -> * is the kind of generic types with arity 2 (Tuple<,>, KeyValuePair<,>, Dictionary<,>), and so on. These are things we can already express within C# and the CLR, and we call them generic types.

To slightly complicate the picture, we also have generic methods which are in a sense function constructors. They are like type constructors, except they do not return a proper type, but instead a proper function.

Higher kinded types

What we cannot express are higher-kinded types. An example would be (* -> *) -> * -> *, which is a type function taking a type constructor and a proper type as arguments and returning a type. Here’s what the signature could look like in C#:

T<U> Foo<T<>, U>();

Another useful kind would be (* -> *) -> (* -> *), which could be used to have a general mapping from one type constructor to another. To make this meaningful, we would need to know something about the type constructors, and the way to do that would be constraints. We currently cannot have a type argument which is a type constructor itself, and even if we could, we couldn’t meaningfully constrain it.

There are other things we cannot express. For example you can’t have a variable of type Action<> or of type Func<,>, so you can have generic methods, but not generic lambda methods.

Tl;dr

To sum it up: today we can range over the U in T<U>. What I want is to be able to range over the T in a meaningful way (with constraints).
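For comparison, here is a minimal sketch of the (* -> *) -> * -> * example in Haskell, where such kinds are expressible today (the Wrap/unwrap names are illustrative, not from the discussion):

```haskell
{-# LANGUAGE KindSignatures #-}
import Data.Kind (Type)

-- Wrap has kind (* -> *) -> * -> *: it takes a type constructor 'f'
-- (such as [] or Maybe) and a proper type 'a'. This is exactly the
-- higher kind described above, which C# generics cannot express.
newtype Wrap (f :: Type -> Type) (a :: Type) = Wrap (f a)

unwrap :: Wrap f a -> f a
unwrap (Wrap x) = x

main :: IO ()
main = do
  print (unwrap (Wrap [1, 2, 3 :: Int]))  -- applies Wrap to the list constructor
  print (unwrap (Wrap (Just 'x')))        -- and to the Maybe constructor
```

Note how the same Wrap ranges over both [] and Maybe, i.e. it ranges over the T in T<U>.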

@Alxandr commented on Wed Sep 28 2016

While I think I perfectly understood all of that (I’ve done Haskell), the * -> * and (* -> *) -> * annotations are confusing for people who are not used to functional programming languages. I remember starting in F#: reading int -> int -> int is confusing if you’re not used to it.

To translate the same into something akin to TypeScript (C# doesn’t really have a func annotation) it could look something like this:

List: (T: *) => List<T>
Dictionary: (Key: *, Value: *) => Dictionary<Key, Value>

whereas what we would like to express is

Mappable: (T: (* => *)) => Mappable<T<>>

Not sure about the syntax, but I do believe we might want to try to write things in a format that’s more similar to what most C# devs are used to (at least while we’re in the roslyn repo).

Or if an interface would help:

interface IMappable<T<>, A> {
  T<B> Map<B>(Func<A, B> func);
}

class List<T> : IMappable<List<>, T> {
  List<B> IMappable<List<>, T>.Map<B>(Func<T, B> func) {
    var result = new List<B>(Count);
    foreach (T item in this) {
      result.Add(func(item));
    }
    return result;
  }
}
Syntax is obviously something that should be discussed, and I don’t know what works best, but personally I think figuring out how things should work on the CLR level first is probably the best idea. Also, I haven’t written C# in a while, so apologies if there are any glaring errors in my snippets :)

@diab0l commented on Thu Sep 29 2016

@Alxandr I do agree completely and gosh, I hope this stuff eventually makes its way into TypeScript.

The interface is probably the best analogy. There are also proposals for ‘static interfaces’ which would allow for similar abstractions.

@toburger commented on Wed Oct 12 2016

it seems that there’s a light at the end of the tunnel:

@gusty commented on Wed Oct 12 2016

@toburger That technique doesn’t cover higher-order kinds. It allows you to define ‘first-order-kind type classes’ like Monoid and generalize over numeric types, but in order to be able to represent functors, applicatives or monads you still need to do something at the CLR level.

@Opiumtm commented on Wed Oct 12 2016

Your example is perfectly possible using current C# syntax

        public static T ConvertTo<T, TItem>(this IEnumerable<TItem> src)
            where T : ICollection<TItem>, new()
        {
            var result = new T();
            foreach (var item in src)
                result.Add(item);
            return result;
        }

        public static void Test()
        {
            var data = Enumerable.Range(1, 10);
            var list = data.ConvertTo<List<int>, int>();
            var set = data.ConvertTo<HashSet<int>, int>();
        }

@sideeffffect commented on Wed Oct 12 2016

if you haven’t heard, there’s this for HKT in F#

Now I’m wondering whether this would also work in C#. Of course, this is more of a hack/workaround, but until we have a proper implementation of HKT in the language(s) and/or CLR, this could alleviate some problems.

@Opiumtm commented on Wed Oct 12 2016

And from the article mentioned above:


static T<string> GetPurchaseLogs(
                   T<Person> people,
                   T<Purchase> purchases)
                   where T<?> : LINQable<?>
{
    return from person in people
           from purchase in purchases
           where person.Id == purchase.PurchaserId
           select person.Name + " purchased " + purchase.ItemName;
}

Exact generic logic to example above:

static IEnumerable<TItem3> CombineCheck<TArg1, TArg2, TItem1, TItem2, TItem3>(TArg1 a, TArg2 b, Func<TItem1, TItem2, TItem3> combine, Func<TItem1, TItem2, bool> check)
    where TArg1 : IEnumerable<TItem1>
    where TArg2 : IEnumerable<TItem2>
{
    return from item1 in a
        from item2 in b
        where check(item1, item2)
        select combine(item1, item2);
}

So, T<?> seems to be just shorthand for existing type constraints that require explicitly declaring the item types TItem1, TItem2 and the enumerable types TArg1, TArg2.

And LINQable<?> here isn’t possible, and it isn’t a case of higher-order polymorphism, because LINQ queries use structural typing (LINQable is every type which provides the standard Where(), Select() and so on methods - it isn’t bound to exact interfaces; it uses structural typing instead).

So, C# should officially support some forms of structural typing, as it already supports structural typing for awaitables and LINQ-ables but doesn’t support general-purpose structural typing.

@gabomgp commented on Thu Nov 10 2016

@Opiumtm I think structural typing is a very big change for C#; maybe something similar to traits/type classes/protocols would be better. Traits in Rust are very similar to extension methods, but think of them as interfaces that extend behavior instead of just methods.

@asd-and-Rizzo commented on Thu Nov 17 2016

I am not sure, but it seems a case like the following could also make use of HKT.

Given the following usage of CsvHelper:

        /// <summary>
        /// Gets the csv string for the given models.
        /// </summary>
        /// <param name="models">The data models.</param>
        /// <returns>The csv string.</returns>
        [SuppressMessage("Microsoft.Design", "CA1006:DoNotNestGenericTypesInMemberSignatures", Justification = "OK")]
        public static string ToCsvString<TEntity, TMap>(IEnumerable<TEntity> models) where TMap : CsvClassMap<TEntity>
        {
            using (var content = new MemoryStream())
            {
                var textWriter = new StreamWriter(content, Encoding.UTF8);
                var config = new CsvConfiguration();
                config.RegisterClassMap<TMap>();
                var writer = new CsvWriter(textWriter, config);
                writer.WriteRecords(models);
                textWriter.Flush();
                content.Position = 0;
                return new StreamReader(content).ReadToEnd();
            }
        }

And call:


With HKT, it seems it would be the following:

public static string ToCsvString<TMap<TEntity>>(IEnumerable<TEntity> models) where TMap : CsvClassMap<TEntity>

With HKT we would get less code to type when the method is called - only 1 class name instead of 2:


And I do not understand the Microsoft CodeAnalysis (CA) errors I see - maybe there would be no reason for them with HKT in place:

Severity    Code    Description Project File    Line    Suppression State
Error   CA1004  Consider a design where 'CsvBuilder.ToCsvString<TEntity, TMap>(IEnumerable<TEntity>)' doesn't require explicit type parameter 'TMap' in any call to it. CsvBuilder.cs   10  Active

Does CA suggest making HKT part of C# for proper design?

@bondsbw commented on Wed Dec 14 2016

The following DateTimeCollection type appears to satisfy the requirements for T in the original example:

public class DateTimeCollection<U> : ICollection<U>
    where U : DateTime

But DateTimeCollection has a type parameter constraint, so it would need to fail when that constraint is not satisfied:

var data = Enumerable.Range(0, 20);
var dtCollection = data.To<DateTimeCollection<>, int>();  // int does not inherit DateTime

The primary location of the problem is in the return type:

public static T<A> ... // Should be an error here, cannot construct 'T' with parameter 'A'

Because there is no guarantee that A is allowed as a parameter of T. So that should be specified in the type constraints for T:

public static T<A> To<T, A>(this IEnumerable<A> xs)
    where T : <A>, new(), ICollection<> 
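Constraining the type constructor itself, as in the proposed where clause above, is essentially what Haskell type classes over type constructors provide today. A minimal sketch for comparison (the class name To and function toColl are illustrative, not from the thread):

```haskell
import qualified Data.Set as Set

-- 'To' constrains the type constructor 'f' itself, analogous to the
-- proposed "where T : <A>, new(), ICollection<A>" constraint: only
-- constructors with a known way to build themselves from a list qualify.
class To f where
  toColl :: Ord a => [a] -> f a

instance To [] where
  toColl = id

instance To Set.Set where
  toColl = Set.fromList  -- the set instance also deduplicates

main :: IO ()
main = do
  print (toColl [3, 1, 2, 1] :: [Int])        -- [3,1,2,1]
  print (toColl [3, 1, 2, 1] :: Set.Set Int)  -- fromList [1,2,3]
```

The target constructor is picked by the annotation at the call site, much like the explicit type arguments in the To<T, A> proposal.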

@bondsbw commented on Wed Dec 14 2016

Perhaps the type constraint for ICollection<> needs to be specified as well:

public static T<A> To<T, A>(this IEnumerable<A> xs)
    where T : <A>, new(), ICollection<A> 

Otherwise wouldn’t this line fail?


@alexandru commented on Thu Feb 02 2017

@aluanhaddad OOP subtyping / overloading doesn’t help with the most useful operation of all, which is bind / flatMap (I think it’s called SelectMany in .NET) because in OOP you have co/contra-variance as a natural effect of subtyping + generics and function parameters are contra-variant, which means you’ll lose type info.

In practice this means you can’t describe such operations by means of inheritance (and please excuse the Scala syntax, I’m just a newbie in F#):

trait Monad[F[_]] {
  // ...
  def flatMap[A, B](source: F[A])(f: A => F[B]): F[B]
}

object FutureMonad extends Monad[Future] {
  def flatMap[A, B](source: Future[A])(f: A => Future[B]): Future[B] = ???
}
Well, in Scala you can describe flatMap with inheritance, but then you need another feature in the type-system, called F-bounded types. Scala can do that too, but this is another feature you don’t see in other OOP languages, since this also needs higher-kinded types to be useful, (see article with details):

trait MonadLike[+A, Self[+T] <: MonadLike[T, Self]] { self: Self[A] =>
  def flatMap[B](f: A => Self[B]): Self[B]
}

class Future[+A] extends MonadLike[A, Future] {
  def flatMap[B](f: A => Future[B]): Future[B] = ???
}

Looks ugly but it is very useful for sharing implementation while not downgrading to the super-type in all those operations.
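For comparison, a minimal Haskell sketch of the same flatMap, which is spelled >>= there and declared once for every type constructor that is a Monad, with no F-bounded encoding needed (the halve helper is illustrative):

```haskell
-- (>>=) is declared once in the Monad type class, with 'm' ranging
-- over type constructors, so the same operator works for Maybe,
-- lists, IO, and any user-defined monad.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
  print (Just 8 >>= halve)                 -- Just 4
  print (Just 3 >>= halve)                 -- Nothing
  print ([2, 4, 6] >>= \n -> [n, n * 10])  -- [2,20,4,40,6,60]
```

The contravariance problem described above never arises, because the instance is selected by the type constructor rather than by subtyping.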

Updated 27/03/2017 09:44 1 Comments

Declaration of ref/out parameters in lambdas without typename


@ViIvanov commented on Sat Feb 07 2015


I have a little suggestion for C# language. Let us have delegate like this:

delegate T Parse<T>(string text);

and we want to create an instance:

Parse<int> parse = text => Int32.Parse(text);

All is OK. What about ref/out parameters in a delegate?

delegate bool TryParse<T>(string text, out T result);

when we want to create an instance…

TryParse<int> parse1 = (string text, out int result) => Int32.TryParse(text, out result);

…we have to specify the types of the parameters.

Why is this required? What about a syntax below:

TryParse<int> parse2 = (text, out result) => Int32.TryParse(text, out result);


@mikedn commented on Sat Feb 07 2015

In this particular case you don’t need to write any of the parameter stuff, it’s just:

TryParse<int> parse1 = Int32.TryParse;

@alanfo commented on Sun Feb 08 2015

This would be a worthwhile improvement to type inference, in my view.

As ref and out are full keywords, they can’t possibly be type names, so there appears to be no reason why the compiler wouldn’t be able to infer the types from the delegate signature.

@paulomorgado commented on Sun Feb 08 2015

I’m not sure I understand what you are proposing.

Would you care to elaborate a bit more?

@ViIvanov commented on Mon Feb 09 2015

@paulomorgado Of course! Let us see a code below:

delegate T Parse<T>(string text);
delegate bool TryParse<T>(string text, out T result);

static void Main() {
  // We can create an instance of Parse<int> like this:
  Parse<int> parseHex1 = (string text) => Int32.Parse(text, NumberStyles.HexNumber);
  // or like this (and, I think, this way is more simple and more readable):
  Parse<int> parseHex2 = text => Int32.Parse(text, NumberStyles.HexNumber);

  // To create an instance of the TryParse<int> delegate
  // we must explicitly specify the types of the arguments in the lambda expression:
  TryParse<int> tryParseHex1 = (string text, out int result) => Int32.TryParse(text, NumberStyles.HexNumber, null, out result);

  // And we cannot currently use a syntax like this:
  TryParse<int> tryParseHex2 = (text, out result) => Int32.TryParse(text, NumberStyles.HexNumber, null, out result);
}

Why this is meaningful:

1. In some scenarios a user can have delegates with several (three, four, etc.) parameters, and when at least one of them has a ref or out modifier, the user must explicitly specify the types of all the delegate’s parameters.
2. Following from the previous point, we cannot use anonymous types as type parameters in delegates with ref or out parameters.

@omariom commented on Mon Feb 09 2015


@paulomorgado commented on Mon Feb 09 2015

I think you are missing the fact that, although you can’t declare by-reference type parameters in C#, when a parameter is declared as being of type ref T (or out T, which is the same for the CLR - just extra validation from the compiler), the type is actually &T, which is not the same as T.

@ViIvanov commented on Mon Feb 09 2015

@paulomorgado Excuse me, can you explain what you mean? Why, in your point of view,

TryParse<int> tryParseHex1 = (string text, out int result) => Int32.TryParse(text, NumberStyles.HexNumber, null, out result);

is correct (it is valid C# code) and

TryParse<int> tryParseHex2 = (text, out result) => Int32.TryParse(text, NumberStyles.HexNumber, null, out result);

is not? In both examples the types of the arguments are exactly the same, but in the second line they are inferred by the compiler rather than specified explicitly by the user.

@paulomorgado commented on Mon Feb 09 2015

I was trying to understand your issue, and I think I got it: the compiler should be able to infer the types from usage when there are out parameters. Is that it?

@ViIvanov commented on Mon Feb 09 2015

@paulomorgado Exactly! I’m sorry for my bad and poor English.

@alrz commented on Thu Nov 19 2015


@Thaina commented on Thu Jan 14 2016



@asvishnyakov commented on Fri Jan 15 2016


Updated 25/03/2017 00:09

Proposal: allow non-public operators


@jskeet commented on Fri Jan 08 2016

Currently, all operators have to be public. This can reduce readability of internal-only code where an operator would otherwise be natural. This proposal is to allow things like:

internal static Foo operator +(Foo foo, Bar bar)

Use-case: in Noda Time we have types of: - public struct Instant - public struct Offset - internal struct LocalInstant

We have operations for adding an Offset to an Instant to get a LocalInstant, and the reverse subtraction operation. These would idiomatically be implemented using operators, and were, back when LocalInstant was public (long before the first release). It’s annoying that they can’t be :(

(I don’t know whether this would require a CLR change or whether it could just be a C# language change…)

I suspect there isn’t much benefit in having other access modifiers beyond public and internal, but that’s probably best discussed after deciding whether or not even allowing internal is desirable.
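To illustrate the workaround the current restriction forces, the types below are simplified stand-ins for Noda Time's (a single long field instead of the real representation), with the operation exposed as a named internal method instead of the operator the proposal would allow:

```csharp
using System;

public struct Instant
{
    public long Ticks; // simplified stand-in for Noda Time's representation

    // "internal static LocalInstant operator +(Instant, Offset)" is not
    // allowed today, so the operation is exposed as a named internal method.
    internal LocalInstant Plus(Offset offset) =>
        new LocalInstant { Ticks = Ticks + offset.Ticks };
}

public struct Offset
{
    public long Ticks;
}

internal struct LocalInstant
{
    public long Ticks;
}

class Demo
{
    static void Main()
    {
        LocalInstant local = new Instant { Ticks = 10 }.Plus(new Offset { Ticks = 5 });
        Console.WriteLine(local.Ticks); // 15
    }
}
```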

@leppie commented on Fri Jan 08 2016

Nice suggestion :+1:

I can’t see any issue with the CLR for this. Edit: the only mention is the naming rules for operators, and nothing else beyond that.

I don’t think it would even be hard to implement in the compiler.

@HaloFour commented on Fri Jan 08 2016

I’m curious as to the reason that non-public operators aren’t allowed. This restriction is explicitly stated in the C# spec, §10.10:

The following rules apply to all operator declarations: - An operator declaration must include both a public and a static modifier.

@AdamSpeight2008 commented on Fri Jan 08 2016

@HaloFour Me Too.

@ViIvanov commented on Mon Jan 11 2016

@HaloFour, @AdamSpeight2008

Maybe the reason is this: moving code that uses non-public operators from one project to another can have unexpected semantics.

@ViIvanov commented on Mon Jan 11 2016

Maybe the compiler should require that the visibility of an operator match that of the operand type with the lowest visibility?

For example, we should not be able to declare an internal operator for two public types, but we could declare one for a public type and an internal type.

@alrz commented on Mon Jan 11 2016

@ViIvanov Can you give an example? I think that would be the issue with moving any code!

@AdamSpeight2008 commented on Mon Jan 11 2016

We could allow the visibility to be treated as a scoping clarifier, e.g. var baz = internal( foo + bar );, which would use an internal implementation of operator plus.

@svick commented on Mon Jan 11 2016

@AdamSpeight2008 To me, that sounds like a separate feature on top of this one and one that is not very useful.

@AdamSpeight2008 commented on Mon Jan 11 2016

@svick It would be useful as it would let you explicitly state that you want the internal overload.

public static t0 operator +( t0 x, t0 y ) {..}
private static t0 operator +(t0 x, t0 y) {...}

@svick commented on Mon Jan 11 2016

internal overload? You can’t have two methods that differ only by their access modifier in C# or IL. I don’t see why you would want that, and this issue is not about that.

I assumed it was about situations like:

public class A
{
    public static A operator +(A a, B b) { … }
}

public class B
{
    internal static A operator +(A a, B b) { … }
}

But that still doesn’t seem very useful to me.

Updated 25/03/2017 00:08

(Proposal) Allow const members for value types using `default(...)`


@jskeet commented on Fri Dec 18 2015

Currently there is a small disparity between optional parameters and const declarations. While you can have an optional parameter with a default value of new SomeValueType() or default(SomeValueType) you can’t use const for non-simple value types. It would be really nice if this were permitted - possibly via some attribute like const declarations for decimal values use - to make default arguments more readable. Compare the intent of:

public void FooAsync(CancellationToken cancellationToken = default(CancellationToken))

and

public void FooAsync(CancellationToken cancellationToken = CancellationToken.None)

The latter doesn’t compile, because CancellationToken.None isn’t a constant expression - but it could have been. (We couldn’t change None to be a const for compatibility reasons, but think of future uses… and a new member could be introduced elsewhere, potentially.)

I don’t know whether we should also allow a 0-argument constructor call, as is currently allowed for default argument values… I’m leery of that, just in case custom parameterless structs (proposed but then dropped for C# 6) are ever introduced. I think we’ve already got an issue there with default argument values - I wouldn’t want to make it worse.
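A minimal sketch of what current C# does and does not allow here (the commented-out const line is the proposed addition, not valid today):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Demo
{
    // Allowed today: default(T) directly as the default argument.
    static Task FooAsync(CancellationToken cancellationToken = default(CancellationToken))
        => Task.CompletedTask;

    // Allowed today, but not usable as a default argument:
    static readonly CancellationToken None = default(CancellationToken);

    // The proposal would make this legal instead:
    // const CancellationToken None = default(CancellationToken);

    static void Main()
    {
        FooAsync().Wait();
        Console.WriteLine(None == CancellationToken.None); // True
    }
}
```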

@leppie commented on Fri Dec 18 2015

Maybe the compiler should try to emit the code differently for these cases (or IMO all of them). IOW:

public void FooAsync() { … }

public void FooAsync(CancellationToken cancellationToken) { … }

@svick commented on Fri Dec 18 2015

@leppie How would that work with multiple optional parameters? I.e. something like:

public void Foo(Struct1 param1 = Struct1.Value, Struct2 param2 = Struct2.Value)


Foo(param1: myValue);
Foo(param2: anotherValue);

@leppie commented on Fri Dec 18 2015

@svick: Good point.

@HaloFour commented on Fri Dec 18 2015

So what exactly is this proposal? A way to decorate a static property of a struct so that the value represented by that property could be considered a blittable constant that can be embedded in the CLR metadata for the optional parameter?

@jskeet commented on Fri Dec 18 2015

The proposal is for this to be valid, as an example:

public class Foo
{
    public const CancellationToken None = default(CancellationToken);

    public Task FooAsync(CancellationToken cancellationToken = None) { … }
}

@alrz commented on Tue Dec 22 2015

Combination of #2401, #5474 and #952 (although, for non-enums)?

@aluanhaddad commented on Fri Dec 18 2015


@HaloFour commented on Fri Dec 18 2015

@jskeet Limited specifically to “aliases” of default(T) or also supporting custom values?

The original “proposal” isn’t really a proposal so much as an identification of a potential shortcoming. I’m trying to pin down the specific syntax and behavior that you think would solve it.

So this is really two issues. One would be expanding on what is permitted to be embedded as a const. This is largely limited by the CLR to the types that it understands as encoded as BLOBs directly in the assembly metadata. However, the C# and VB.NET compilers both already sort of work around this limitation for the decimal and Date data types respectively by converting them into static readonly fields and using well-known attributes to encode their binary representation.

Second would be a way to encode that an optional parameter refers to one of these constants. Even if you could define a constant of a specific value (or the default value), it is that value that is embedded as the default value of the parameter, not any reference to the constant. Otherwise the optional value is ultimately just the value of the constant, not the constant, and callers would still see [cancellationToken = default(CancellationToken)] and not [cancellationToken = None], e.g.

public class Foo {
    public const int None = default(int); // legal

    public Task FooAsync(int token = None); // also legal
}

FooAsync(); // intellisense shows default(int), not Foo.None

@jskeet commented on Sat Dec 19 2015

@HaloFour: I’m only proposing allowing default(T) for the moment. While I would welcome more general const-ing, that would be a more in-depth proposal.

And yes, it would be good for an optional parameter to encode how its default value was derived (probably via attributes) - although there are accessibility issues there as well (e.g. when generating documentation or using Intellisense, if the caller wouldn’t have access to the member in question, then default(T) would probably be best).

@alrz commented on Wed Dec 30 2015

@jskeet I think #7737 syntax, is far better than yet too verbose F(CancellationToken cancellationToken = CancellationToken.None) and it doesn’t look awful like F(CancellationToken cancellationToken = default(CancellationToken)), don’t you think?

@jskeet commented on Wed Dec 30 2015

@alrz: well you would think that, given that it’s your proposal :) But in the same vein, I prefer mine… Names allow semantics to be expressed clearly.

@alrz commented on Wed Dec 30 2015

@jskeet Back to your proposal, given that None is a property, how would you know that it is a constant? and yet making it a readonly variable doesn’t work. Although, I believe if it is marked as a pure property (#7561) the compiler could figure it out.

@jskeet commented on Wed Dec 30 2015

@alrz: As I’ve already said, it’s too late for CancellationToken.None itself:

We couldn’t change None to be a const for compatibility reasons, but think of future uses… and a new member could be introduced elsewhere, potentially.

@alrz commented on Wed Dec 30 2015

While I don’t see why None is defined as a property and not just a static readonly variable in the first place, structs still cannot be considered constants for some reason, even in the default(T) case only. Also, the CLR verifier goes to great pains to guarantee this for readonly fields and yet they’re not reliable. My point is, a feature set around totally immutable objects (#7626) is required to facilitate these use cases (having a const of a struct). Otherwise, I can’t see how this could be done.

@jskeet commented on Wed Dec 30 2015

Why could it not be done? It works for default arguments. Why could the compiler not do the same thing elsewhere? I think I’d like the compiler team to comment on feasibility… But the community is in a good position to discuss desirability.

@alrz commented on Wed Dec 30 2015

It works for default arguments.

I think the proposal actually suggests making it work for default arguments, meaning that it currently doesn’t, right? Or am I missing something?

Anyway, I think this is the reason behind it:

// OK, compile-time constant, and you cannot define parameterless ctor for structs
void F(Struct s = new Struct())

// OK, same as above
void F(Struct s = default(Struct))

// Error, Default parameter value for 's' must be a compile-time constant
void F(Struct s = new Struct(1))

Now, imagine this for a static readonly field,

// Error, because 'None' could be anything, like 'new Struct()' or 'new Struct(1)'
void F(Struct s = Struct.None)

Are you suggesting that Struct.None becomes a compile-time constant, or we should relax optional arguments to accept non-compile-time constants? Either way, we have to ensure the immutability of the underlying type. This works for int etc because they are inherently immutable and the compiler knows it. But for non-simple value types that is not the case and we don’t have any notation to make them so.

@jskeet commented on Thu Dec 31 2015

No, default(T) already works for default arguments. It’s fine to have

void F(CancellationToken cancellationToken = default(CancellationToken))

This proposal would allow named constants with the value of default(T), which would be inlined at the usage site in the same way as other constants are.

I’m not proposing that any static readonly field becomes usable as a constant expression - just default(T). So in other words, this would be valid:

public struct Foo { ... }

public class Bar
{
    public const Foo EmptyFoo = default(Foo);
}

Then any time Bar.EmptyFoo is used (whether in the same assembly or not), the compiler inserts the appropriate IL as if the expression were actually default(Foo). At that point it doesn’t matter whether Foo is immutable or not, as far as I can see. Why do you think it does?

@alrz commented on Thu Dec 31 2015

@jskeet I was talking about your example in the original post all this time, while you meant something like public const CancellationToken Never = default(CancellationToken);. Phew.

@jskeet commented on Thu Dec 31 2015

@alrz: Yes, the example in the original post is how nice it could have been if this feature had been introduced from the start. I explicitly specified that it’s too late for CancellationToken.None… do you now see what I’ve been getting at?

One wrinkle I hadn’t previously considered: we’d need to prohibit the use of user-defined operators for non-built-in types for constants (whereas we need to continue to allow them for decimal constants).

Updated 25/03/2017 00:08

Proposal: Pattern-based exception-handling


@alrz commented on Sun Nov 15 2015

Since catch already has an optional “exception filter”, I think it would be reasonable to use patterns also in catch clauses.

specific-catch-clause: catch ( complex-pattern ) exception-filter<sub>opt</sub> block

All patterns must be pattern compatible with the type Exception.

try { ... }
catch (WebException { Status is WebExceptionStatus.Timeout } ex) { ... }

Instead of

try { ... }
catch (WebException ex) when (ex.Status == WebExceptionStatus.Timeout) { ... }

If you declare exceptions using record syntax, then you will be able to use recursive-pattern:

class FooException(Status status) : Exception();
catch (FooException(Status.Timeout)) { ... }

With OR patterns (#6235) one should be able to check for multiple exceptions without exception filters.

@stepanbenes commented on Sun Nov 15 2015

@alrz commented on Sun Nov 15 2015

@stepanbenes Yeah I know.

@gafter commented on Mon Nov 16 2015

As you have currently specified them, OR patterns require pattern variables to be of the same type on both sides of the disjunction, so using them to catch multiple exception types is not easy.

@alrz commented on Mon Nov 16 2015

@gafter I don’t know if OR patterns could use the “most common type” rule? (that would be really nice, by the way) However if they could, still, you have to repeat the pattern variable for each exception like catch(E1 ex or E2 ex), and the fact that the identifier in “type patterns” isn’t optional makes this not that friendly. As an alternative approach, I think “type intersections” as a general feature would be useful also in this context, catch(E1 & E2) using the regular specific-catch-clause, but in that case if you want to declare a variable, a pair of parentheses would be needed I presume e.g. catch((E1 & E2) ex) which is not that bad. I don’t know which approach would be more adoptable to make this possible.

@gafter commented on Mon Nov 16 2015

@alrz if we used the existing dominant type rules, one of the two types would be required to be a subtype of the other. I do not think you would like that.

I think you meant disjunctive, not conjunctive types, as it is only one or the other at any given time. When I did it for Java I used |.

@gafter commented on Mon Nov 16 2015

When I specified and implemented this for Java, you would write

try {
    ...
} catch (NullReferenceException | InvalidCastException ex) {
    ...
}

The type specified in the catch clause can be thought of as a disjunction type. The compiler would emit code to catch precisely those two exception types, and the type of the variable ex within the catch block is a type whose member set is precisely those members that are common to both types.

That works in Java because:
- The “common base type” algorithm is capable of selecting a base type,
- The idea of compiling by erasure is well established in the language, and
- We make the exception parameter readonly so as to avoid specifying too much about the meaning of disjunction types.

C#’s “common type” algorithm (as currently specified) never produces a type that is not one of the input types.
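For comparison, the closest current C# gets to Java's multi-catch is a single clause with an exception filter; a minimal sketch:

```csharp
using System;

class Demo
{
    static void Main()
    {
        try
        {
            throw new InvalidCastException("boom");
        }
        // Closest current C# equivalent of Java's catch (A | B ex):
        // one clause plus an exception filter over the two types.
        catch (Exception ex) when (ex is NullReferenceException
                                || ex is InvalidCastException)
        {
            Console.WriteLine(ex.GetType().Name); // InvalidCastException
        }
    }
}
```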

Updated 25/03/2017 00:07

Support compiled xml doc <code> tag


@AArnott commented on Mon Dec 21 2015

One reason examples don’t get written in xml doc comments is they’re hard to write and virtually impossible to maintain. Intellisense doesn’t help write them, and refactoring and changing code later doesn’t keep them current.

Suppose <code> snippets were recognized by the compiler and treated as regular code, including compile errors and refactoring, but just not emitted into the assembly at compile time?

There would of course be plenty of details to work out, such as what the context would be of the sample code (is it a statement block, or can it define types and members, etc.). But the result would be awesome pretty much wherever it lands.

In this snippet, for instance, the first example is correct but the second example would produce a compile error:

/// <summary>Adds two numbers</summary>
/// <param name="a">The first number</param>
/// <param name="b">The second number</param>
/// <example>
/// <code>
/// int five = Add(3, 2);
/// </code>
/// </example>
/// <example>
/// <code>
/// float five = Add(3, 2);
/// </code>
/// </example>
public static int Add(int a, int b) => a + b;

And syntax highlighting and autocomplete and the rest of the goodness would exist in those examples.
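As a rough sketch of the extraction half of such a tool (the XML fragment below is illustrative, shaped like what the compiler emits into the documentation file), pulling out the &lt;code&gt; snippet text so it could be handed to the compiler:

```csharp
using System;
using System.Xml.Linq;

class Demo
{
    static void Main()
    {
        // Illustrative doc-comment fragment, in the shape the compiler
        // emits into the XML documentation file.
        var member = XElement.Parse(
            "<member><example><code>int five = Add(3, 2);</code></example></member>");

        // Extract every <code> snippet; a real tool would hand these
        // to the compiler for checking instead of printing them.
        foreach (var snippet in member.Descendants("code"))
            Console.WriteLine(snippet.Value); // int five = Add(3, 2);
    }
}
```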

@m0sa commented on Mon Dec 21 2015

I like the idea. But according to this MSDN article, this should rather be done for <code> (and <c>) tags:

/// <summary>
/// The GetZero method.
/// </summary>
/// <example> 
/// This sample shows how to call the <see cref="GetZero"/> method.
/// <code>
/// class TestClass 
/// {
///     static int Main() 
///     {
///         return GetZero();
///     }
/// }
/// </code>
/// </example>
public static int GetZero()
{
    return 0;
}

@SergeyZhurikhin commented on Tue Dec 22 2015

@AArnott The title should be corrected: <example> → <code>.

Furthermore, I agree with @AArnott. I often write examples and at the moment this workflow looks very strange: I write the example in a separate scratch file so I can use IntelliSense, and then paste it into the XML comment with some shamanic dancing, because the /// markers do not appear when copying.

@alrz commented on Tue Dec 22 2015


@daveaglick commented on Tue Dec 22 2015


Also agree with others that this should apply to <code> and not <example>. I don’t think there would be much value in supporting <c> since those are usually just short snippets devoid of context.

We’d also need a way to turn it off on a per-element basis. Perhaps a new compile attribute:

/// <summary>
/// The GetZero method.
/// </summary>
/// <example> 
/// This sample shows how to call the <see cref="GetZero"/> method.
/// <code compile="false">
/// class TestClass 
/// {
///     static int Main() 
///     {
///         return GetZero();
///     }
/// }
/// </code>
/// </example>
public static int GetZero()
{
    return 0;
}

@AArnott commented on Tue Dec 22 2015

Thanks for the correction on using <code> instead of <example>. I’ve made the proposed change.

@davkean commented on Tue Dec 22 2015

@gafter What do you want this under? Language Design?

@sharwell commented on Tue Dec 22 2015

SHFB expands on support for <code> elements, allowing placement of content in other files (see its example usage and example output).

Even if C# support worked per this feature request, it would be difficult to make it support other languages without going the route described above.

Updated 25/03/2017 00:07

[Proposal] extend language integrated query clauses with "apply" clause


@paulomorgado commented on Wed Jun 17 2015

Every now and then there’s a new request for a new query clause (C#, Visual Basic).

Sometimes it’s to add to C# some query clause that Visual Basic already has (Aggregate, Distinct, Skip, Skip While, Take, Take While), or materialization of the query (ToArray, ToList, ToDictionary or ToLookup), or some other selection or aggregation (First, FirstOrDefault, Last, LastOrDefault, Single, SingleOrDefault, Count, etc.).

There are just too many and chances are that more will come.

How about extending the query language with an apply clause?

This apply clause would be composed of an apply keyword followed by the method name (instance or extension, as already happens with Select or Where) and any other parameters.

Then it would be possible to write something like this:

from c in customers
apply Distinct c.CountryID
select c.Country.Name

Or this:

from o in customer.Orders
select o
apply FirstOrDefault

Or this:

from c in customers
select c
apply ToDictionary c.ID

I don’t know if I like this or not, but looks to me a lot more consistent and all-encompassing than individual proposals.
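For reference, a minimal sketch of what the apply ToDictionary example would desugar to, written with today's method-chain syntax (the Customer shape here is illustrative, not from the thread):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative shape; not from the thread.
class Customer
{
    public int ID;
    public string Country;
}

class Demo
{
    static void Main()
    {
        var customers = new List<Customer>
        {
            new Customer { ID = 1, Country = "PT" },
            new Customer { ID = 2, Country = "ES" },
        };

        // What "from c in customers select c apply ToDictionary c.ID"
        // would desugar to, written as we must today:
        Dictionary<int, Customer> byId =
            (from c in customers select c).ToDictionary(c => c.ID);

        Console.WriteLine(byId.Count); // 2
    }
}
```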

@svick commented on Thu Jun 18 2015

For the record, the previous proposals on GitHub are #3486 and #100 (and I’m sure there were some discussions on CodePlex too).

@svick commented on Thu Jun 18 2015

I would really like something like this, since it’s completely general. But I think the syntax needs to be fleshed out much more. Some questions:

- How do you apply methods with multiple parameters (like the two-selector overload of ToDictionary())?

  The variants consistent with this proposal are apply ToDictionary c.ID c.Name and apply ToDictionary c.ID, c.Name, but personally I don’t like either. Instead, I would prefer call-like syntax for all methods: apply ToDictionary(c.ID, c.Name) and e.g. apply First().

- When no range variable is mentioned, how do you differentiate between normal parameters (e.g. apply Take(10)) and lambda parameters (e.g. the second parameter of apply ToDictionary(c.ID, 0))?

  F# LINQ (which already has a similar kind of extensibility) uses attributes for this. Another option would be to use the variant that compiles, while preferring one of them when both compile (though that might easily lead to confusing error messages).

- How do you differentiate between methods that maintain range variables and so can be used in the middle of a query (like Distinct) and those that don’t and so have to be used after the final select (like First)?

  F# LINQ again uses attributes for this. And another option would be to just assume the user is right, but I think that would require lots of care to make the error messages understandable (instead of something like “Could not find an implementation of the query pattern for source type ‘int’. ‘Select’ not found.” for the query from i in list apply First select 2*i).

@gordanr commented on Thu Jun 18 2015

‘Apply’ could be very good starting point for discussion.

Call-like syntax is maybe more suitable for C# language.

from c in customers
select c
apply ToDictionary c.ID

from c in customers
select c
apply ToDictionary(c.ID)

When there are no parameters, is it a good idea to allow both forms?

from o in customer.Orders
select o
apply FirstOrDefault

from o in customer.Orders
select o
apply FirstOrDefault()

Some methods are natural after the final select.

... select expression apply ToList/ToArray/ToDictionary/...

Some methods can be after the final select.

... select expression apply FirstOrDefault/First/Single/...

But also can be proposed in infix notation. FirstOrDefault/First/Single/... expression

Some methods are maybe more natural in infix notation (as in sql). Distinct expression

But also can be written on the end expression apply Distinct

@paulomorgado commented on Thu Jun 18 2015

Not all methods of Enumerable are available through LINQ clauses. So, for the lack of better syntax, I intentionally left out those not translatable to this syntax.

I intentionally did not use a call-like syntax because that’s how all existing clauses were defined.

All the rules that apply now to method-invocation style would still apply. The fact that you don’t have a TResult Select<TSource, TResult>(TSource source, Func<TSource, TResult> selector) where source and result are not enumerables doesn’t mean you can’t have one. In fact, in the last few years, I’ve seen Bart de Smet twisting LINQ in very interesting ways.

@gordanr commented on Sat Jun 20 2015

So, for the lack of better syntax, I intentionally left out those not translatable to this syntax.

Can we make a minimum subset of methods that are suitable for implementation?

… select expression apply ToList
… select expression apply ToArray
… select expression apply ???

Is it a good idea, at first, to avoid methods with parameters?

@gordanr commented on Sat Jun 20 2015


Can you explain this example.

from c in customers
apply Distinct c.CountryID
select c.Country.Name

I don’t understand how this query can be translated into a pure method chain. Does it mean

(from c in customers select c.Country.Name).Distinct()

Maybe you thought

from c in customers
select c.Country.Name 
apply Distinct

When we talk about Distinct we should keep in mind how the default comparer works.

@gordanr commented on Sat Jun 20 2015

More precisely, I don’t understand how to use ‘apply’ before ‘select expression’.

Now we can write ‘where’ several times between the first ‘from’ and the last ‘select’. For example, I usually write ‘let’ between two ‘where’ clauses.

from ...
where condition
where condition
where condition
select expression

Do we think about this form of ‘apply’?

from ...
apply Method
apply Method
apply Method
select expression
apply Method

Can we make a list of suitable methods in this case? Is it a good approach?

@paulomorgado commented on Sat Jun 20 2015


So, for the lack of better syntax, I intentionally left out those not translatable to this syntax.

Can we make a minimum subset of methods that are suitable for implementation?

… select expression apply ToList
… select expression apply ToArray
… select expression apply ???

Is it a good idea, at first, to avoid methods with parameters?

The point of this proposal is to not be limited to any set of methods. And to allow methods with none or one parameter.

Can you explain this example.

    from c in customers
    apply Distinct c.CountryID
    select c.Country.Name

I don’t understand how this query can be translated into a pure method chain. Does it mean

    (from c in customers select c.Country.Name).Distinct()

Maybe you thought

    from c in customers
    select c.Country.Name
    apply Distinct

When we talk about Distinct we should keep in mind how the default comparer works.


from c in customers
apply Distinct c.CountryID
select c.Country.Name

will translate into:

customers.Distinct(c => c.CountryID).Select(c => c.Country.Name)

On the other hand, this:

from c in customers
select c.Country.Name 
apply Distinct

will translate into:

customers.Select(c => c.Country.Name).Distinct()

More precisely, I don’t understand how to use ‘apply’ before ‘select expression’.

Now we can write ‘where’ several times between the first ‘from’ and the last ‘select’. For example, I usually write ‘let’ between two ‘where’ clauses.

    from ...
    ...
    where condition
    ...
    where condition
    ...
    where condition
    ...
    select expression

Do we think about this form of ‘apply’?

    from ...
    ...
    apply Method
    ...
    apply Method
    ...
    apply Method
    ...
    select expression
    apply Method

Can we make a list of suitable methods in this case? Is it a good approach?

apply is used like any other clause, with the only difference being that the first operand is a method.

You can write this:

from ...
where condition
where condition
where condition
select expression

as this:

from ...
apply Where condition
apply Where condition
apply Where condition
apply Select expression

if you want to.

@gordanr commented on Sat Jun 20 2015

Thank you. Very, very interesting. I will think more about that.

@gordanr commented on Sun Jun 21 2015

That was the key.

apply Where condition

Now, I understand better your proposal.

I intentionally did not use a call-like syntax because that’s how all existing clauses were defined.

Yes. Now that I understand, I agree with you regarding parameters.

@gordanr commented on Sun Jun 21 2015

This is a piece of code from one of my old WinForms application.

List<int> awardsIds = panelAwards.Controls.OfType<CheckBox>()
                     .Where(c => c.Checked)
                     .Select(c => (int)c.Tag).ToList();

Now we can write

var awardIds = from c in panelAwards.Controls
               apply OfType<CheckBox>
               where c.Checked  // CheckBox has property Checked.
               select (int)c.Tag
               apply ToList;

Very nice.

@gordanr commented on Sun Jun 21 2015

Element operations have immediate execution. These methods always come at the end of a query or subquery.

apply ElementAt n
apply ElementAtOrDefault n
apply First
apply FirstOrDefault 
apply Last
apply LastOrDefault
apply Single
apply SingleOrDefault

var x = from ... 
        where ...
        select expression 
        apply First;

Or in some form of possible infix notation. This notation can coexist with apply.

var x = from ... 
        where ...
        select First expression 
     // select first expression 

@gordanr commented on Sun Jun 21 2015

This group of my favourite methods has immediate execution. These methods always come at the end of a query or subquery.

ToArray, ToDictionary, ToList, ToLookup

var r = from n in numbers
        where n % 2 == 0
        select n
        apply ToList;

var r = from n in numbers
        where n % 2 == 0
        apply ToList;

var r = from n in numbers
        where n % 2 == 0
        select n 
        as List;

Dictionary<int,Order> orders =  
    customers.SelectMany(c => c.Orders)
    .Where(o => o.OrderDate.Year == 2005).ToDictionary(o => o.OrderId);

var orders = from c in customers
             from o in c.Orders
             where o.OrderDate.Year == 2005
             apply ToDictionary o.OrderId

@gordanr commented on Sun Jun 21 2015

Methods All and Any have immediate execution. These methods always come at the end of a query or subquery.

Here is one VB example.

Dim query = From pers In people 
            Where (Aggregate pt In pers.Pets Into All(pt.Age > 2)) 
            Select pers.Name

Dim query = From pers In people 
            Where (Aggregate pt In pers.Pets Into Any(pt.Age > 7)) 
            Select pers.Name

And some F#.

query {
    for student in db.Student do
    where (query { for courseSelection in db.CourseSelection do
                   exists (courseSelection.StudentID = student.StudentID) })
    select student
}

I am not convinced that it is good to introduce the special form of syntax that exists in VB for this purpose.

var query = from pers in people 
            where (from pt in pers.Pets select pt apply Any pt.Age > 7) 
         // where (from pt in pers.Pets apply Any pt.Age > 7) 
            select pers.Name

@gordanr commented on Sun Jun 21 2015

These methods have immediate execution and always come at the end of a query or subquery.

Here is VB syntax.

Dim avg = Aggregate temp In temperatures Into Average()

Dim highTemps As Integer = Aggregate temp In temperatures Into Count(temp >= 80)

Dim numTemps As Long = Aggregate temp In temperatures Into LongCount()

Dim maxTemp = Aggregate temp In temperatures Into Max()

Dim minTemp = Aggregate temp In temperatures Into Min()

Dim orderTotal = Aggregate order In orders Into Sum(order.Amount)

And some F#

query {
    for student in db.Student do
    averageByNullable (Nullable.float student.Age)
}

query {
    for student in db.Student do
    averageBy (float student.StudentID)
}

query {
   for student in db.Student do
   sumBy student.StudentID
}

let student =
    query {
        for student in db.Student do
        maxBy student.StudentID
    }

We can avoid the special Aggregate syntax that exists in VB.

Dim customerMax = From cust In customers
                  Aggregate order In cust.Orders Into MaxOrder = Max(order.Amount)
                  Select cust.CompanyName, MaxOrder

var customerMax = from cust in customers
                  let maxOrder = from o in cust.Orders select o.Amount apply Max
               // let maxOrder = from o in cust.Orders apply Max o.Amount
                  select cust.CompanyName, MaxOrder

Some possible infix improvements that can coexist with apply.

var customerMax = from cust in customers
               // let maxOrder = from o in cust.Orders select Max o.Amount // infix form
               // let maxOrder = from o in cust.Orders select max o.Amount // infix form, deep integration
                  select cust.CompanyName, MaxOrder

@gordanr commented on Sun Jun 21 2015

Here is VB syntax.

Dim query = From word In words Skip 4

Dim query = From word In words Skip While word.Substring(0, 1) = "a" 

Dim query = From word In words Take 2

Dim query = From word In words Take While word.Length < 6

And some F#

query {
    for number in data do
    skipWhile (number < 3)
    select number
}

C# Examples

var numbersSubset = numbers.Take(5).Skip(4);

from n in numbers
select n
apply Take 5
apply Skip 4

from n in numbers
select n
take 5 // possible syntax in the future?
skip 4

var remaining = numbers.SkipWhile(n => n < 9);

var remaining = from n in numbers
                apply SkipWhile n < 9
                select n

var remaining = from n in numbers
                skipWhile n < 9 // possible syntax in the future? deep integration?
                select n

var result = products.OrderByDescending(p => p.UnitPrice).Take(10);

var result = from p in products
             orderby p.UnitPrice descending
             apply Take 10;

var result = from p in products
             orderby p.UnitPrice descending
             take 10; // possible syntax in the future?
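For reference, the semantics of these Take/Skip pipelines are easy to check against plain array operations; a small JavaScript sketch (sample data made up):

```javascript
const numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9];

// numbers.Take(5).Skip(4): keep the first five, then drop four of those.
const subset = numbers.slice(0, 5).slice(4); // [5]

// numbers.SkipWhile(n => n < 9): drop the leading run matching the predicate.
const firstKept = numbers.findIndex((n) => !(n < 9));
const remaining = firstKept === -1 ? [] : numbers.slice(firstKept); // [9]
```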

@gordanr commented on Sun Jun 21 2015

VB example

Dim distinctQuery = From grade In classGrades Select grade Distinct

C# example

IEnumerable<string> productCategories = products.Select(p => p.Category).Distinct();

var productCategories = from p in products
                        select p.Category
                        apply Distinct;

Or some possible alternatives.

var productCategories = from p in products
                        select p.Category Distinct;
                     // select p.Category distinct;
                     // select distinct p.Category;
                     // select Distinct p.Category;

@paulomorgado I am not sure that this Distinct call has correct syntax.

customers.Distinct(c => c.CountryID).Select(c => c.Country.Name)
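As a point of comparison, both plain Distinct and the key-based variant being discussed are short in chained style; a JavaScript sketch (sample data made up):

```javascript
// Made-up sample data mirroring the C# snippets above.
const products = [{ Category: 'A' }, { Category: 'B' }, { Category: 'A' }];
const customers = [
  { CountryID: 1, Country: { Name: 'PT' } },
  { CountryID: 1, Country: { Name: 'PT' } },
  { CountryID: 2, Country: { Name: 'US' } },
];

// Plain Distinct over a projection.
const categories = [...new Set(products.map((p) => p.Category))]; // ['A', 'B']

// Distinct by key: keep the first item seen per key.
function distinctBy(items, keyFn) {
  const seen = new Set();
  return items.filter((item) => {
    const key = keyFn(item);
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}

const countryNames = distinctBy(customers, (c) => c.CountryID)
  .map((c) => c.Country.Name); // ['PT', 'US']
```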

@gordanr commented on Mon Jun 22 2015

Some examples in C# and F#

char[] apple = { 'a', 'p', 'p', 'l', 'e' };
char[] reversed = apple.Reverse().ToArray();

from c in apple
select c
apply Reverse
apply ToArray;

from c in apple
select c
apply Reverse
as Array; // That's my proposal for materialization with 'as'.
let b =
    query {
        for student in db.Student do
        select student.Age
        contains 11
    }

bool b = from student in db.Student
         select student.Age
         apply Contains 11

@paulomorgado commented on Sun Jun 21 2015


@paulomorgado I am not sure that method Distinct has correct syntax. customers.Distinct(c => c.CountryID).Select(c => c.Country.Name)

I was thinking of some of my extensions - LINQ: Enhancing Distinct With The SelectorEqualityComparer

@weitzhandler commented on Tue Dec 15 2015

I strongly vote for this one.

I never use LINQ language queries; my coding guideline is to avoid them and always use the extension methods directly, because of the ugly parentheses each query has to be wrapped in to materialize it (ToArray, ToList, Sum, SingleOrDefault, etc.).

Until this issue is addressed, the language's built-in LINQ syntax is of little use to me. I really hope to see this implemented soon, maybe to a more limited extent (avoiding the introduction of a gazillion new language keywords).

I'd say the syntax should provide an operator that expects an extension method available for IEnumerable<TElement>, for instance:

//parents is Parent[]
var parents = from student in students
                     where student.Age < 18
                     select student.Parent
                     call ToArray()

//student is Student
var student = from st in students
                      call SingleOrDefault(st => st.Id == id);

Asynchronous methods should also be supported:

   var student = from st in students
                         call await SingleOrDefaultAsync(st => st.Id == id);

Maybe there should be a verbose, LINQ-fashioned way to pass the arguments and completely avoid the use of parentheses, but I personally don't see it as necessary.

Anyway this feature is crucial for the completeness of the LINQ syntax.

Some suggestions above proposed the ToList at the beginning of the query, but that's a no-go, since we want to be able to process it after the selection, and we don't want to be limited to parameterless extension methods only. What if we want to call ToLookup with a key selector? Bottom line, we can't tie it to the language; we just need a way to call any Enumerable extension method on the current query state and treat its result as the final type of the query.
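The chained style this comment prefers, including the final materialization, looks like this in JavaScript terms (hypothetical data, for comparison only):

```javascript
// Hypothetical students data.
const students = [
  { id: 1, age: 15, parent: 'A' },
  { id: 2, age: 20, parent: 'B' },
];

// filter/map then materialize in one pipeline: the shape the proposed
// `call` operator would give to query syntax.
const parents = students
  .filter((s) => s.age < 18)
  .map((s) => s.parent); // ['A']
```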

Updated 25/03/2017 22:52 3 Comments

Question about Travis iOS builds


I am trying to fix the failed tests on the ceres-solver package NeroBurner has posted. Most of the failures are just related to glog not compiling for a particular platform (I have disabled them and will post a pull request for it later). I am trying to fix 2 items that are related to ceres directly. One has taken me down a rabbit hole and I need some help on where/how it should be fixed.

This issue is on the iOS build, and ceres is set up for building iOS, so I didn't want to just disable the test. The error occurs because ceres' cmake is checking to verify that the iOS version is supported. The issue comes up because ceres expects cmake to be run with a toolchain file before anything else (the toolchain file sets up some cmake vars that the main cmake file uses).

I have tracked as much as I could through hunter looking for a way to set up a toolchain file, but nothing popped out at me, considering the test builds already provide cmake with toolchain files. So I tracked it all back into polly and found that it has toolchain files similar to what ceres is using; it just doesn't set up the same vars. In fact, in the polly "os/iphone.cmake" file:

# Emulate OpenCV toolchain --
set(IOS YES)
# -- end

IOS is the same flag ceres expects from its toolchain file to build for iOS. Ceres' toolchain file uses xcodebuild to fill out the vars:

# Get the SDK version information.
execute_process(COMMAND xcodebuild -sdk ${CMAKE_OSX_SYSROOT} -version SDKVersion

so it should work pretty well, and the vars seem to be properly named. So, on to the questions:

  1. Should polly adopt some of these variables from ceres and just ignore ceres' own toolchain file?
  2. Should we find a way to call ceres' toolchain file?
  3. Or something else?
Updated 27/03/2017 04:14 7 Comments

Replace Native JS Code


I recently realized that only the elm-lang/core package is allowed to have native JS code. Currently elm-styled is using native JS code to inject the CSS. The problem is that I can't publish elm-styled to the official package manager, and users would have to install this package directly from GitHub with tools like elm-github-install. Over the last couple of days I have thought about how to remove the native code from elm-styled so I can finally publish the package. My current solutions are:

Monkey Patching Element.prototype

This would work by adding a new attribute setter to the element prototype; every time an element is created, the styles would be injected magically.


The user would have to inject a single script which monkey patches Element.prototype.


  • The simplest solution for the users.
  • We could use sheet.insertRule() which is very fast.


  • We would have to monkey patch browser internals. (I don’t like that)



  1. Setting up the model to hold all of the rules.
  2. Creating a port module and injecting the styles with JavaScript on the other side, or injecting the styles into a single style tag (probably pretty slow).
  3. Calling styled with an additional argument, the Msg type, to add styles to the model:

    type Msg
        = AddStyle Styled.Msg
        | OtherMsg

    styled AddStyle div [ padding zero ]


  • No monkey patching of browser internals.
  • We could use sheet.insertRule() which is very fast.


  • Much boilerplate, which hurts the simplicity of elm-styled

I would love to get some feedback and suggestions for better solutions. Maybe someone with more Elm experience has a great solution for us.

Updated 24/03/2017 19:35

Style Loader


Any interest in adding a css style loader to webpack by default? I believe the dependency is:

npm install --save-dev style-loader css-loader

With the most basic rule added to webpack.config.js:

    {
        test: /\.css$/,
        use: [ 'style-loader', 'css-loader' ]
    }
Updated 26/03/2017 20:25 2 Comments

Warn when readonly auto-prop is left uninitialized?


Am I missing something or is there truly no warning / suggestion if you leave a readonly property uninitialized? This is rarely what one wants, except maybe for simple value types.

using System;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            var c = new C();
        }
    }

    class C
    {
        public C()
        {
            // if you forget to initialize P = "test";
        }

        public string P { get; }  // there is no warning
    }
}

Updated 27/03/2017 01:28 12 Comments

Proposal for a better "start-up screen" (aka Project Manager)


This sprouted a bit from the discussion about #8134. A better welcome screen could provide

  • list of all created/imported projects
  • a better “new project” screen, where we could provide options for a few basic templates (2d, 3d, GUI application, etc). This also because @reduz mentioned that in Godot 3, 3d games should have a default skybox.
  • templates/asset lib (maybe a better name could be found? online content?)
  • a quick-start help screen with links to the main docs, Q&A, video channels, etc
  • quick access to editor settings (language, hidpi, control scheme, etc)

I did a very quick change in the layout of the project manager screen to include a few of these changes:

Notice that I also added an "automatically start this project on start" option. I found this option quite useful: when you are working on a serious project, you don't really start Godot for anything else, so it's good to just jump right in :)

Would be great to hear more opinions on this and get the discussion started!

Updated 27/03/2017 08:44 15 Comments

Should `get-file` and `delete-file` error if the file DNE?


get-file and delete-file just return silently if they try to act on a file that doesn’t exist. What should be the desired behavior in this case?

get-file should certainly error IMO. delete-file I’m not 100% sure about. The file they tried to delete isn’t there, so that’s good, but more likely than not the user has a typo and was trying to delete something else.

Updated 25/03/2017 01:49 5 Comments

Implement delete-commit in v1.4


delete-commit in 1.3 wasn't very powerful as it only applied to open HEAD commits. 1.4 changes things enough that we should rethink from scratch how we want delete-commit to behave, or even whether we want it at all.

Some considerations:

  • Once a commit is closed, deleting it would leave downstream commits without provenance, which would be nasty. If we allow this, we'd need to delete all provenant (<-- is there a better term for commits with this as provenance?) commits as well.
  • We currently do not allow list-file in v1.4 on open commits. This makes delete-file and delete-commit a bit unwieldy because I can't see what files are actually there. Just finishing that commit and then correcting things in the next one isn't a great solution, because the bad commit would trigger downstream pipelines which fail in catastrophic ways (e.g. OOM or OOD because I forgot to do --split-file).

Updated 24/03/2017 22:31 1 Comments

Discussion: testing for multiple enum values


@danmosemsft commented on Thu Mar 23 2017

@jterry75 commented on Thu Mar 23 2017

I find myself always writing code like:

enum States { On, InBetween } // other members elided in the original

class Example
{
    public States State { get; }
}

class Program
{
    public static void Main(string[] args)
    {
        var t = new Example();
        // ...
        if (t.State == States.On || t.State == States.InBetween) // <- Interested here.
        {
            // Do something.
        }
    }
}
What if C# had an addition specifically for enum types to match multiple values as a language feature?

var t = new Example();
// ...
if (t.State.Is(On, InBetween))
{
    // Do Something
}

I can imagine that this has a few helpful optimizations. For one, we only need to read .State once, which means we know the multiple comparisons are side-effect-free; for two, we can infer the enum type exactly, so we don't need its qualifiers.

enum WowWeWriteExtremelyDescriptiveEnumNames { A, B } // members as used below

var z = WowWeWriteExtremelyDescriptiveEnumNames.A;
// ...
if (z == WowWeWriteExtremelyDescriptiveEnumNames.A ||
    z == WowWeWriteExtremelyDescriptiveEnumNames.B)
{
    // Do Work
}

Goes to:

if (z.Is(A, B))
{
    // Do Work
}

You could imagine that for [Flags] enums we could match based on logical operations:

[Flags]
enum Test { None = 0x0, A = 0x1, B = 0x2, C = 0x4 }

if (z.Is(A, B || C))
{
    // Do work
}
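Absent language support, the check boils down to a membership test evaluated against a single read of the value; a JavaScript sketch with a made-up States object:

```javascript
// Hypothetical enum-like object standing in for the C# States enum.
const States = Object.freeze({ Off: 0, On: 1, InBetween: 2 });

// is(value, ...candidates): true when value equals any candidate.
// The value is passed in once, mirroring the single .State read.
const is = (value, ...candidates) => candidates.includes(value);

const state = States.On;
is(state, States.On, States.InBetween); // true
```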


@danmosemsft commented on Thu Mar 23 2017

@jterry75 thanks for the contribution. I’ll move to what I think is the right repo

@Korporal commented on Thu Mar 23 2017

This can kind of be achieved now with a little fiddling around:

using System;
using System.Linq;

namespace Junk
{
    public enum Sample { First, Second, Third }

    class Program
    {
        static void Main(string[] args)
        {
            Sample a = Sample.First;
            Sample b = Sample.Second;
            Sample c = Sample.Third;

            if (b.IsAny(Sample.Third, Sample.Second))
            {
                // ...
            }
        }
    }

    public static class EnumExtensions
    {
        public static bool IsAny<T>(this T Enum, params T[] Args) where T : struct, IComparable
        {
            return Args.Where(a => Enum.Equals(a)).Any();
        }
    }
}

@benknoble commented on Thu Mar 23 2017

Note that the solution proposed by @Korporal isn’t completely type-safe: it can receive arguments conforming to the where clause (and should still operate as expected), not just enums. We could consider this a feature of the method rather than a bug, but it is notable that the method cannot guarantee T is an enum.

While I don’t know that the language needs a feature for this, it raises again the ability to say where T : enum and were there a feature like this, we could get better type-checking (guaranteeing all the parameters are parts of the enum, or things like that).

@jcouv commented on Thu Mar 23 2017

The C# language design has moved to a separate repo: Please move this discussion over there.

@jterry75 commented on Fri Mar 24 2017

@Korporal - I do roughly that today, but as @benknoble pointed out, it's kind of a hack. With where T : enum this would certainly be possible, but I find it odd that, given that constraint, we would then require every project to implement the extension method, which will be exactly what you wrote. Seems like the language could just provide this in the System namespace or something.

@jcouv - I don't know how to 'move' an issue, but I appreciate the tip on the new repo. Can you assist with the move?

Updated 24/03/2017 22:11 5 Comments

Unify the Review Summary template between SRS and "free" reviews


SRS and "free" reviews use different review summary screens. The SRS review summary has a data table, whereas free reviews use a "skeuomorphic" style, presenting flashcards in a grid:


I think this is nicer overall and a better fit for mobile ux. On the other hand the question is what is important for users in this summary screen?


  1. First I would temporarily ditch the data table view (uiSelectTable component), and use the same PHP templating to output the card grid with the SRS review summary data.

  2. Then we replace this grid view with a Vue.js component. This component receives an array of objects; each object is a card with its meta data. The grid view is a parent component with a loop; the card view is a child component that takes the meta data for one card and formats it visually.

  3. Finally, it is feasible to make this Vue component toggle between "card" and "row" layouts. When in "row" format, the child component can switch its view to horizontal blocks for a table-like presentation. Something to consider here is paging: it might be easier to just return all the review session data and let the client component handle the paging. For a simple first implementation, just skip paging and display all the cards (typically a user reviews 100 cards a day or less anyway).

These steps can be made into separate issues.
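The parent/child split described in step 2 might be sketched as plain Vue options objects (component and field names are made up):

```javascript
// Hypothetical shapes for the two components (Vue 2 style options objects).
const CardView = {
  props: ['card'], // meta data for one card
  template: '<div class="card">{{ card.front }}</div>',
};

const CardGrid = {
  components: { CardView },
  props: ['cards'], // array of card objects with their meta data
  template:
    '<div class="grid"><card-view v-for="c in cards" :card="c" :key="c.id"></card-view></div>',
};
```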


This view is not specific to the review summary, and could be used for confirmation screens in Manage templates (e.g. confirming cards being added & removed).

Updated 26/03/2017 11:25

Sponsorship for Cookiecutter


Opening this as a discussion area for how to help fund efforts for cookiecutter to improve and tackle things like #848.

Not sure how much “sponsorship” is needed, but here is a site for allowing the crowd to contribute as I know my organization would be interested in helping… (I have no affiliation with this site, but just found it while searching for how other projects handle this).

Updated 24/03/2017 23:35 1 Comments

Add a "first start" dialog to the editor to improve user experience


Software like IntelliJ features a “first start” dialog where the user can choose essential settings such as language, editor layout, control scheme, and more.

It was also suggested that the user can also change the DPI mode and font size from there (in case automatic detection failed or is not suited to the display).

If the user changes at least one setting that requires an editor restart there, the user should be prompted to restart the editor (with a button, for example).

Updated 24/03/2017 17:09

[Design] Syntax to represent module namespace object in dynamic import


The other half of dynamic import #14774 is to enable users to describe the shape of the module namespace object returned by dynamic import. This is needed because:

  • TypeScript currently can only resolve a module if the specifier is a string literal (recall the syntax of dynamic import: import(specifier)); otherwise Promise<any> will be returned… Therefore we need a way for users to specify the module shape in the type argument of the Promise and escape noImplicitAny (note: if dynamic import is used as an expression statement, we won't issue noImplicitAny).

  • When we emit a declaration file, we need a syntax to indicate that this is a Promise of some module. Currently the compiler will give an error, as referencing an external module is not the same as the module of the containing file (or lack of one).

🚲 🏠 There are two areas which need to be discussed:

  1. Syntax to bring in the dynamic loaded module so compiler can load during compile time
  2. Syntax to specify the return value of dynamic load (e.g. just as a cast, or a type argument to dynamic import, etc.)

Proposals for (1), bringing in the module (this assumes we will use casting syntax, but it doesn't have to):

  1. Use namespace imports (these will get elided anyway if only used as types):

     import * as MyModule1 from "./MyModule1"
     import * as MyModule2 from "./MyModule2"
     var p = import(somecondition ? "./MyModule1" : "./MyModule2") as Promise<typeof MyModule1 | typeof MyModule2>;

  2. Introduce a new keyword, moduleof; when collecting module references we would also collect from moduleof:

     var p = import(somecondition ? "./MyModule1" : "./MyModule2") as Promise<moduleof MyModule1 | moduleof MyModule2>;

Proposals for (2), indicating the return type:

  1. Since dynamic import is parsed as a call expression, we can reuse the mechanism for specifying type arguments:

     var p = import<Promise<moduleof MyModule1 | moduleof MyModule2>>(somecondition ? "./MyModule1" : "./MyModule2");

  2. Contextual type:

     var p: Promise<moduleof MyModule1 | moduleof MyModule2> = import(somecondition ? "./MyModule1" : "./MyModule2");

  3. Just use casting

  4. allow all of the above.
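For reference, the runtime value all of these syntaxes are trying to type is the module namespace object returned by import(); a plain JavaScript sketch, using Node built-ins as stand-in specifiers:

```javascript
// The specifier is only known at runtime, so the static type of `ns`
// cannot be resolved from the string, which is the problem described above.
const somecondition = true;
const specifier = somecondition ? 'node:path' : 'node:url';

import(specifier).then((ns) => {
  // `ns` is the module namespace object of whichever module was loaded.
  console.log(typeof ns.join); // 'function' when node:path was chosen
});
```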

Updated 27/03/2017 12:25 3 Comments

Overhaul logic for generating Calc specifications in



this is well suited for a more OOP approach: a CalcSpec abstract base class, with one implementation of it for each spec that gets passed to Calc. Each implementation would specify how it is to be handled, e.g. where to find its ‘default’ or ‘all’ values or what data type to be expected (e.g. dtype_out_time is a tuple that gets permuted over within a Calc, not across Calcs). This would make it easier to, for example, support ‘default’ and ‘all’ values for every spec.

I’m now further convinced this is the right approach. Having a CalcSpec class would enable us to cleanly handle:

  • What is acceptable user input for that category (e.g. ‘all’, ‘default’, or a sequence of Models for models, vs. None or ‘vert_int’, or ‘vert_av’ for output_vertical_reductions)
  • Logic for handling each accepted input type (e.g. parsing 'all' and 'default' using a parent object, adding an extra list around regions and output_time_regional_reductions so that they don't generate extra Calcs)

Eventually I think we could use this to do away with the core/aux spec distinction, and it would help more broadly in our goal to relax the rigid proj/model/run hierarchy, cf. #111
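The shape being proposed, an abstract base class plus one subclass per spec where each subclass owns its parsing rules, might be sketched as follows (JavaScript for illustration; all names are made up):

```javascript
// Abstract base: each spec knows how to validate and expand its own input.
class CalcSpec {
  constructor(name) {
    this.name = name;
  }
  parse(value) {
    throw new Error('parse() must be implemented by subclasses');
  }
}

// Example subclass: accepts 'all', 'default', or an explicit value/list,
// resolving 'all'/'default' against a parent object.
class ModelsSpec extends CalcSpec {
  constructor(parent) {
    super('models');
    this.parent = parent;
  }
  parse(value) {
    if (value === 'all') return this.parent.allModels;
    if (value === 'default') return this.parent.defaultModels;
    return Array.isArray(value) ? value : [value];
  }
}

const proj = { allModels: ['m1', 'm2'], defaultModels: ['m1'] };
new ModelsSpec(proj).parse('default'); // ['m1']
```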

Updated 24/03/2017 16:04

Propose: Add "where" support for enum in generic type defintions


Currently there's no way to restrict a generic type argument to be an enum. One "workaround" is to restrict the type to a struct, perhaps adding IComparable etc. to the "where" clause.

So this is a proposal to add this feature to C#.

This was inspired by this related issue:

This has come up numerous times in certain kinds of designs; it isn't a frequent issue but does arise, and does seem like a gap in the language.


Updated 24/03/2017 21:26 2 Comments

Data output format (cloud)


How do we store output data (website contents) in the cloud storage?

STRUCTURE
  • option 1: a number of WET files, each containing many websites (with headers). Pros: smaller number of files; Cons: not flexible
  • option 2: a single file for each relevant website (with headers). Pros: very flexible, no download overhead; Cons: many files
  • option 3: a single file for each relevant website (without headers). Pros: least size required, no download overhead; Cons: many files

I would prefer option 2. To reduce the number of files per directory, we can organize them as follows: root/<WET_file_name>/website_<i>.wet, where i is the index of the entry in the WET file.

COMPRESSION
  • option 1: files are stored compressed (e.g. as .gz). Pros: less size; Cons: more complex
  • option 2: files are stored uncompressed. Pros: simpler; Cons: more size, slower download/upload
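The option-2 naming scheme can be pinned down as a tiny function (a sketch; the WET file name below is made up):

```javascript
// Build the storage key for entry i of a given WET file:
// root/<WET_file_name>/website_<i>.wet
function websitePath(root, wetFileName, i) {
  return `${root}/${wetFileName}/website_${i}.wet`;
}

websitePath('root', 'example.warc.wet', 42); // 'root/example.warc.wet/website_42.wet'
```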

Updated 24/03/2017 18:29 4 Comments

JavaScript Linters



Started Contributors Commits Issues PRs Plugins?
2010 230 1,929 337 68


Started Contributors Commits Issues PRs Plugins?
2013 486 5,698 209 41


Started Contributors Commits Issues PRs Plugins?
2016 68 645 91 10

Importance of Plugins

I can imagine some use-cases for a linter that accepts plugins; such as linting the @tag code comments that bit-docs supports.

At any rate, I feel this is an important feature to have, and we could probably start making use of it right away.

Both ESLint and Prettier support plugins, but JSHint does not. For this reason, I think JSHint should be taken out of the running; JSHint hasn’t discussed adding a plugin system since 2015.

See ESLint’s “Working with Plugins”.

Rewriting Files vs. Giving a Warning

Prettier actually rewrites the files that it lints. It takes the JavaScript in the file, parses it into AST, and spits it out.

This is in contrast to how ESLint simply gives a warning and leaves the fixing up to you.

Supposedly the AST strategy of Prettier can catch formatting/style “errors”, or inconsistencies that ESLint would miss. The example they give is:

foo({ num: 3 },
  1, 2)

foo(
  { num: 3 },
  1, 2)

Saying this would be missed by other linters, while Prettier would clean it up.


Prettier, started in 2016, comes with a bold warning that it is "beta" and that you should always commit your files before formatting them with Prettier, because it rewrites the files and might mess them up:

To format a file in-place, use --write. While this is in beta you should probably commit your code before doing that.

ESLint seems to be the better option as long as Prettier works the way it does, and as long as it comes with that warning. Prettier also warns that the API will probably change, so you would have to rework the integration a couple of times before it stabilizes:

Warning: This is a beta, and the format may change over time. If you aren’t OK with the format changing, wait for a more stable version.

You can read more about the ESLint philosophy here.

You can read more about the Prettier philosophy here.


Let’s discuss:

  1. Do you think we would ever use Prettier in the future, once it finally stabilizes?
  2. Is it worth switching from JSHint to ESLint, for the main benefit of having plugins?
  3. Can you think of any good uses for the plugin system?
  4. Can you think of any other good reasons in support of switching?

Next Steps

Once we’ve reached a decision, we should re-evaluate what standardized rules we want to put in place for consistent coding.

Updated 24/03/2017 19:15

faster rental


When no edit field is enabled and the user scans an ID, the GUI can automatically call SIdCheck or BIdCheck to find out into which edit field the ID has to be written. Would this be like Nuck's image of quickly scanning the book and the student ID and having nothing to click?

Updated 24/03/2017 13:04

Extensible syntax for function space annotations


Currently the only ways we have to annotate a function space are with parentheses/braces ( ( ) / {} / {{ }} ) or with varying numbers of dots (for irrelevant and non-strict). The problem is that there are lots of things people want to add annotations for: linearity, polarity, parametricity, strictness, custom meta solvers, etc.

It would be nice if we could come up with an extensible syntax that would let us experiment with various annotations without coming up with an ad hoc syntax and fighting the Happy parser every time.

My initial offer is using a bar: (annotations | x : A) → B. Multiple annotations could be separated by ; as in (pos; linear 4 | x : A) → B. We would then add annotations corresponding to all the current ways to annotate function spaces:

  • {x : A} → B = (implicit | x : A) → B
  • {{x : A}} → B = (instance | x : A) → B
  • .(x : A) → B = (irrelevant | x : A) → B
  • ..(x : A) → B = (what-ever-this-actually-means | x : A) → B
Updated 24/03/2017 23:14 8 Comments

Coding Standards


As part of our efforts to better the quality of our code and final product we want to define concrete standards to follow with future code we write, and that should be applied to previous code written.

Here are a few ways I think we can achieve this.


I've written a small wiki page to explain what SonarQube is. As you can see when you enter the Issues page, there are a lot of issues with our code. We need to decide exactly which issues (types of issues?) we want to address, both for the existing code and for the future. Most fixes are very easy and quick, but because of the number of fixes it will take a while to get through all of them, so we need to decide whether we want to address all of them or just some.



We have been writing tests so far, but our coverage is far from good. We should decide on a minimum coverage rate we want to achieve. Bear in mind this coverage includes the GUI code, which will both be replaced soon (hopefully with something less bulky) and is less testable, thus lowering our coverage percentage. I think we should settle on 60-70% coverage as our goal; obviously the more the better.


So far a lot of our unit tests were written with high dependency on things such as the GUI or an external database. This makes the automatic run of these tests more complex and time consuming, and less relevant in cases where we change implementation (changing the GUI from Swing to JavaFX, or moving to a non-Parse server). We should decide on some standard for testing to create tests that are less coupled. @ArthurSap introduced DI and mocking last semester and we should make it a requirement for tests.


So far we have focused on unit testing. While it is important, it isn't the only kind of test we need. We should split tests into a few categories, for example:

  • Unit tests
  • Integration tests
  • Performance tests?
  • Any other required category…

This would make our tests more robust, in addition to allowing some tests to be automatic and require DI and mocking (unit tests should be such) and run with Jenkins, while some tests can be manual and require human interaction with the gui, being ignored by Jenkins.

Junit supports splitting tests into categories which will allow it to work seamlessly with Jenkins.

@koralchapnik @Kolikant @yaelAmitay @ArthurSap Reply with your thoughts on the points I made, we should settle on concrete standards by the end of the sprint.

Updated 27/03/2017 08:50 3 Comments

Choosing a logging framework


In order to be better able to analyze errors and bugs, we want to add a logging framework to log the flow of the code.

After some research I think the following solution will work:

  • Logback seems like a good logging implementation for Java.
  • SLF4J is a popular logging facade for Java. A logging facade allows us to wrap the use of a logger with a generic interface, which will let us switch loggers seamlessly if the need ever arises.
  • Graylog is a simple open source server that can put together logs from all users of the application, thus allowing us to view logs more conveniently (with the graphic viewer) and follow up on common crashes that happen to multiple users.

@Kolikant @koralchapnik @ArthurSap @yaelAmitay What do you think?

Updated 25/03/2017 17:06 5 Comments

LXC Assigning same cores in single container multiple times.

  • Distributor ID: Ubuntu
  • Description: Ubuntu 16.04 LTS
  • Release: 16.04
  • Codename: xenial
  • The output of “lxc info” or if that fails: ```

    lxc info

    config: core.https_address: core.trust_password: true api_extensions:

  • storage_zfs_remove_snapshots
  • container_host_shutdown_timeout
  • container_syscall_filtering
  • auth_pki
  • container_last_used_at
  • etag
  • patch
  • usb_devices
  • https_allowed_credentials
  • image_compression_algorithm
  • directory_manipulation
  • container_cpu_time
  • storage_zfs_use_refquota
  • storage_lvm_mount_options
  • network
  • profile_usedby
  • container_push
  • container_exec_recording
  • certificate_update
  • container_exec_signal_handling
  • gpu_devices
  • container_image_properties
  • migration_progress
  • id_map
  • network_firewall_filtering
  • network_routes
  • storage
  • file_delete
  • file_append
  • network_dhcp_expiry api_status: stable api_version: “1.0” auth: trusted public: false environment: addresses:
    • architectures:
    • x86_64
    • i686

```
certificate: |
  -----BEGIN CERTIFICATE-----
  MIIGATCCA+mgAwIBAgIQSpECGa64gzXQLPeODs+o+DANBgkqhkiG9w0BAQsFADBE
  MRwwGgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMSQwIgYDVQQDDBtyb290QGJi
  LWJsci1wcm9kLWNvbXB1dGUtMDMwHhcNMTcwMTEzMDkyNTI4WhcNMjcwMTExMDky
  NTI4WjBEMRwwGgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMSQwIgYDVQQDDBty
  b290QGJiLWJsci1wcm9kLWNvbXB1dGUtMDMwggIiMA0GCSqGSIb3DQEBAQUAA4IC
  DwAwggIKAoICAQDaSNSbODu2Bl1zz/hpjFbhvJXQM7hDPF9MO194Ybm/RiPns4Zb
  VzVhfziUuDUmJxJxrTSGO8i2f55RyXg280PX3jDka4cNCMI4vTTjGUWvwoVDpjeK
  UUU0DAe0kTCPgp1lKDmq3u/nmboGGFdRKkqMTxmCsNaxkhxryoYWoCCRs1k/+dLv
  cjfzHR+rsaXax0z1HlgXOCamjsUkId/XOTWi2o0HeMWbHRRV+BmxGfUBvfSS2LGp
  jd8T5aTReujNOxYwi3Qo3H0mqvcerIaoSg4JMXlvzvb2uGGjNrl22DYoXgXCx7RU
  MhU8Jg422Vwq0iUV+vzGjJDKyWszAANtvsep6YU2EPxPnM0idl/37zPtI9h8EiEa
  vfp76RDjjJN0Nf6Z1hmJj13gIa3+6jowEoI6GwT1Lpa5JmNbTgRdxWEiwsJm4Eo3
  ynsFDmZLa6kZRWsoveE3vw1BWbKwk0iqQunP3u1jTDohYNhKtLPcwfxqZwrrTWiE
  5liSOa9bW93Yrwz/9Cw/dT/0XN7XQV1lcS8PyQRmJqJlvKFu191RgI6eMFe1wBcd
  at00eUvV4uvgBbGCwkT/EF4KuwSzE94FxN5WrxeiI1j3OSS6/LEWPSEOvvuDDtYU
  ZnfH/9jm/TLQUg3GYSqMCyQs5CvrfDfThv4wWVAhfUbMpELkn502ibYfNwIDAQAB
  o4HuMIHrMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATAMBgNV
  HRMBAf8EAjAAMIG1BgNVHREEga0wgaqCFmJiLWJsci1wcm9kLWNvbXB1dGUtMDOC
  DTEwLjEuMjQ4LjMvMjSCG2ZlODA6OjI2NmU6OTZmZjpmZTBlOmIzMC82NIIcZmU4
  MDo6YTIzNjo5ZmZmOmZlYjY6OGViNC82NIIcZmU4MDo6YTIzNjo5ZmZmOmZlYjY6
  OGViNi82NIIcZmU4MDo6Nzg5ODo0N2ZmOmZlNzA6YTVmNi82NIIKZmU4MDo6MS82
  NDANBgkqhkiG9w0BAQsFAAOCAgEAxh4JIMr4XhxXfGnuZVMs+shA12e1rgyM8jBm
  xRBXB4YmHeapQ/TTFCbEL4xj4501TbpnDqceSkAdliysblZb3Gkjo1vHInWNaMw5
  OMcWzFj+fK05ZN6WrgrpjOBsfNsUjzGQM7O5xCoB7zV7ajus1YDmd+rwrbq+KXom
  aso2NMaBB7oDYO/yqVXD4mHRMrD7D3DZQiuoi/piKE2IFKBiL6BahCyWXVwISG06
  Icy421/szbwzaoiDQUYLXD5iNhiGs1SPihjq/bqy+2MXyZ/JixLbOo1BoQ/5KBrk
  4bq9DxyPgM+dDYwaWHIm0O8Noyt6GwNHX/T4DC2iHmPEq2lLdbBJ3x2evjFMWVLp
  8GTAgiOxIDUnOQiT5pWhrh+Qd8sJHXkuk1cDdQZ8P+lj8DwL01Uh4Lh85tFXg6zM
  2HK3I8BSqrChbP6YRwsxafr3rs0pm0UqlLMFTPDfxKsWLFI/C+AYIqrRf3iHM1wC
  dHTLdkF0MCDUBsQIU/bVQpy7tgNiqrGPhN/F2OeOjLj+PhdQhgr7HvDuQ/vIAf/j
  oSvHd3YUwJplEW4c67Q5Dkmnh1821ROnQqYHNvpdRo2iEWze+3iZlOIykypRBu5V
  t7iuylU=
  -----END CERTIFICATE-----
certificate_fingerprint: c5d6625d2184aaedf6e6ed966e3465a9f2d8b2cd7f1733ac330a2b4b0702
driver: lxc
driver_version: 2.0.5
kernel: Linux
kernel_architecture: x86_64
kernel_version: 4.4.0-59-generic
server: lxd
server_pid: 186992
server_version: 2.9.3
storage: zfs
storage_version:
```

We create a lot of containers on a high-end host machine. This is an example of one such container; the following is its config:

lxc config show B059APP8307220

```yaml
architecture: x86_64
config:
  boot.autostart: "1"
  image.architecture: x86_64
  image.description: Ubuntu 16.04 LTS server (20160907.1)
  image.os: ubuntu
  image.release: xenial
  limits.cpu: "2"
  limits.memory: 8GB
  volatile.base_image: 86b8394ca69792791a095894863f468e2b999975fa92c2b0bb09a855b6189941
  volatile.eth0.hwaddr: 00:16:3e:bd:c4:96
  volatile.idmap.base: "0"
  '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    host_name: B059APP8307220
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: lxd
    size: 2GB
    type: disk
  shared_log_1:
    path: /DAS
    source: /gluster-compute/DAS
    type: disk
  shared_log_2:
    path: /LogBackups
    source: /LogBackups
    type: disk
ephemeral: false
profiles:
- default
```

You can see we have assigned 2 CPU cores to the container. The problem is that when, inside the container, we run:

root@B059APP8307220:~# cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model       : 79
model name  : Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz
stepping    : 1
microcode   : 0xb00001e
cpu MHz     : 2793.902
cache size  : 40960 KB
physical id : 0
siblings    : 32
core id     : 1
cpu cores   : 16
apicid      : 3
initial apicid  : 3
fpu     : yes
fpu_exception   : yes
cpuid level : 20
wp      : yes
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
bugs        :
bogomips    : 4200.02
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

processor   : 1
vendor_id   : GenuineIntel
cpu family  : 6
model       : 79
model name  : Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz
stepping    : 1
microcode   : 0xb00001e
cpu MHz     : 1266.890
cache size  : 40960 KB
physical id : 1
siblings    : 32
core id     : 1
cpu cores   : 16
apicid      : 35
initial apicid  : 35
fpu     : yes
fpu_exception   : yes
cpuid level : 20
wp      : yes
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
bugs        :
bogomips    : 4201.26
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

You can see that the core id of both processors is the same.

root@B059APP8307220:~# cat /proc/cpuinfo | grep 'core id'
core id     : 1
core id     : 1

Now this does not happen all the time, but it does happen sometimes.

My question is: is this normal, or is this a bug? When it happens, the performance of the container degrades badly.
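The duplication described above can be spotted quickly with standard shell tools. A minimal sketch, using a captured snippet as a stand-in for running `grep 'core id' /proc/cpuinfo` inside the container:

```shell
# Count duplicated "core id" lines; any count > 0 means at least two of
# the container's vCPUs report the same physical core.
core_ids='core id     : 1
core id     : 1'
printf '%s\n' "$core_ids" | sort | uniq -d | wc -l
```

Running this periodically would show whether the overlap is persistent or only happens on some container starts.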

Updated 24/03/2017 19:18 7 Comments

Proposal: Allow null-coalescing `??` operator on value types


Redesignating the null-coalescing `??` operator as a default-coalescing operator would allow it to be used on value types as well.

For built-in standard value types this would work like a charm:

```csharp
public struct Fraction
{
    public double Counter { get; set; }

    public double Denominator
    {
        double field;
        get => field ?? 1;
        set => field = value ?? throw new ArgumentException("Denominator must not be zero (0.0)");
    }
}
```

In this example a fraction is always valid, even if not initialized.

It is a shortcut for:

```csharp
public struct Fraction
{
    public double Counter { get; set; }

    public double Denominator
    {
        double field;
        get => field != 0 ? field : 1;
        set => field = value != 0 ? value : throw new ArgumentException("Denominator must not be zero (0.0)");
    }
}
```

By comparing with `default(T)`, the null-coalescing character stays intact while value types are allowed as well. Nullable value types have a default of `null`; if one wants to compare the value to `0` or `false`, she must use `ValueType?.GetValueOrDefault() ?? ...`.

Known issue

Though it would work fine with nullables, there is the ambiguity that one wants to check against the value but accidentally checks against null.

```csharp
int? x = 0;
var y = x ?? 1;
```

Is it on purpose to check against `x != null`, or is the actual intention to check against `x != 0`? The latter would have been correct with:

```csharp
int? x = 0;
var y = x.GetValueOrDefault() ?? 1;
```

(This issue is a bit related to #196.)

Updated 25/03/2017 12:56 16 Comments

Code map?


I recently saw this: would adding something similar be a benefit to ECC? It looks cool and could potentially be useful, but I am not 100% sure. What are your thoughts?

Updated 24/03/2017 16:08 1 Comments



I have set one up for everyone to play with (password: justforfun, music: chinamobile_sdc). Security is not guaranteed; it is for testing only. Please also help test what the experience is like. 😀😀😀😀😀😀

Updated 27/03/2017 03:18 17 Comments

Optimization: better image compression


This is a Feature Proposal

:tophat: Description

For feature proposals: * What is the use case that should be solved. The more detail you describe this in the easier it is to understand for us.

Google Page Speed says that we need to optimize the images with better compression to be faster, better and stronger.

  • If there is additional config how would it look

:clipboard: Additional Data

  • Decidim deployment where you found the issue: decidim barcelona
  • Browser & version:
  • Screenshot: imatge

  • Error messages:

  • URL to reproduce the error:
Updated 24/03/2017 11:15 5 Comments
