Contribute to Open Source. Search issue labels to find the right project for you!

Water Quality Averages need to be added


The Water Quality datasheet had averages for all of the data components in the previous system and it has been requested we bring those back in to the current system.

Some details:

  1. Only applies to the Water Quality datasheet.
  2. Only average data that was entered. For example, if no pH values are added, you don’t display an average pH.
  3. For water and air temperature, please make sure the values are displayed in both F and C.
  4. Include units for all average values displayed.
  5. I’m not sure on the location for this data, as the old site did not list each of the data points before. Open for discussion. My gut feeling is that an average below the row of data for each would be appropriate.

Water Temperature | Value | Value | Value | Value | Average (Temp F / Temp C)

Maybe an averages section at the top of the screen once submitted? We will have this data to display for old submissions as well.
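If it helps the discussion, the averaging rules above can be sketched in a few lines. (This is an illustration only; the helper names `average_or_none` and `c_to_f` are made up, not from the codebase.)

```python
def average_or_none(values):
    """Average only the values actually entered; return None if none were."""
    entered = [v for v in values if v is not None]
    if not entered:
        return None  # rule 2: no entries means no average is displayed
    return sum(entered) / len(entered)

def c_to_f(celsius):
    """Convert Celsius to Fahrenheit for the dual temperature display (rule 3)."""
    return celsius * 9 / 5 + 32

# Example: water temperature readings in Celsius, one reading missing
readings_c = [10.0, None, 12.0, 14.0]
avg_c = average_or_none(readings_c)
if avg_c is not None:
    # rule 4: always include units
    print(f"Average Temp: {avg_c:.1f} C / {c_to_f(avg_c):.1f} F")

# A pH column with no entries shows no average at all
print(average_or_none([None, None]))  # -> None
```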

Below is a screenshot of the data from the old site:


Updated 15/12/2017 00:26

Fix performance and JUnit reporting bugs



This is a placeholder for fixing various problems found in production:

  1. Bluepill was using much more memory than necessary due to a bug in how we allocated memory. I suspect this is why we couldn’t run more than a few simulators at the same time.
  2. A bunch of tests were reported as succeeding when in fact the function for comparing XML reports wasn’t finding one of the reports due to a typo.

My plan is to fix the JUnit reporting and any other bugs I find.

I’m publishing work in progress in case anyone wants to comment on the proper way to handle JUnit reporting.

Updated 15/12/2017 00:20

Bug when hitting return key on any non-search page


In #301, we requested that the “return key” trigger a submit on the search page.

Users are reporting an unintended side effect: on all pages on the site, hitting return triggers a submit action, appending a sort=undefined parameter to the request.

For example, hitting return on the homepage leads to

The worst aspect of this bug is that entering a keyword into the “Search within this volume” box on a FRUS volume’s landing page (e.g., entering “policy” into this one) and hitting return (an action that should, and previously did, take users to the volume’s search results) now sends users elsewhere instead. Users are confused by this behavior, as they get no search results.

The confusion is compounded by the fact that the back button doesn’t take them to the previous page, as reported in #315. So users get no results and they lose their place.

Please limit the custom behavior of the return key to search pages.

Updated 14/12/2017 20:35

Spreadsheet template is form specific


The template spreadsheet made available to users should be webapp-form specific.

ALE: The last two columns shouldn’t be included:
  • biological-replicates
  • technical-replicates

Generic: The following columns shouldn’t be included:
  • Insert ALE number
  • Insert Flask number
  • Insert Isolate number
  • Insert Technical Replicate Number

Updated 14/12/2017 20:18

Handle transactions to contribution groups


For accounts that share contribution room (e.g. RegisteredAccounts), TransactionStrategy needs to avoid overcontributing by ensuring that it limits contributions to an account if another account in the group is also receiving contributions.

If we were doing this directly in the logic of each strategy, it would likely need to be done differently in different strategies:

  • In the Ordered strategy, for each account, subtract the sum of transactions to its contribution_group members from its max_inflow attribute when determining a proposed inflow.
  • In the Weighted strategy, determine the sum of transactions planned to be added to those groups in the usual way, then reduce those transactions as necessary (e.g. proportionally to weights) if the group’s max is exceeded, and recurse onto the remaining accounts with the excess.

Rather than do it that way, which is likely to make subclassing harder, add handling code at the __call__ level by adding contribution_group-aware logic to _recurse_min and/or _recurse_max.
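As a rough illustration of the group-aware capping described above (the function name and account labels here are hypothetical, not the project's actual API), a proportional cap over one contribution group might look like:

```python
def cap_group_inflows(proposed, group_max):
    """Cap proposed inflows so the total across a contribution group never
    exceeds the group's shared room. `proposed` maps account -> proposed
    inflow for accounts sharing one group; amounts are scaled
    proportionally when the group max would be exceeded."""
    total = sum(proposed.values())
    if total <= group_max:
        return dict(proposed)  # within the shared room; no change needed
    scale = group_max / total
    return {acct: amount * scale for acct, amount in proposed.items()}

# Two accounts propose 10000 total against 5000 of room, so each is halved.
capped = cap_group_inflows({"rrsp_a": 6000, "rrsp_b": 4000}, group_max=5000)
print(capped)  # -> {'rrsp_a': 3000.0, 'rrsp_b': 2000.0}
```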

See #4 for a related issue.

Updated 14/12/2017 20:06

Handle transactions to/from multiple accounts of the same type


TransactionStrategy determines withdrawals based on account type (e.g. ordered, weighted, etc.). At present, behaviour when multiple accounts of the same type are passed is undefined. We need to define it.

Ideally, treat them as one account and split transactions between them in a reasonable way. Consider contributing to or withdrawing from accounts proportionally to their account balances.
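A minimal sketch of the balance-proportional split suggested above (function name and account labels are made up for illustration):

```python
def split_by_balance(amount, balances):
    """Divide `amount` across same-type accounts in proportion to their
    balances, so larger accounts absorb proportionally more."""
    total = sum(balances.values())
    if total == 0:
        # Fall back to an even split when all balances are zero
        return {acct: amount / len(balances) for acct in balances}
    return {acct: amount * bal / total for acct, bal in balances.items()}

print(split_by_balance(1000, {"rrsp_1": 30000, "rrsp_2": 10000}))
# -> {'rrsp_1': 750.0, 'rrsp_2': 250.0}
```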

(Consider whether other approaches make sense; would it ever make sense to do so in an ordered way between accounts of the same type? Should TransactionStrategy be extended to support user-defined behaviour when dividing transactions between accounts of the same type? For instance, might we want to allow a user to favour contributions to one plannee’s RRSPs over the other’s, due to tax considerations? This may need to be split out into a separate issue for a future milestone.)

Updated 14/12/2017 19:52

Modify withdrawals based on tax liability


Provide means to increase withdrawals based on tax liability. For example, allow user to indicate whether withdrawals are pre-tax or post-tax and, if the latter, provide a way to determine how much must be withdrawn to cover the existing tax liability and any additional liability for the increased withdrawals.

This could be approached in a couple of ways:

  1. Predictively: Add a pre_tax_equivalent method to Tax which converts post-tax amounts to pre-tax amounts (so that the appropriate amount can then be withdrawn). We may further need to add hooks to TaxSource objects (especially Account) to determine what proportion of withdrawn amounts result in taxable income. Since this behaviour can be non-linear, it may not be practicable to do this accurately with a single call; a search algorithm may be required.
  2. After-the-fact: Make the withdrawals of the desired post-tax amount, calculate tax liability, and then add further withdrawals as required to cover tax remittance requirements (see #2 ). This may require some iteration, since additional withdrawals may increase remittances required. Consider scaling withdrawals by the taxpayers' marginal rate as a first-order estimate of required withdrawals.
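For approach 1, the search can be sketched as a fixed-point iteration: withdraw the target plus the tax on the current guess, and repeat until it stabilizes. This is a hedged illustration using a flat marginal rate as a stand-in for the real (possibly non-linear) Tax object:

```python
def gross_up(post_tax_target, tax_on, tol=0.01, max_iter=100):
    """Find a withdrawal w such that w - tax_on(w) is approximately
    post_tax_target, via the fixed-point iteration w <- target + tax_on(w)."""
    w = post_tax_target
    for _ in range(max_iter):
        new_w = post_tax_target + tax_on(w)
        if abs(new_w - w) < tol:
            return new_w
        w = new_w
    return w

def flat_tax(w):
    # Stand-in: flat 30% marginal rate on the withdrawn amount
    return 0.30 * w

w = gross_up(7000, flat_tax)
print(round(w))  # -> 10000 (withdrawing 10000 leaves 7000 after 30% tax)
```

With a flat rate the iteration converges geometrically; a real non-linear tax schedule may need a more careful search, as the issue notes.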

Note that any overwithdrawals can be recontributed in full at the beginning of the next year, whereas underwithdrawals will impact living standard, so when in doubt favour overwithdrawals.

Updated 14/12/2017 19:39

Handle insufficient tax withholdings


Add hooks for determining tax remittances in excess of what’s required via tax withholdings at the Account or Person level. This is likely to be implemented as a method of Tax called from Forecast level.

For example: if $10,000 is owed in taxes across all taxable accounts, but only $2,000 was withheld, is the difference payable in the next year? Via installments in the same year? The Tax class could be extended to address this in a country-specific way via a tax_transactions method (which could include the refund schedule - see #1).

TaxCanada could override that method to implement Canadian tax rules. In Canada, the rule is that a shortfall of more than $3,000 must be remitted in the year via quarterly installments, otherwise it is paid in the following year.
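A toy sketch of that Canadian rule (the function shape and return structure are assumptions for illustration, not the actual TaxCanada API):

```python
def remittance_schedule(shortfall, threshold=3000, installments=4):
    """Sketch of TaxCanada-style logic: a shortfall over $3,000 must be
    remitted in the same year via quarterly installments; smaller
    shortfalls are paid in the following year."""
    if shortfall > threshold:
        return {"when": "same_year",
                "payments": [shortfall / installments] * installments}
    return {"when": "next_year", "payments": [shortfall]}

print(remittance_schedule(8000))  # four same-year installments of 2000.0
print(remittance_schedule(2000))  # one next-year payment of 2000
```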

Updated 14/12/2017 19:21

ValueError at /schools/9/add_account/


Steps to reproduce:

  1. Log in as Admin/Admin
  2. Click the “View Schools” link at the bottom of the page
  3. Select a school (I used Ackerman Middle School)
  4. Click “Manage Accounts”
  5. Click “Add Account”
  6. Fill out all fields and click “Create Account”

See the screenshot below for the fields I had filled out before clicking “Create Account.” I also attached the error output. Here is a shortened version:

ValueError at /schools/9/add_account/ “<User: hertelc1>” needs to have a value for field “id” before this many-to-many relationship can be used.



Updated 14/12/2017 19:19 2 Comments

Investigate delay in MUR/AO refresh documents appearing


We’re having an issue where, after AOs and MURs are refreshed, newly published documents appear only intermittently for approximately 40 minutes.

  • [x] Document IDs for legal docs come from postgres but seem to be changing over time: select document_id, category, filename from aouser.document where ao_id = 4530;
  • [ ] Figure out why the document IDs are changing
  • [ ] Research logs to see if there’s anything helpful there
  • [ ] Try to replicate the issue

Example of IDs changing:

12/14/2017 | Comment on Agenda Document No. 17-59-B by Campaign Legal Center was ID 83657, is now ID 83664





Updated 14/12/2017 23:13 4 Comments

[BUG] Mongo Error topology was destroyed


Dendro Version if known (or site URL)

Please describe the actual behaviour

“Mongo Error topology was destroyed” was reported. The previous operation before this was a folder restore that was executing in

What steps can be taken to reproduce the issue?

Try to restore a folder zip of at least 2.5GB

This seems to be an issue related to the dendro service running in the prd instance. Introducing PM2 might solve this.

Updated 14/12/2017 14:00

Maternity Leave Days Flag on Application


Where a staff member has taken annual leave and wants to take maternity leave in the same calendar year:

  • The system should be able to ascertain that this staff member has taken annual leave earlier in the year and flag the number of days involved.
  • The staff member applying for maternity leave should be given the option to select how the shortfall in maternity leave, due to annual leave taken earlier in the year, should be made up to 12 weeks. Options for making up the shortfall are:
    • Warehoused leave
    • Leave of Absence
    • Annual leave of the succeeding year

Updated 14/12/2017 11:30 1 Comments



Compile the following statistics for September, October, November, and December. Normally each item should have an overall total as well as a per-month breakdown. Also compile per-day statistics for the Double 11 week (11.11–11.18) and the three days around Double 12 (12.12–12.14). All orders are counted by their creation date.

  1. Time from order creation to first confirmation (with a breakdown by event company).
  2. Order fill status (with a breakdown by event company): total orders / count of orders that reached full status.
  3. Proportion of orders modified again after first confirmation but before departure (with a breakdown by customer).
  4. Number of changes and elapsed time for orders that could not be confirmed (Closed status) (with a breakdown by customer).
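A rough illustration of the monthly bucketing and date-window filtering these reports would need (toy data; the field names are assumptions, not the real order schema):

```python
from collections import Counter
from datetime import date

# Toy orders keyed by creation date, as the spec requires
orders = [
    {"created": date(2017, 11, 11), "company": "A"},
    {"created": date(2017, 11, 15), "company": "B"},
    {"created": date(2017, 12, 13), "company": "A"},
]

# Per-month totals
monthly = Counter(o["created"].strftime("%Y-%m") for o in orders)

# Double 11 window (11.11 through 11.18) for the per-day breakdown
double11 = [o for o in orders
            if date(2017, 11, 11) <= o["created"] <= date(2017, 11, 18)]

print(monthly)        # -> Counter({'2017-11': 2, '2017-12': 1})
print(len(double11))  # -> 2
```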

Updated 14/12/2017 08:24

Store auth in sqlite db.


Hi all. I’d like to request that auth details are stored in an indexed database instead of creating a gigantic auth.txt file.

That would prevent things like and

The drawback, in my opinion, would be that it makes maintenance more difficult. What do you think?
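For illustration, here is roughly what an indexed auth store buys you, sketched with SQLite in Python (a concept demo only, not the engine's actual Lua/C++ API): per-join lookups become indexed queries instead of scans over a giant auth.txt.

```python
import sqlite3

# In-memory database for the demo; a real store would be a file on disk
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE auth (
    name TEXT PRIMARY KEY,   -- PRIMARY KEY creates an index automatically
    password TEXT,
    privs TEXT,
    last_login INTEGER)""")
conn.executemany("INSERT INTO auth VALUES (?, ?, ?, ?)",
                 [("alice", "hash1", "interact,shout", 0),
                  ("bob", "hash2", "interact", 0)])

# On player join: one indexed lookup instead of reading the whole file
row = conn.execute("SELECT privs FROM auth WHERE name = ?",
                   ("alice",)).fetchone()
print(row[0])  # -> interact,shout
```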

Updated 14/12/2017 15:03 6 Comments

Large auth.txt file causes Lua ServerThread to leak memory and hang the CPU.


Hi all, after one month of investigation I finally managed to find the culprit (sort of) causing a memory leak on my server. Here is the error:

2017-12-14 01:32:20: ERROR[Main]: ServerError: AsyncErr: ServerThread::run Lua: OOM error from mod '*builtin*' in callback on_joinplayer(): not enough memory
2017-12-14 01:32:20: ERROR[Main]: Current Lua memory usage: 935 MB

Basically, after every single join, memory use spikes. It doesn’t stay high; it fluctuates between 300 MB and 1.3 GB, and the server often gets killed by the kernel because of that. Also, a single server thread process uses 100% of the CPU and hangs everything.

On a clean environment, with a dummy subgame, that is, no mods, not a single line of Lua the bug still happens.

I managed to isolate the issue to auth.txt. My server's auth.txt has 23721 lines. Deleting it makes the server work normally, as expected, without manifesting the memory leak problem.

01:37 <Megaf> I just removed players directory and auth file and memory leak is gone
01:37 <Megaf> let me see if the problem is in auth of players
01:39 <Megaf> ok
01:39 <Megaf> confirmed benrob0329, problem is with big auth.txt
01:41 <benrob0329> Odd, file an issue?
01:41 <benrob0329> That shouldnt be done in luaspace, or even in memory
01:41 <benrob0329> (It could be done in lua if its disk reads)
01:41 <Megaf> well, the OOM message even says onjoin
01:42 <benrob0329> Should be done from disk to avoid memory issues
01:48 <Megaf> benrob0329: *should*

It happens with both the built-in Lua and LuaJIT.

Updated 14/12/2017 18:38 6 Comments

Firewall rules aren't specifiable


At minimum, MySQL, PostgreSQL, and MS SQL are affected by this…

The ARM templates for each of those parameterize the start and end IPv4 addresses for a (single, currently) firewall rule, but we never actually collect those parameters from the inbound provisioning requests… and therefore never pass them through to the ARM templates.

These need to be specifiable because many customers will object to the default rule of -

Note also that the range - denotes all Azure internal IPv4 addresses and would be a more sensible default than -

Updated 13/12/2017 23:39

default finidat_interp_dest file name should have instance number


Bill Sacks < sacks > - 2016-02-26 10:49:37 -0700 Bugzilla Id: 2289 Bugzilla CC: andre, fischer, jedwards, mvertens, raeder, rfisher,

Kevin Raeder pointed out that the default file name for finidat_interp_dest (‘’) leads init_interp to stomp on itself when using use_init_interp with multi-instance. We should change the default to have the instance number in the file name. In the meantime, the workaround is to explicitly specify finidat_interp_dest in each instance’s user_nl_clm.
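A sketch of the proposed default naming (illustrative only; the real fix would live in CLM's namelist-generation logic, and the exact suffix format is an assumption):

```python
def interp_dest_name(base="finidat_interp_dest", inst=None, ext=".nc"):
    """Build the default finidat_interp_dest file name, including the
    multi-instance number (e.g. _0001) so that instances using
    use_init_interp don't stomp on each other's output."""
    suffix = f"_{inst:04d}" if inst is not None else ""
    return f"{base}{suffix}{ext}"

print(interp_dest_name(inst=1))  # -> finidat_interp_dest_0001.nc
print(interp_dest_name())        # -> finidat_interp_dest.nc (single instance)
```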

Updated 13/12/2017 18:25 2 Comments

Workaround for gnu compiler bug (7.2.0 and later): assigning to character array via associate


Bill Sacks < sacks > - 2017-09-20 15:38:17 -0600 Bugzilla Id: 2513 Bugzilla CC: andre, jedwards, rfisher,

This bug in recent versions of gfortran (affecting 7.2.0, 8.0, and possibly other versions): means that we get compilation errors like this:

/Users/jedwards/cesm/cesm2_0_alpha/components/clm/src/soilbiogeochem/SoilBiogeochemDecompCascadeBGCMod.F90:425:6:
  decomp_pool_name_restart(i_litr1) = ‘litr1’
Error: Unclassifiable statement at (1)
/Users/jedwards/cesm/cesm2_0_alpha/components/clm/src/soilbiogeochem/SoilBiogeochemDecompCascadeBGCMod.F90:426:6:
  decomp_pool_name_history(i_litr1) = ‘LITR1’

A workaround would be to set these variables directly rather than via associate statements. We could try doing a find & replace of decomp_pool_name with decomp_cascade_con%decomp_pool_name. I’m not sure if this issue appears in other places, too.

Updated 13/12/2017 18:20 2 Comments

Change to Exit Workflow


Once an employee logs an exit request via self-service, the Exit Management team (Taiwo.ikuejurojo and Samvic.akinyemi) should get notification of the exit request. The current flow, which is what is defined in the BRD, is that HRBPs first get a notification; after completing the exit interview, the supervisor gets notice and fills in his comments before the Exit Management team is notified. We would like the Exit Management team to get notification at the same time the HRBPs do.

Updated 14/12/2017 08:12 1 Comments

Script does not work after the obfuscation



I found a bug in your tool.

I obfuscated this script: and it does not work properly after that. (Options: compact, string array, rotate string array).

You can easily test it. Just download and open index.html in your browser (it works locally, you do not need a server). Then click “Let’s go!” button. It will show a test image at left and will start drawing resulting image at right.

Then obfuscate app.js (in “js” directory) and replace initial app.js. Open index.html again, click the “Let’s go!” button. It will show the test image at left, but will not draw the 2nd image.

I tried both the NPM javascript-obfuscator module and this UI: . The result is the same: the obfuscation breaks the code.

This may be useful for finding the bug: you can see 10 JS files at . Combined, they become that app.js script, so you can obfuscate them one by one to find the offending code.

Please fix it. Thank you!

Updated 14/12/2017 10:16 5 Comments

Narrations disappeared


Narrations were already done and working, merged to develop. Now on dev (and even on restructure), the narrations aren’t working anymore.

I’ve checked whether the changes from are included, and yes, they are there. Probably some logic changed during the restructure, and that might have broken it.

So we need the narrations to display again.

Updated 13/12/2017 20:01

Unable to have OpenGL dependent initialization in a plugin on Linux (at least)


When a main-app plugin needs to access the OpenGL context, or Radium objects that are created only after the OpenGL context, the plugin crashes (e.g. when requesting the ShaderManager instance). On macOS all is fine, due to the order in which the Qt events are managed: by the time a plugin is loaded, access to the above-mentioned OpenGL objects is OK. On Linux, the plugin segfaults and causes the application to crash. To solve this, we need to signal the plugin that OpenGL is initialized, so that it can initialize its OpenGL-dependent functionality. I propose adding such a mechanism to the plugin interface, and having the application signal all plugins that have an OpenGL initializer when it receives the glInitialized signal. Do you agree?

Updated 13/12/2017 09:38 6 Comments

Express Checkout - Captured Authorizations don't allow refunds from WC order screen..??


The issue we are seeing is when we switch the Payment Action to “Authorization”, the Refund option is always grayed out and never gets activated. We have tried multiple tests. We tried doing an authorization and then capturing that authorization but after we did that, the refund button was still grayed out.

Can you enlighten us as to whether we are missing a step, or how to get the ability to do both partial and full refunds for orders that begin as an “Authorization” and are later captured?

Updated 14/12/2017 13:24 4 Comments

(M2C) duplicate code execution?


logs look like this:

I, [2017-12-12T17:29:32.650636 #17137]  INFO -- : PreservedObjectHandler(bb107kv3508, 1, 695701697, <Endpoint: {:endpoint_name=>"services-disk11", :endpoint_type_name=>"online_nfs", :endpoint_type_class=>"online", :endpoint_node=>"localhost", :storage_location=>"/services-disk11/sdr2objects", :recovery_cost=>1}>) incoming version (1) matches PreservedCopy db version
I, [2017-12-12T17:29:32.650805 #17137]  INFO -- : PreservedObjectHandler(bb107kv3508, 1, 695701697, <Endpoint: {:endpoint_name=>"services-disk11", :endpoint_type_name=>"online_nfs", :endpoint_type_class=>"online", :endpoint_node=>"localhost", :storage_location=>"/services-disk11/sdr2objects", :recovery_cost=>1}>) incoming version (1) matches PreservedObject db version
I, [2017-12-12T17:29:32.651005 #17137]  INFO -- : PreservedObjectHandler(bb107kv3508, 1, 695701697, <Endpoint: {:endpoint_name=>"services-disk11", :endpoint_type_name=>"online_nfs", :endpoint_type_class=>"online", :endpoint_node=>"localhost", :storage_location=>"/services-disk11/sdr2objects", :recovery_cost=>1}>) incoming version (1) matches PreservedCopy db version
I, [2017-12-12T17:29:32.651105 #17137]  INFO -- : PreservedObjectHandler(bb107kv3508, 1, 695701697, <Endpoint: {:endpoint_name=>"services-disk11", :endpoint_type_name=>"online_nfs", :endpoint_type_class=>"online", :endpoint_node=>"localhost", :storage_location=>"/services-d

followed by this:

I, [2017-12-12T17:29:32.650876 #17137]  INFO -- : PreservedObjectHandler(bb107kv3508, 1, 695701697, <Endpoint: {:endpoint_name=>"services-disk11", :endpoint_type_name=>"online_nfs", :endpoint_type_class=>"online", :endpoint_node=>"localhost", :storage_location=>"/services-disk11/sdr2objects", :recovery_cost=>1}>) PreservedCopy db object updated
I, [2017-12-12T17:29:32.651170 #17137]  INFO -- : PreservedObjectHandler(bb107kv3508, 1, 695701697, <Endpoint: {:endpoint_name=>"services-disk11", :endpoint_type_name=>"online_nfs", :endpoint_type_class=>"online", :endpoint_node=>"localhost", :storage_location=>"/services-disk11/sdr2objects", :recovery_cost=>1}>) PreservedCopy db object updated

Similar duplication when versions do NOT match:

E, [2017-12-12T16:42:17.414254 #8041] ERROR -- : PreservedObjectHandler(bc009nn3453, 2, 59734, <Endpoint: {:endpoint_name=>"services-disk14", :endpoint_type_name=>"online_nfs", :endpoint_type_class=>"online", :endpoint_node=>"localhost", :storage_location=>"/services-disk14/sdr2objects", :recovery_cost=>1}>) incoming version (2) has unexpected relationship to PreservedCopy db version; ERROR!
I, [2017-12-12T16:42:17.414390 #8041]  INFO -- : PreservedObjectHandler(bc009nn3453, 2, 59734, <Endpoint: {:endpoint_name=>"services-disk14", :endpoint_type_name=>"online_nfs", :endpoint_type_class=>"online", :endpoint_node=>"localhost", :storage_location=>"/services-disk14/sdr2objects", :recovery_cost=>1}>) PreservedCopy status changed from ok to expected_vers_not_found_on_storage
I, [2017-12-12T16:42:17.414459 #8041]  INFO -- : PreservedObjectHandler(bc009nn3453, 2, 59734, <Endpoint: {:endpoint_name=>"services-disk14", :endpoint_type_name=>"online_nfs", :endpoint_type_class=>"online", :endpoint_node=>"localhost", :storage_location=>"/services-disk14/sdr2objects", :recovery_cost=>1}>) PreservedCopy db object updated
E, [2017-12-12T16:42:17.414542 #8041] ERROR -- : PreservedObjectHandler(bc009nn3453, 2, 59734, <Endpoint: {:endpoint_name=>"services-disk14", :endpoint_type_name=>"online_nfs", :endpoint_type_class=>"online", :endpoint_node=>"localhost", :storage_location=>"/services-disk14/sdr2objects", :recovery_cost=>1}>) incoming version (2) has unexpected relationship to PreservedCopy db version; ERROR!
I, [2017-12-12T16:42:17.414609 #8041]  INFO -- : PreservedObjectHandler(bc009nn3453, 2, 59734, <Endpoint: {:endpoint_name=>"services-disk14", :endpoint_type_name=>"online_nfs", :endpoint_type_class=>"online", :endpoint_node=>"localhost", :storage_location=>"/services-disk14/sdr2objects", :recovery_cost=>1}>) PreservedCopy status changed from ok to expected_vers_not_found_on_storage
I, [2017-12-12T16:42:17.414671 #8041]  INFO -- : PreservedObjectHandler(bc009nn3453, 2, 59734, <Endpoint: {:endpoint_name=>"services-disk14", :endpoint_type_name=>"online_nfs", :endpoint_type_class=>"online", :endpoint_node=>"localhost", :storage_location=>"/services-disk14/sdr2objects", :recovery_cost=>1}>) PreservedCopy db object updated
Updated 13/12/2017 02:20

(M2C) logging tweaks


Chances are we do NOT want two messages (esp of diff severity levels) for this:

E, [2017-12-12T16:42:13.344995 #8041] ERROR -- : PreservedObjectHandler(bb092mr2464, 1, 117504541, <Endpoint: {:endpoint_name=>"services-disk14", :endpoint_type_name=>"online_nfs", :endpoint_type_class=>"online", :endpoint_node=>"localhost", :storage_location=>"/services-disk14/sdr2objects", :recovery_cost=>1}>) PreservedObject db object does not exist
I, [2017-12-12T16:42:13.345206 #8041]  INFO -- : PreservedObjectHandler(bb092mr2464, 1, 117504541, <Endpoint: {:endpoint_name=>"services-disk14", :endpoint_type_name=>"online_nfs", :endpoint_type_class=>"online", :endpoint_node=>"localhost", :storage_location=>"/services-disk14/sdr2objects", :recovery_cost=>1}>) added object to db as it did not exist
Updated 13/12/2017 01:35 1 Comments

(M2C) error notification by writing audit errors to DOR workflow


NOTE: this is complicated SDR hoo-ha and probably should not be attempted without the participation of John, Tommy, Joe or some other workflow-aware person.

(see #402 for high level view)


  • is a rails engine used by argo to TEST workflow services. See especially . @mejackreed knows the most about this.

(in addition?) these sul-dlss/argo spec files mock calls and responses:



(probably outdated) -

Updated 12/12/2017 23:27

Formalize Table 1 process/functions


You have a fair amount of loose code floating around related to creating tables for presentation/dissemination. Need to formalize and refine the processes and code. A good example comes from STEP:

# Create the table shell
# ---------------------
table <- tibble(
  variable   = "",
  class      = "",
  no_readmit = step_clean_03 %>% bfuncs::get_group_n(readmit_30_day == 0),
  readmit    = step_clean_03 %>% bfuncs::get_group_n(readmit_30_day == 1)
)

# Select data frame
# -----------------
df <- step_clean_03

# Select group variable (columns)
# -------------------------------
group <- quo(readmit_30_day)

# Select variables (in order of appearance)
# -----------------------------------------
vars <- quos(age, gender, race_eth, insurance, any_trans_limitations, pt_support_score, 
             health_lit_score, high_risk, appts_total, any_sw, any_pt, any_diet, pharmacist, 
             mental_health_concerns, falls_3_months, falls_12_months,chapters_7cat_short)

# Build table
# -----------

for (i in seq_along(vars)) {

  # Figure out what type of variable it is
  class_var_i <- df %>% 
    pull(!!vars[[i]]) %>% 
    class()

  # If it's a categorical (character/factor) variable:
  # Calculate percent and 95% CI
  # Then, add that row to the table
  if (class_var_i == "character" || class_var_i == "factor") {

    row <- df %>% 
      filter(!is.na(!!vars[[i]])) %>% # filter out missing
      group_by(!!group, !!vars[[i]]) %>% 
      freq_table() %>% 
      format_table() %>% 
      spread(key = !!group, value = percent_row_95) %>% 
      mutate(variable = colnames(.)[1]) %>% 
      rename("class" = !!vars[[i]], "no_readmit" = `FALSE`, "readmit" = `TRUE`) %>% 
      mutate(class = as.character(class)) # Need for bind_rows below

    # Append to bottom of table
    table <- bind_rows(table, row)

  # If it's a continuous variable:
  # Calculate mean and 95% CI
  # Then, add that row to the table 
  } else {

    row <- df %>% 
      group_by(!!group) %>% 
      bfuncs::mean_table(!!vars[[i]]) %>% 
      bfuncs::format_table() %>% 
      spread(key = !!group, value = mean_95) %>% 
      rename("variable" = var, "no_readmit" = `FALSE`, "readmit" = `TRUE`)

    # Append to bottom of table
    table <- bind_rows(table, row)
  }
}

  • [ ] Make dplyr-friendly functions
  • [ ] When complete, update vignette as discussed in #23
Updated 12/12/2017 20:49

Things to be worked on for 1.12 release

  • [ ] Lurker AI in water still seems borked (bobs around in place rather than moving) - similar issues apply to Blind Cave Fish
  • [x] Druid Chant sound in Altar only plays for the first Talisman construction
  • [x] Somehow you can get aspectless Aspect Vials through the Infuser (happened to a guy on stream)
  • [ ] ~Modded armour render bugged in boat~
    No longer breaks the textures, but animations aren’t applied to armour. Not sure if can be fixed?
  • [ ] Dual furnace top flux slot needs looking at, apparently broken
  • [ ] Sometimes Fireflies don’t fly at all (see Darko stream) or swim underwater and drown.

Suggestions/Areas of Improvement:
  • [ ] Volarkite (maybe for the update after, since it needed reworking)
  • [ ] Coarse Swamp Dirt is already a block that exists, but has no apparent current purpose. I’d suggest giving it an alternate texture and maybe making it craftable with Swamp Dirt + Silt?
  • [ ] It’d be cool if you could make the old Weedwood Bark block (the one with bark on all sides) with a 2x2 square of Weedwood Logs, kinda like how vanilla will be doing it
  • [ ] There’s a “Carved Cragrock” block that looks identical to the Chiseled Cragrock and generates in Cragrock Towers; maybe remove that and just make the tower generate Chiseled Cragrock?
  • [ ] The Gecko Cage doesn’t currently drop the Gecko when mined; it could either leave behind the entity itself or just the caught item
  • [ ] Patchy Islands used to have Weedwood trees that would gen in the water. If possible we could maybe bring those back. Decoration is pretty good overall, but perhaps reduce the amount of giant shrooms a fair bit.
  • [ ] TEs that require shift + right-click to use should have a bit of text pop up telling you that when you place them down.
  • [ ] Weedwood tree leaves should have biome colours like grass.
  • [ ] I realised that we really ought to have maps. We have amate paper, so why not amate paper maps that work in the BL? Needs marker textures if locations should show up

Updated 14/12/2017 15:25 3 Comments



Existing analytics software is no longer used for the API Gateway. DataBC needs to acquire the following metrics.

  • Total hits per month

For the following APIs:
  • Gated Geocoder
  • BC Route Planner
  • WorkBC job postings

If possible, also record the results to a CSV file.

Proposed tech: Elasticsearch
Location of active December logs: to discuss in meeting
Additional metrics: can be added in a future issue

Approach: Parse raw access log file (Nginx). Process with Logstash and inject into Elasticsearch.
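A minimal sketch of the parse-and-count step (a Python stand-in for the Logstash pipeline; the sample log lines and API paths below are made-up examples, not the real DataBC endpoints):

```python
import csv
import io
import re
from collections import Counter

# Match the timestamp and request path of an Nginx combined-format line
LINE = re.compile(r'\[(?P<day>\d{2})/(?P<mon>\w{3})/(?P<year>\d{4}).*?\] '
                  r'"(?:GET|POST) (?P<path>\S+)')

sample_log = '''1.2.3.4 - - [05/Dec/2017:10:00:00 +0000] "GET /geocoder/addresses HTTP/1.1" 200 123
5.6.7.8 - - [06/Dec/2017:11:00:00 +0000] "GET /router/directions HTTP/1.1" 200 456
'''

# Count hits per (month, top-level API path segment)
hits = Counter()
for line in sample_log.splitlines():
    m = LINE.search(line)
    if m:
        hits[(m["year"] + "-" + m["mon"], m["path"].split("/")[1])] += 1

# Dump the monthly totals to CSV, as the issue requests
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["month", "api", "hits"])
for (month, api), n in sorted(hits.items()):
    writer.writerow([month, api, n])
print(out.getvalue())
```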

Updated 13/12/2017 17:19 1 Comments

(PC) uber-ticket: Update DOR workflow with audit error findings


It is highly desirable to have audit error findings discoverable in Argo. We can use a passive workflow (one that has no robots and exists only to record status information) to record such errors via the workflow services API described at

This automatically provides the information to Argo indexing and display, and obviates the need for additional special calls or new Argo screen work.

Updated 12/12/2017 23:13 1 Comments

Unapproved Relief Manually Inserted


Upon application for leave, an employee with no approved relief can manually insert any employee in the relief field, and the system accepts the relief and the transaction.

Build a rule to stop the transaction.

Exclude/exception for the rule: Sick Leave Standard SL1 and Sick Leave SL2, the reason being that the HRBP or the supervisor can add a relief in the absence of the staff.
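A sketch of the proposed validation rule (the function name and the exact leave-type strings are assumptions for illustration):

```python
# Leave types exempt from the rule: HRBP/supervisor may add a relief
# on behalf of absent staff for these.
EXEMPT_LEAVE_TYPES = {"Sick Leave Standard SL1", "Sick Leave SL2"}

def validate_relief(leave_type, relief, approved_reliefs):
    """Reject a leave application whose relief officer is not on the
    approved list, except for the exempt sick-leave types."""
    if leave_type in EXEMPT_LEAVE_TYPES:
        return True
    return relief in approved_reliefs

print(validate_relief("Annual Leave", "j.doe", {"a.smith"}))   # -> False
print(validate_relief("Sick Leave SL2", "j.doe", {"a.smith"}))  # -> True
```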

Updated 13/12/2017 11:14 2 Comments

Pascal - Generics - All Collections - Make Read-Only Interfaces TRULY Read-Only


Looking through the code produced by @LK-Daniel, I cannot help but notice that the base Collections interfaces are not truly read-only. For one thing, there’s a Clear method defined in there, which means that any consuming code operating against what SHOULD be a read-only interface could easily erase everything in that collection.

This needs addressing.

Updated 12/12/2017 09:31

return of the scope-not-found problem


related issue: #189

For both FunctionAnnotationTest and AssignmentOperatorsTest, the scope of functions declared within another function can not be found by ‘ddg.where()’. This is used when trying to determine which functions a statement calls and the respective libraries or scopes they are from.

As is the case listed in issue #189, this seems to be an issue related to the difference between the functions .ddg.parse.commands() and ddg.return.value(), as well as how the final statement in a function is handled.

In order to get the tests to pass, should I try to fix this issue or comment out the code blocks within the test that deal with this nested function problem?

Updated 11/12/2017 23:06 2 Comments

(M2C) lib/audit/m2c single root method calls pohandler.check_existence


Currently lib/audit/m2c, when checking existence, has some logic. It should become stupider, more like seed from single disk, and the logic should be part of check_existence method (see #284)

  • [ ] do for single storage root
  • [ ] remove boolean argument for create_if_does_not_exist
  • [x] ensure call for all storage roots follows this approach. (already done)
Updated 13/12/2017 01:06

possible issue with tutorial


When I run:

learnr::run_tutorial("introduction", package = "ggformula")

I get an error message: ERROR: path[1]="": No such file or directory

sessionInfo()
R version 3.4.1 (2017-06-30)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS High Sierra 10.13.2

Matrix products: default
BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.4/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
 [1] bindrcpp_0.2 NHANES_2.1.0 mosaic_1.1.1 Matrix_1.2-12
 [5] mosaicData_0.14.0 lattice_0.20-35 dplyr_0.7.4 ggformula_0.6
 [9] ggplot2_2.2.1 tibble_1.3.4 learnr_0.9.1 shiny_1.0.5
[13] RMySQL_0.10.13 DBI_0.7

loaded via a namespace (and not attached):
 [1] reshape2_1.4.2 purrr_0.2.4 splines_3.4.1 colorspace_1.3-2 htmltools_0.3.6
 [6] yaml_2.1.14 rlang_0.1.4 withr_2.1.0 foreign_0.8-69 glue_1.2.0
[11] bindr_0.1 plyr_1.8.4 mosaicCore_0.4.2 stringr_1.2.0 munsell_0.4.3
[16] gtable_0.2.0 htmlwidgets_0.9 psych_1.7.8 evaluate_0.10.1 knitr_1.17
[21] httpuv_1.3.5 parallel_3.4.1 markdown_0.8 broom_0.4.3 Rcpp_0.12.14
[26] xtable_1.8-2 scales_0.5.0 backports_1.1.1 jsonlite_1.5 mime_0.5
[31] gridExtra_2.3 mnormt_1.5-5 digest_0.6.12 stringi_1.1.6 grid_3.4.1
[36] rprojroot_1.2 tools_3.4.1 magrittr_1.5 lazyeval_0.2.1 ggdendro_0.1-20
[41] tidyr_0.7.2 pkgconfig_2.0.1 MASS_7.3-47 assertthat_0.2.0 rmarkdown_1.8
[46] R6_2.2.2 nlme_3.1-131 compiler_3.4.1

Updated 11/12/2017 14:01 1 Comments

Segfault on "import torch"


Hi, I have installed PyTorch using the commands from the official webpage: sudo pip3 install and sudo pip3 install torchvision. (Without sudo I was getting an error about permissions; after using sudo and getting the segfault, I tried reinstalling without sudo with changed permissions on the problematic folder, but pip says I already have everything properly installed.)

On import torch I get the following error:

[1] 12292 segmentation fault python3

The number (12292) varies and I couldn't find more info.

My system:
Linux Mint 18.3, Linux version 4.10.0-21-generic (buildd@lgw01-48) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4)) #23~16.04.1-Ubuntu
Drivers: 384.90
CUDA: 9.0.176-1
Python: 3.5.1-3 (2.7 is also installed)

After some research on the internet I found a couple of similar issues (no help for me, though), so I now also have output from gdb:

gdb python3
(gdb) r
...
>>> import torch
[New Thread 0x7ffff342e700 (LWP 11736)]
[New Thread 0x7ffff0c2d700 (LWP 11737)]
[New Thread 0x7fffee42c700 (LWP 11738)]

Thread 1 "python3" received signal SIGSEGV, Segmentation fault.
0x0000000000002260 in ?? ()

(gdb) bt
#0  0x0000000000002260 in ?? ()
#1  0x00007ffff7de76ba in call_init (l=<optimized out>, argc=argc@entry=1, argv=argv@entry=0x7fffffffd4a8, env=env@entry=0xbcf5c0) at dl-init.c:72
#2  0x00007ffff7de77cb in call_init (env=0xbcf5c0, argv=0x7fffffffd4a8, argc=1, l=<optimized out>) at dl-init.c:30
#3  _dl_init (main_map=main_map@entry=0xfc9390, argc=1, argv=0x7fffffffd4a8, env=0xbcf5c0) at dl-init.c:120
#4  0x00007ffff7dec8e2 in dl_open_worker (a=a@entry=0x7fffffffaf30) at dl-open.c:575
#5  0x00007ffff7de7564 in _dl_catch_error (objname=objname@entry=0x7fffffffaf20, errstring=errstring@entry=0x7fffffffaf28, mallocedp=mallocedp@entry=0x7fffffffaf1f, operate=operate@entry=0x7ffff7dec4d0 <dl_open_worker>, args=args@entry=0x7fffffffaf30) at dl-error.c:187
#6  0x00007ffff7debda9 in _dl_open (file=0x7fffe99d40c0 "/usr/local/lib/python3.5/dist-packages/torch/", mode=-2147483391, caller_dlopen=0x60b35a <_PyImport_FindSharedFuncptr+138>, nsid=-2, argc=<optimized out>, argv=<optimized out>, env=0xbcf5c0) at dl-open.c:660
#7  0x00007ffff75ecf09 in dlopen_doit (a=a@entry=0x7fffffffb160) at dlopen.c:66
#8  0x00007ffff7de7564 in _dl_catch_error (objname=0xa7f8b0, errstring=0xa7f8b8, mallocedp=0xa7f8a8, operate=0x7ffff75eceb0 <dlopen_doit>, args=0x7fffffffb160) at dl-error.c:187
#9  0x00007ffff75ed571 in _dlerror_run (operate=operate@entry=0x7ffff75eceb0 <dlopen_doit>, args=args@entry=0x7fffffffb160) at dlerror.c:163
#10 0x00007ffff75ecfa1 in __dlopen (file=<optimized out>, mode=<optimized out>) at dlopen.c:87
#11 0x000000000060b35a in _PyImport_FindSharedFuncptr ()
#12 0x000000000061000b in _PyImport_LoadDynamicModuleWithSpec ()
#13 0x0000000000610538 in ?? ()
#14 0x00000000004e9c36 in PyCFunction_Call ()
#15 0x000000000053dbbb in PyEval_EvalFrameEx ()
#16 0x0000000000540199 in ?? ()
#17 0x000000000053c1d0 in PyEval_EvalFrameEx ()
#18 0x000000000053b7e4 in PyEval_EvalFrameEx ()
#19 0x000000000053b7e4 in PyEval_EvalFrameEx ()
#20 0x000000000053b7e4 in PyEval_EvalFrameEx ()
#21 0x000000000053b7e4 in PyEval_EvalFrameEx ()
#22 0x0000000000540f9b in PyEval_EvalCodeEx ()
#23 0x00000000004ebd23 in ?? ()
#24 0x00000000005c1797 in PyObject_Call ()
#25 0x00000000005c257a in _PyObject_CallMethodIdObjArgs ()
#26 0x00000000005260c8 in PyImport_ImportModuleLevelObject ()
#27 0x0000000000549e78 in ?? ()
#28 0x00000000004e9ba7 in PyCFunction_Call ()
#29 0x00000000005c1797 in PyObject_Call ()
#30 0x0000000000534d90 in PyEval_CallObjectWithKeywords ()
#31 0x000000000053a1c7 in PyEval_EvalFrameEx ()
#32 0x0000000000540199 in ?? ()
#33 0x0000000000540e4f in PyEval_EvalCode ()
#34 0x000000000054a6b8 in ?? ()
#35 0x00000000004e9c36 in PyCFunction_Call ()
#36 0x000000000053dbbb in PyEval_EvalFrameEx ()
#37 0x0000000000540199 in ?? ()
#38 0x000000000053c1d0 in PyEval_EvalFrameEx ()
#39 0x000000000053b7e4 in PyEval_EvalFrameEx ()
#40 0x000000000053b7e4 in PyEval_EvalFrameEx ()
#41 0x000000000053b7e4 in PyEval_EvalFrameEx ()
#42 0x0000000000540f9b in PyEval_EvalCodeEx ()
#43 0x00000000004ebd23 in ?? ()
#44 0x00000000005c1797 in PyObject_Call ()
#45 0x00000000005c257a in _PyObject_CallMethodIdObjArgs ()
#46 0x00000000005260c8 in PyImport_ImportModuleLevelObject ()
#47 0x0000000000549e78 in ?? ()
#48 0x00000000004e9ba7 in PyCFunction_Call ()
#49 0x00000000005c1797 in PyObject_Call ()
#50 0x0000000000534d90 in PyEval_CallObjectWithKeywords ()
#51 0x000000000053a1c7 in PyEval_EvalFrameEx ()
#52 0x0000000000540199 in ?? ()
#53 0x0000000000540e4f in PyEval_EvalCode ()
#54 0x000000000060c272 in ?? ()
#55 0x000000000046b89f in PyRun_InteractiveOneObject ()
#56 0x000000000046ba48 in PyRun_InteractiveLoopFlags ()
#57 0x000000000046cfa0 in ?? ()
#58 0x00000000004cf2bd in ?? ()
#59 0x00000000004cfeb1 in main ()
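One general debugging step worth trying here (standard Python, not specific to torch): the faulthandler module in the standard library dumps a Python-level traceback when the interpreter receives a fatal signal such as SIGSEGV, which can narrow down exactly which import statement is crashing:

```python
import faulthandler

# Enable the fault handler before the suspect import: on a fatal signal
# (SIGSEGV, SIGFPE, SIGABRT, SIGBUS) the interpreter writes the Python-level
# traceback to stderr instead of dying silently.
faulthandler.enable()
print(faulthandler.is_enabled())  # True

# Then run the crashing import in the same process, e.g.:
# import torch
```

Equivalently, run the interpreter as python3 -X faulthandler or set the PYTHONFAULTHANDLER environment variable, so no code change is needed.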

Updated 12/12/2017 22:08 8 Comments

java.lang.NullPointerException at $$Lambda$2.onClick (


java.lang.NullPointerException: at com.marc.browse.MainActivity.loadHub(

at <OR> com.marc.browse.MainActivity.getMimeType ( or .lambda$create$17$MainActivity ( or .access$500 ( or .access$2500 (

at com.marc.browse.MainActivity$$Lambda$2.onClick (

at android.view.View.performClick (

at android.view.View$ (

Updated 11/12/2017 10:01

java.lang.NullPointerException at $createAndShowMenu$2$1.onClick (



at com.marc.peregrine.MainActivity$createAndShowMenu$2$1.onClick (

at$AlertParams$3.onItemClick (

at android.widget.AdapterView.performItemClick (

at android.widget.AbsListView.performItemClick (

at android.widget.AbsListView$ (

at android.widget.AbsListView$ (

Updated 11/12/2017 09:59

java.lang.IllegalStateException at onStop (


Caused by: java.lang.IllegalStateException: at com.marc.peregrine.tabs.Tabs.isEmpty(

at <OR> com.marc.peregrine.tabs.Tabs.getTabById ( or .add ( or .close ( or .forEach ( or .saveTabs ( or .addSavedTabs ( or .access$change (

at com.marc.peregrine.MainActivity.onStop (

at (

at (

at (

Updated 11/12/2017 09:56

Planning for 2.1.xx


Going to do the next release more rapidly, so let's try to create ~20 icon modifications or additions, tracked as bullet points below.

  • [x] eye-plus
  • [x] tractor
  • [ ] Network icons
  • [x] function
  • [x] reminder
  • [ ] lock-question and lock-alert
  • [ ] file-question
  • [ ] folder-edit
  • [ ] playlist-edit
  • [ ] gpu
  • [ ] scanner-off
  • [ ] amazon-alexa
  • [ ] police
  • [ ] car-limousine
  • [ ] bed-empty
  • [ ] cellphone-message
  • [ ] tumble-dryer
  • [ ] steering-off
  • [ ] Various light icons
  • [ ] security-account
  • [ ] image-plus
  • [ ] thermostat-box
  • [ ] Update bell and bell-off
  • [ ] Linux distributions?
  • [ ] Folder icons?
  • [ ] IEC/IEEE Power Statuses
Updated 14/12/2017 16:18
