Contribute to Open Source. Search issue labels to find the right project for you!

Add option to disable automatically outputting helpfile information on bad option entry

marklogic/marklogic-contentpump

Perhaps add an option in the log4j properties not to spit out the help-file content for a misconfigured command-line option?

Some of these help file entries are pretty large, and can make it difficult to see what the input error was in environments with limited scrollback capability.

This is a minor request to improve usability in difficult environments.

Updated 02/08/2017 17:07 5 Comments

When generating pacman.conf, replace $channel

uboslinux/ubos-admin

Pacman doesn’t know how to handle variables that it hasn’t hard-coded. It would be nice if there were a single UBOS channel variable somewhere that the pacman repos could reference. Do this:

  • create /etc/ubos/channel if it does not already exist
  • have /etc/pacman.d/repositories.d/* use $channel in their path names
  • when we generate /etc/pacman.conf, we don’t just concatenate, but also replace $channel
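The generate-and-substitute step can be sketched as follows. This is an illustrative Java sketch only (ubos-admin itself is written in Perl), and the fragment text and channel value are invented for the example:

```java
import java.util.List;

// Illustrative sketch, not ubos-admin code: build the pacman.conf body by
// concatenating repository fragments and expanding $channel ourselves,
// since pacman cannot expand variables it doesn't know about.
public class PacmanConfGenerator {
    public static String generate(List<String> fragments, String channel) {
        StringBuilder conf = new StringBuilder();
        for (String fragment : fragments) {
            conf.append(fragment.replace("$channel", channel));
        }
        return conf.toString();
    }
}
```

Here the channel string would be read from /etc/ubos/channel (created first if absent).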
Updated 25/07/2017 19:16

Refactor synopsisHelp API

uboslinux/ubos-admin

It currently returns a hash of command-line to description. This forces us to repeat the documentation of all parameters for each form of the command line. That’s not good. Instead:

Return a hash of two hashes, with keys “cmds” and “args”. The previous return value goes into “cmds”, and the hash of command-line arguments to explanations goes into “args”.
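A minimal sketch of the proposed return shape, in illustrative Java (the real API is Perl, and the example commands and arguments below are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the proposed synopsisHelp shape: one top-level hash with keys
// "cmds" and "args", so each argument is documented once instead of being
// repeated for every command-line form.
public class SynopsisHelp {
    public static Map<String, Map<String, String>> synopsisHelp() {
        Map<String, String> cmds = new LinkedHashMap<>();
        cmds.put("backup --siteid <id> --out <file>", "Back up one site.");
        cmds.put("backup --all --out <file>", "Back up all sites.");

        Map<String, String> args = new LinkedHashMap<>();
        args.put("--siteid <id>", "Identifier of the site to operate on.");
        args.put("--out <file>", "File to write the backup to.");

        Map<String, Map<String, String>> help = new LinkedHashMap<>();
        help.put("cmds", cmds);
        help.put("args", args);
        return help;
    }
}
```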

Updated 20/07/2017 03:46

Create mechanism for additional option files for mysql

uboslinux/ubos-admin

Per https://dev.mysql.com/doc/refman/5.7/en/option-files.html, one can apparently say !includedir /home/mydir. So we can create /etc/mysql.d or such, move the replication options from /etc/ubos-mysql.conf into there, and keep only those options that a new version of ubos-admin gets to overwrite. Mark all files in /etc/mysql.d as option files not to be overwritten if changed in the PKGBUILD.
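As a sketch, the resulting layout might look like this (the file names and the sample replication option are assumptions for illustration, not the final design):

```
# /etc/ubos-mysql.conf (owned and overwritten by ubos-admin)
!includedir /etc/mysql.d

# /etc/mysql.d/replication.cnf (marked as a config file in the PKGBUILD,
# so local changes are not overwritten on upgrade)
[mysqld]
server-id = 1
```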

Updated 20/07/2017 20:07

XCC performance better than DMSDK performance during bulk ingest

marklogic/java-client-api

We have a Spring Batch-based data migration program that reads data from Oracle RDB and writes it to MarkLogic. We have a 3-node ML cluster that is now on 9.0-1.1 (it was on 8.0-6.3 before).

We’ve been using XCC with a custom “pooling” approach: we create an XCC ContentSource for each host, and each batch of documents to be written to ML is handed off to a ContentSource in round-robin fashion. A new Session is then created from the ContentSource, and session.insertContent is called with an array of Content objects. Very simple, nothing fancy. We use a ContentCreateOptions object for each document, but all we do is set the format to XML, set the collections and permissions, and set the repair level to FULL.
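The round-robin hand-off described above can be sketched generically (illustrative only; T stands in for the per-host XCC ContentSource, and the real migration code may differ):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Generic round-robin pool: each call to pick() returns the next member
// in rotation, wrapping around, and is safe to call from multiple threads.
public class RoundRobinPool<T> {
    private final List<T> members;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinPool(List<T> members) {
        this.members = members;
    }

    public T pick() {
        // floorMod keeps the index non-negative even if the counter overflows
        int i = Math.floorMod(next.getAndIncrement(), members.size());
        return members.get(i);
    }
}
```

In the scenario above, the pool members would be one ContentSource per cluster host, and each batch would call pick() once.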

We now have DMSDK support in the migration tool, alongside the XCC support (which one is used is determined by a command line argument). Our DMSDK code is very simple too, basically this:

    databaseClient = DatabaseClientFactory.newClient(host, port, user, password, DatabaseClientFactory.Authentication.BASIC);
    dataMovementManager = databaseClient.newDataMovementManager();
    writeBatcher = dataMovementManager.newWriteBatcher().withBatchSize(batchSize).withThreadCount(threadCount);
    jobTicket = dataMovementManager.startJob(writeBatcher);

When the migration program gets a batch of documents to write, it then just calls this for each document in the batch:

    writeBatcher.add(doc.getUri(), doc.getMetadata(), doc.getContent());

Once all the batches are written, we have the following cleanup code:

    writeBatcher.flushAndWait();
    dataMovementManager.stopJob(jobTicket);
    dataMovementManager.release();
    databaseClient.release();

Functionally, everything works fine, but we’re consistently getting better results with our custom XCC approach. Details on the migration that we’re testing with:

  1. The migration inserts 2,948,131 documents
  2. Each document is small - just 7 elements, with each element value having less than 100 characters

Details on our 3-node cluster (each node has the same specs): 48 GB memory; 24 X5660 CPUs at 2.80 GHz; 500 GB of disk storage, with plenty of space (we’re testing against an empty database). The migration program runs on a separate machine with 145 GB memory, 24 X5675 CPUs at 3.07 GHz, and 500 GB of storage.

Here are the 4 test runs I did, all with a thread count of 64:

Library   Batch Size   Total Time (ms)   % of DMSDK
XCC       100          339,899           76%
XCC       200          343,193           77%
DMSDK     100          453,655           102%
DMSDK     200          444,564           100%

I’m going to do a few more runs, but these are consistent with all the other runs I’ve done.

I’ve included an export of Monitoring History during the time frame where I did the above 4 runs (they were done in that order too). Interesting notes about what’s in there:

  1. CPU is significantly higher during the DMSDK runs - around 50% compared to 25%
  2. There are lots of ETC hits during the DMSDK runs, but none during the XCC runs. This confuses me.
  3. There are lots more list cache hits during the DMSDK runs, which I figure is for the same reason as the ETC hits (there are plenty of duplicate values across the 2 million plus documents).

xcc-vs-dmsdk-overview-20170718-120732.xlsx

Updated 16/08/2017 22:27 25 Comments

test suite failures due to lack of xdmp:get-server-field privilege

marklogic/java-client-api

Running the test suite from the develop branch yields 5 failures/132 errors. Many of these errors are due to the request user not having the xdmp:get-server-field privilege. Log is attached.

Tests were run in the following environment:

ML version: 9.0-1.1
OS: MacOS
Java version: 1.8.0_102-b14
Client API: develop branch at https://github.com/marklogic/java-client-api/commit/bdbfa6d2eff495000ac4b45b5b9385efabc4291c
Maven: Apache Maven 3.5.0

This was a brand-new installation of everything except Java: I had freshly installed Maven for the first time, ML was a bare install of 9.0-1.1, and I had newly cloned the develop branch. I ran mvn test-compile, followed by mvn exec:java@test-server-init, followed by mvn test.

When I gave the rest-writer role the xdmp:get-server-field privilege, most of the errors went away and I was left with errors that Sam Mefford was also getting, plus five semantics-related errors.

java-client-api-develop-failures.txt

Updated 17/08/2017 20:47 7 Comments

QueryManagerImpl incorrectly hardcodes start page to 1

marklogic/java-client-api

We were experimenting yesterday with adding transaction awareness to our search requests. (In a few places in the app, we open a transaction for a long request, write some data, then expect it to be available to read requests within the same transaction)

We changed from using this queryManager method: queryManager.search(queryDef, searchHandle, start) (implemented at QueryManagerImpl:140)

to this one: queryManager.search(queryDef, searchHandle, start, transaction) (implemented at QueryManagerImpl:161)

Line 162 is the problem: it calls search(queryDef, searchHandle, 1, transaction, null) replacing the start value we pass with 1, for some reason.

This has the effect of always returning page 1, instead of the page we asked for, which gets us into an infinite loop situation.

Is this what’s supposed to happen with transaction-aware searches, or is this a bug? If, as we suspect, it is a bug, when can a patch be issued?

As a workaround, we may be able to use the search function version which includes the forestName, passing null, and it looks like that ought to work.
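The delegation pattern at issue can be illustrated with a toy example (this is not the real QueryManagerImpl code, just the shape of the reported bug):

```java
// Toy illustration: an overload that delegates onward but substitutes a
// hardcoded 1 for the caller's start value, mirroring the reported line 162.
public class PagingBug {
    static long search(long start, String transaction) {
        return start; // stand-in: pretend the result reflects the requested page
    }

    // Buggy delegation: the caller's start is silently replaced with 1.
    static long searchBuggy(long start, String transaction) {
        return search(1, transaction);
    }

    // What callers expect: the requested start page is passed through.
    static long searchFixed(long start, String transaction) {
        return search(start, transaction);
    }
}
```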

Updated 31/07/2017 23:12 1 Comment

Spark : Jersey HTTP client: A message body writer for MultiPart not found

marklogic/java-client-api

I have the following issue using java-client-api v3.0.7 with Apache Spark 1.6.1. Is there any workaround?

Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 4 in stage 10.0 failed 4 times, most recent failure: Lost task 4.3 in stage 10.0 (TID 290, dbslp1428.uhc.com): com.sun.jersey.api.client.ClientHandlerException: A message body writer for Java class com.sun.jersey.multipart.MultiPart, and Java type class com.sun.jersey.multipart.MultiPart, and MIME media type multipart/mixed; boundary=Boundary_4_818906570_1499798623185 was not found
    at com.sun.jersey.api.client.RequestWriter$RequestEntityWriterImpl.<init>(RequestWriter.java:199)
    at com.sun.jersey.api.client.RequestWriter.getRequestEntityWriter(RequestWriter.java:248)
    at com.sun.jersey.client.apache4.ApacheHttpClient4Handler.getHttpEntity(ApacheHttpClient4Handler.java:241)
    at com.sun.jersey.client.apache4.ApacheHttpClient4Handler.getUriHttpRequest(ApacheHttpClient4Handler.java:197)
    at com.sun.jersey.client.apache4.ApacheHttpClient4Handler.handle(ApacheHttpClient4Handler.java:153)
    at com.sun.jersey.api.client.filter.HTTPBasicAuthFilter.handle(HTTPBasicAuthFilter.java:81)
    at com.sun.jersey.api.client.Client.handle(Client.java:648)
    at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
    at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
    at com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:563)
    at com.marklogic.client.impl.JerseyServices.doPost(JerseyServices.java:4357)
    at com.marklogic.client.impl.JerseyServices.postResource(JerseyServices.java:3583)
    at com.marklogic.client.impl.JerseyServices.postBulkDocuments(JerseyServices.java:3696)
    at com.marklogic.client.impl.DocumentManagerImpl.write(DocumentManagerImpl.java:564)
    at com.marklogic.client.impl.JSONDocumentImpl.write(JSONDocumentImpl.java:26)
    at com.marklogic.client.impl.DocumentManagerImpl.write(DocumentManagerImpl.java:557)
    at com.marklogic.client.impl.JSONDocumentImpl.write(JSONDocumentImpl.java:26)
    at com.marklogic.client.impl.DocumentManagerImpl.write(DocumentManagerImpl.java:541)
    at com.marklogic.client.impl.JSONDocumentImpl.write(JSONDocumentImpl.java:26)
    at com.optum.chwy.jobs.Claims2MLKt$main$1.call(Claims2ML.kt:116)
    at com.optum.chwy.jobs.Claims2MLKt$main$1.call(Claims2ML.kt)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:225)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:225)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Updated 20/07/2017 21:22 8 Comments

Improve backup-to-amazon-s3 error message

uboslinux/ubos-admin

Currently it says:

    ubos-admin backup-to-amazon-s3 -v
    INFO : Suspending sites
    INFO : Creating and exporting backup
    INFO : Resuming sites
    ERROR: upload failed: ../tmp/8wC4koRIJ2 to s3://ubos-backup-xxxxx/multi-xxxxx-20170708232442.ubos-backup An error occurred (NoSuchBucket) when calling the CreateMultipartUpload operation: The specified bucket does not exist

Updated 20/07/2017 03:46 1 Comment

Add plugins

uboslinux/ubos-wordpress

Need the following plugins:

  • indieweb: https://wordpress.org/plugins/indieweb/

Themes:

  • independent-publisher (looks nicer): https://wordpress.org/themes/independent-publisher/
  • sempress (more geared towards microformats): https://wordpress.org/themes/sempress/

Updated 18/07/2017 01:39

Verify compliance to the new official subscriptions spec

sangria-graphql/sangria

From what I’ve seen so far, it’s compliant, except that sangria allows specifying multiple subscription fields.

A validation rule for this: https://github.com/graphql/graphql-js/pull/882

  • https://github.com/graphql/graphql-js/issues/897
  • https://github.com/graphql/graphql-js/pull/868
  • https://github.com/graphql/graphql-js/pull/887
  • https://github.com/graphql/graphql-js/pull/888
  • https://github.com/graphql/graphql-js/pull/870

Updated 16/06/2017 20:59

support QBE in combined query

marklogic/java-client-api

For 9.0-2, Erik added the ability to use a QBE in a combined query. I thought it would “just work” with the Java Client API, but it doesn’t. It might be a use case we don’t give two hoots about, in which case you can stop reading now and close this. I’m leaving it out of the Java Client docs for now since it is busted.

Below is a great whacking chunk of Java that exercises a combined query with all its “structured” variants (structured, cts, and QBE). All the queries should return the same results, with the exception of withJsonQBE (because QBE is not document-type agnostic).

It works properly for structured query and cts query, but does something strange and unexpected for QBE: The XML QBE matches nothing, and the JSON QBE matches the XML documents. It’s almost like they’re backwards:

With a standalone QBE, you can use $format to say “use this JSON QBE to match XML” (and vice versa), but you give up that control when you stick it in a combined query. That being so, withXmlQBE() should match only XML and withJsonQBE() should match only JSON (of which there are none in the doc set). They do the opposite.

I was using the Shakespeare data, so everything but the JSON QBE should return the following. The JSON QBE flavor shouldn’t match anything unless you fake up a matchable JSON document.

  • The First Part of Henry the Fourth
  • The Second Part of Henry the Fourth

Combined query exerciser follows:

```java
package examples;

import javax.xml.xpath.XPathExpression;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;

import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.io.Format;
import com.marklogic.client.io.SearchHandle;
import com.marklogic.client.io.StringHandle;
import com.marklogic.client.io.marker.StructureWriteHandle;
import com.marklogic.client.query.ExtractedItem;
import com.marklogic.client.query.ExtractedResult;
import com.marklogic.client.query.MatchDocumentSummary;
import com.marklogic.client.query.QueryManager;
import com.marklogic.client.query.RawCombinedQueryDefinition;
import com.marklogic.client.query.StructuredQueryBuilder;
import com.marklogic.client.query.StructuredQueryDefinition;

import javax.xml.xpath.XPathExpressionException;

public class CombinedQuery {

// replace with your MarkLogic Server connection information
static String HOST = "localhost";
static int PORT = 8000;
static String DATABASE = "bill";
static String USER = "admin";
static String PASSWORD = "admin";

private static DatabaseClient client = DatabaseClientFactory.newClient(
    HOST, PORT, DATABASE,
    new DatabaseClientFactory.DigestAuthContext(USER, PASSWORD));

// Define query options to be included in our raw combined query. These
// options were chosen to simplify the results for purposes of this
// example. These options do the following:
//
// 1. Disable snippeting, using transform-results
// 2. Extract just the /PLAY/TITLE element from the matched documents.
//
static String XML_OPTIONS = 
    "<options xmlns=\"http://marklogic.com/appservices/search\">" +
      "<extract-document-data>" +
        "<extract-path>/PLAY/TITLE</extract-path>" +
      "</extract-document-data>" +
      "<transform-results apply=\"empty-snippet\"/>" +
      "<search-option>filtered</search-option>" +
    "</options>";
static String JSON_OPTIONS =
    "\"options\": {" +
        "\"extract-document-data\": {" +
            "\"extract-path\": \"/PLAY/TITLE\"" +
        "}," +
        "\"transform-results\": {" +
             "\"apply\": \"empty-snippet\"" +
        "}" +
    "}";

// Perform a search using a combined query. The input handle is
// assumed to contain an XML or JSON combined query.   
//
// For purposes of simplifying search result processing and output, 
// the combined query must contain either the XML_OPTIONS or JSON_OPTIONS
// defined above. The options produce a search:response in which each 
// search:match has the following form:
//
// <search:result index="n" uri="..." path="..." score="..." 
//     confidence="....4450079" fitness="0.5848901" href="..." 
//     mimetype="..." format="xml">
//   <search:snippet/>
//   <search:extracted kind="element"><TITLE>a title</TITLE></search:extracted>
// </search:result>
//
// XML DOM is used to extract the title text from the search:extracted element
// of each match.
//
public static void doSearch(StructureWriteHandle queryHandle) {
    // Create a raw combined query
    QueryManager qm = client.newQueryManager();
    RawCombinedQueryDefinition query = 
            qm.newRawCombinedQueryDefinition(queryHandle);

    // Perform the search
    SearchHandle results = qm.search(query, new SearchHandle());

    // Process the results, printing out the title of each match
    try {
        XPathExpression xpath = XPathFactory.newInstance().newXPath().compile("//TITLE");
        for (MatchDocumentSummary match : results.getMatchResults()) {
            ExtractedResult extracted = match.getExtracted();
            if (!extracted.isEmpty()) {
                for (ExtractedItem item : extracted) {
                    System.out.println(xpath.evaluate(item.getAs(Document.class)));
                }
            }
        }
    } catch (XPathExpressionException e) {
        e.printStackTrace();
    }
}

// Use a combined query containing a structured query, string query,
// and query options. A StructuredQueryBuilder is used to create the
// structured query portion. The combined query is expressed as XML.
//
public static void withXmlStructuredQuery() {
    StructuredQueryBuilder qb = new StructuredQueryBuilder();
    StructuredQueryDefinition builtSQ = 
        qb.word(qb.element("TITLE"), "henry");

    System.out.println("*** Searching using an XML structured query...");
    doSearch(new StringHandle().with(
        "<search xmlns=\"http://marklogic.com/appservices/search\">" +
            "<qtext>fourth</qtext>" +
            builtSQ.serialize() + 
            XML_OPTIONS +
        "</search>").withFormat(Format.XML));
}

// Use a combined query containing a structured query, string query,
// and query options. The combined query is expressed as JSON.
public static void withJsonStructuredQuery() {
    System.out.println("*** Searching using a JSON structured query...");
    doSearch(new StringHandle().with(
        "{\"search\" : {" +
            "\"query\": {" +
                "\"word-query\": {" +
                    "\"element\": { \"name\": \"TITLE\"}," +
                        "\"text\": [ \"henry\" ]" +
                "}" +
            "}, " +
            "\"qtext\": \"fourth\"," +
            JSON_OPTIONS +
        "} }").withFormat(Format.JSON));        
}

// Use a combined query containing a cts query, string query,
// and query options. The combined query is expressed as XML.
public static void withXmlCtsQuery() {
    System.out.println("*** Searching using an XML cts query...");
    doSearch(new StringHandle().with(
        "<search xmlns=\"http://marklogic.com/appservices/search\">" +
            "<cts:element-word-query xmlns:cts=\"http://marklogic.com/cts\">" +
              "<cts:element>TITLE</cts:element>" +
              "<cts:text xml:lang=\"en\">henry</cts:text>" +
            "</cts:element-word-query>" +
            "<qtext>fourth</qtext>" +
            XML_OPTIONS +
        "</search>").withFormat(Format.XML));
}

// Use a combined query containing a cts query, string query,
// and query options. The combined query is expressed as JSON.
public static void withJsonCtsQuery() {
    System.out.println("*** Searching using a JSON cts query...");
    doSearch(new StringHandle().with(
        "{\"search\" : {" +
            "\"ctsquery\": {" +
              "\"elementWordQuery\": {" +
                "\"element\" : [\"TITLE\"]," +
                "\"text\" : [\"henry\"]," +
                "\"options\" : [\"lang=en\"]" +
              "}" +
            "}, " +
            "\"qtext\": \"fourth\"," +
            JSON_OPTIONS +
        "} }").withFormat(Format.JSON));        
}

// Use a combined query containing a QBE, string query,
// and query options. The combined query is expressed as XML
// and will only match XML documents.
public static void withXmlQBE() {
    System.out.println("*** Searching using an XML QBE...");
    doSearch(new StringHandle().with(
        "<search xmlns=\"http://marklogic.com/appservices/search\">" +
            "<qtext>fourth</qtext>" +
            "<qbe:query  xmlns:qbe=\"http://marklogic.com/appservices/querybyexample\">" +
              "<TITLE><qbe:word>henry</qbe:word></TITLE>" +
            "</qbe:query>" +
            XML_OPTIONS +
        "</search>").withFormat(Format.XML));
}

// Use a combined query containing a QBE, string query,
// and query options. The combined query is expressed as JSON
// and will only match JSON documents.
public static void withJsonQBE() {
    System.out.println("*** Searching using a JSON QBE...");
    doSearch(new StringHandle().with(
        "{\"search\" : {" +
            "\"$query\": {" +
              "\"TITLE\" : { \"$word\": \"henry\" }" +
            "}," +
            "\"qtext\": \"fourth\"," +
            JSON_OPTIONS +
        "} }").withFormat(Format.JSON));        
}

public static void main(String[] args) {
    withXmlStructuredQuery();
    withJsonStructuredQuery();
    withXmlCtsQuery();
    withJsonCtsQuery();
    withXmlQBE();
    withJsonQBE();
}

}
```

Updated 12/07/2017 22:11 3 Comments

Improve backupinfo output

uboslinux/ubos-admin

With a backup file that only contains a single AppConfiguration, the output is this:

[root@ubos-raspberry-pi2 shepherd]# ubos-admin backupinfo --in /tmp/foo.a02d1954d84f6e48bd4ed31caedf3da02fb14ac71.ubos-backup 
Type:    ubos-backup
Created: 20170605-040324
=== Unattached AppConfigurations ===
AppConfiguration: /shaarli (a02d1954d84f6e48bd4ed31caedf3da02fb14ac71): shaarli
    app:      shaarli
         customizationpoint salt: HASH(0x1384368)
         customizationpoint timezone: HASH(0x1384348)
         customizationpoint title: HASH(0x1384328)

No idea why it prints the customizationpoints here. It doesn’t for whole sites.

Updated 20/07/2017 20:30

Better way to override `@Model` and `@Routable` constructor?

sakuraapi/api

See “Class Decorators” here: https://www.typescriptlang.org/docs/handbook/decorators.html

```
function classDecorator<T extends {new(...args:any[]):{}}>(constructor:T) {
  return class extends constructor {
    newProperty = "new property";
    hello = "override";
  };
}

@classDecorator
class Greeter {
  property = "property";
  hello: string;
  constructor(m: string) {
    this.hello = m;
  }
}

console.log(new Greeter("world"));
```

Could this be used to get typing for injected methods without having to extend `SakuraApiModel`?

Updated 04/06/2017 18:36

I got an IllegalStateException error using the Data Movement SDK

marklogic/java-client-api

This happened after running for over a day.

Here is the error:

27-May-2017 12:27:49.622 SEVERE [http-nio-8080-exec-39] org.apache.catalina.core.StandardWrapperValve.invoke Servlet.service() for servlet [HarvestJDBCDataServlet] in context with path [/easymetahub] threw exception [java.lang.IllegalStateException: This instance has been stopped] with root cause
 java.lang.IllegalStateException: This instance has been stopped
    at com.marklogic.client.datamovement.impl.WriteBatcherImpl.requireNotStopped(WriteBatcherImpl.java:347)
    at com.marklogic.client.datamovement.impl.WriteBatcherImpl.add(WriteBatcherImpl.java:283)
    at com.marklogic.client.datamovement.impl.WriteBatcherImpl.add(WriteBatcherImpl.java:267)
    at com.easymetahub.HarvestJDBCData.doRoot(HarvestJDBCData.java:485)
    at com.easymetahub.HarvestJDBCData.doSomething(HarvestJDBCData.java:224)
    at com.easymetahub.HarvestJDBCData.processBatchSegment(HarvestJDBCData.java:120)
    at com.easymetahub.HarvestJDBCDataServlet.doPost(HarvestJDBCDataServlet.java:33)
    at com.easymetahub.HarvestJDBCDataServlet.doGet(HarvestJDBCDataServlet.java:45)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:622)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:232)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:105)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:506)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
    at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
    at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:1078)
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:760)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1524)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:745)
Updated 05/06/2017 18:26 7 Comments

Flr708 709

LigaData/Kamanja

Jira Tasks - https://ligadata.atlassian.net/browse/FLR-708 and https://ligadata.atlassian.net/browse/FLR-709

Updated 25/06/2017 05:58 1 Comment

SPARQLManagerTest testConstrainingQueries asserts on tuples size.

marklogic/java-client-api

I am able to reproduce this issue locally. Tested on Windows against the develop branch with the 10.0-20170504 server build. Here is the stack trace from the QA nightly regression server run, followed by the HTTP wire trace from a run on a Windows laptop:

<testcase classname="com.marklogic.client.test.SPARQLManagerTest" name="testConstrainingQueries" time="0.576">
    <failure message="expected:&lt;1&gt; but was:&lt;2&gt;" type="java.lang.AssertionError">java.lang.AssertionError: expected:&lt;1&gt; but was:&lt;2&gt;
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:834)
    at org.junit.Assert.assertEquals(Assert.java:645)
    at org.junit.Assert.assertEquals(Assert.java:631)
    at com.marklogic.client.test.SPARQLManagerTest.testConstrainingQueries(SPARQLManagerTest.java:234)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
    at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
    at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
</failure>

Wire Trace:

    10:55:45.824 [main] DEBUG c.m.client.impl.JerseyServices - Connecting to localhost at 8011 as rest-writer
    10:55:46.183 [main] DEBUG c.m.client.impl.JerseyServices - Putting graphs
    2017/05/05 10:55:46:315 PDT [DEBUG] ThreadSafeClientConnManager - Get connection: {}->http://localhost:8011, timeout = 0
    2017/05/05 10:55:46:319 PDT [DEBUG] ConnPoolByRoute - [{}->http://localhost:8011] total kept alive: 0, total issued: 0, total allocated: 0 out of 200
    2017/05/05 10:55:46:319 PDT [DEBUG] ConnPoolByRoute - No free connections [{}->http://localhost:8011][null]
    2017/05/05 10:55:46:319 PDT [DEBUG] ConnPoolByRoute - Available capacity: 100 out of 100 [{}->http://localhost:8011][null]
    2017/05/05 10:55:46:319 PDT [DEBUG] ConnPoolByRoute - Creating new connection [{}->http://localhost:8011]
    2017/05/05 10:55:46:331 PDT [DEBUG] DefaultClientConnectionOperator - Connecting to localhost:8011
    2017/05/05 10:55:46:348 PDT [DEBUG] RequestAddCookies - CookieSpec selected: ignoreCookies
    2017/05/05 10:55:46:350 PDT [DEBUG] RequestAuthCache - Auth cache not set in the context
    2017/05/05 10:55:46:350 PDT [DEBUG] RequestTargetAuthentication - Target auth state: UNCHALLENGED
    2017/05/05 10:55:46:350 PDT [DEBUG] RequestProxyAuthentication - Proxy auth state: UNCHALLENGED
    2017/05/05 10:55:46:350 PDT [DEBUG] DefaultHttpClient - Attempt 1 to execute request
    2017/05/05 10:55:46:351 PDT [DEBUG] DefaultClientConnection - Sending request: HEAD /v1/ping HTTP/1.1
    2017/05/05 10:55:46:351 PDT [DEBUG] wire - >> "HEAD /v1/ping HTTP/1.1[\r][\n]"
    2017/05/05 10:55:46:352 PDT [DEBUG] wire - >> "Host: localhost:8011[\r][\n]"
    2017/05/05 10:55:46:352 PDT [DEBUG] wire - >> "Connection: Keep-Alive[\r][\n]"
    2017/05/05 10:55:46:352 PDT [DEBUG] wire - >> "[\r][\n]"
    2017/05/05 10:55:46:353 PDT [DEBUG] wire - << "HTTP/1.1 401 Unauthorized[\r][\n]"
    2017/05/05 10:55:46:354 PDT [DEBUG] wire - << "Server: MarkLogic[\r][\n]"
    2017/05/05 10:55:46:354 PDT [DEBUG] wire - << "WWW-Authenticate: Digest realm="public", qop="auth", nonce="8ff3995ac607275a97c77723d20c1f15", opaque="1a0feacf356abab0"[\r][\n]"
    2017/05/05 10:55:46:354 PDT [DEBUG] wire - << "Content-Type: application/json; charset=utf-8[\r][\n]"
    2017/05/05 10:55:46:354 PDT [DEBUG] wire - << "Content-Length: 0[\r][\n]"
    2017/05/05 10:55:46:354 PDT [DEBUG] wire - << "Connection: Keep-Alive[\r][\n]"
    2017/05/05 10:55:46:355 PDT [DEBUG] wire - << "Keep-Alive: timeout=5[\r][\n]"
    2017/05/05 10:55:46:355 PDT [DEBUG] wire - << "[\r][\n]"
    2017/05/05 10:55:46:355 PDT [DEBUG] DefaultClientConnection - Receiving response: HTTP/1.1 401 Unauthorized
    2017/05/05 10:55:46:359 PDT [DEBUG] DefaultHttpClient - Connection can be kept alive for 5000 MILLISECONDS
    2017/05/05 10:55:46:359 PDT [DEBUG] DefaultHttpClient - Authentication required
    2017/05/05 10:55:46:359 PDT [DEBUG] DefaultHttpClient - localhost:8011 requested authentication
    2017/05/05 10:55:46:360 PDT [DEBUG] TargetAuthenticationStrategy - Authentication schemes in the order of preference: [Negotiate, Kerberos, NTLM, Digest, Basic]
    2017/05/05 10:55:46:360 PDT [DEBUG] TargetAuthenticationStrategy - Challenge for Negotiate authentication scheme not available
    2017/05/05 10:55:46:360 PDT [DEBUG] TargetAuthenticationStrategy - Challenge for Kerberos authentication scheme not available
    2017/05/05 10:55:46:360 PDT [DEBUG] TargetAuthenticationStrategy - Challenge for NTLM authentication scheme not available
    2017/05/05 10:55:46:364 PDT [DEBUG] TargetAuthenticationStrategy - Challenge for Basic authentication scheme not available
    2017/05/05 10:55:46:364 PDT [DEBUG] ThreadSafeClientConnManager - Released connection is reusable.
    2017/05/05 10:55:46:365 PDT [DEBUG] ConnPoolByRoute - Releasing connection [{}->http://localhost:8011][null]
    2017/05/05 10:55:46:365 PDT [DEBUG] ConnPoolByRoute - Pooling connection [{}->http://localhost:8011][null]; keep alive for 5000 MILLISECONDS
    2017/05/05 10:55:46:365 PDT [DEBUG] ConnPoolByRoute - Notifying no-one, there are no waiting threads
    2017/05/05 10:55:46:581 PDT [DEBUG] ThreadSafeClientConnManager - Get connection: {}->http://localhost:8011, timeout = 0
    2017/05/05 10:55:46:581 PDT [DEBUG] ConnPoolByRoute - [{}->http://localhost:8011] total kept alive: 1, total issued: 0, total allocated: 1 out of 200
    2017/05/05 10:55:46:581 PDT [DEBUG] ConnPoolByRoute - Getting free connection [{}->http://localhost:8011][null]
    2017/05/05 10:55:46:581 PDT [DEBUG] DefaultHttpClient - Stale connection check
    2017/05/05 10:55:46:583 PDT [DEBUG] RequestAddCookies - CookieSpec selected: ignoreCookies
    2017/05/05 10:55:46:583 PDT [DEBUG] RequestAuthCache - Auth cache not set in the context
    2017/05/05 10:55:46:583 PDT [DEBUG] RequestProxyAuthentication - Proxy auth state: UNCHALLENGED
    2017/05/05 10:55:46:583 PDT [DEBUG] DefaultHttpClient - Attempt 1 to execute request
    2017/05/05 10:55:46:583 PDT [DEBUG] DefaultClientConnection - Sending request: HEAD /v1/ping HTTP/1.1
    2017/05/05 10:55:46:583 PDT [DEBUG] wire - >> "HEAD /v1/ping HTTP/1.1[\r][\n]"
    2017/05/05 10:55:46:583 PDT [DEBUG] wire - >> "Authorization: Digest username="rest-writer",realm="public",nonce="8ff3995ac607275a97c77723d20c1f15",opaque="1a0feacf356abab0",qop=auth,uri="/v1/ping",cnonce="702467b5",nc=00000001,response="3ea86272e2346bfb2f78d1a856118341"[\r][\n]"
    2017/05/05 10:55:46:583 PDT [DEBUG] wire - >> "Host: localhost:8011[\r][\n]"
    2017/05/05 10:55:46:584 PDT [DEBUG] wire - >> "Connection: Keep-Alive[\r][\n]"
    2017/05/05 10:55:46:584 PDT [DEBUG] wire - >> "[\r][\n]"
    2017/05/05 10:55:46:591 PDT [DEBUG] wire - << "HTTP/1.1 204 No Content[\r][\n]"
    2017/05/05 10:55:46:592 PDT [DEBUG] wire - << "Server: 
MarkLogic[\r][\n]" 2017/05/05 10:55:46:592 PDT [DEBUG] wire - << "Content-Length: 0[\r][\n]" 2017/05/05 10:55:46:592 PDT [DEBUG] wire - << "Connection: Keep-Alive[\r][\n]" 2017/05/05 10:55:46:593 PDT [DEBUG] wire - << "Keep-Alive: timeout=5[\r][\n]" 2017/05/05 10:55:46:593 PDT [DEBUG] wire - << "[\r][\n]" 2017/05/05 10:55:46:593 PDT [DEBUG] DefaultClientConnection - Receiving response: HTTP/1.1 204 No Content 2017/05/05 10:55:46:593 PDT [DEBUG] DefaultHttpClient - Connection can be kept alive for 5000 MILLISECONDS 2017/05/05 10:55:46:593 PDT [DEBUG] ThreadSafeClientConnManager - Released connection is reusable. 2017/05/05 10:55:46:593 PDT [DEBUG] ConnPoolByRoute - Releasing connection [{}->http://localhost:8011][null] 2017/05/05 10:55:46:593 PDT [DEBUG] ConnPoolByRoute - Pooling connection [{}->http://localhost:8011][null]; keep alive for 5000 MILLISECONDS 2017/05/05 10:55:46:593 PDT [DEBUG] ConnPoolByRoute - Notifying no-one, there are no waiting threads 2017/05/05 10:55:46:599 PDT [DEBUG] ThreadSafeClientConnManager - Get connection: {}->http://localhost:8011, timeout = 0 2017/05/05 10:55:46:599 PDT [DEBUG] ConnPoolByRoute - [{}->http://localhost:8011] total kept alive: 1, total issued: 0, total allocated: 1 out of 200 2017/05/05 10:55:46:599 PDT [DEBUG] ConnPoolByRoute - Getting free connection [{}->http://localhost:8011][null] 2017/05/05 10:55:46:599 PDT [DEBUG] DefaultHttpClient - Stale connection check 2017/05/05 10:55:46:601 PDT [DEBUG] RequestAddCookies - CookieSpec selected: ignoreCookies 2017/05/05 10:55:46:601 PDT [DEBUG] RequestAuthCache - Auth cache not set in the context 2017/05/05 10:55:46:601 PDT [DEBUG] RequestProxyAuthentication - Proxy auth state: UNCHALLENGED 2017/05/05 10:55:46:601 PDT [DEBUG] DefaultHttpClient - Attempt 1 to execute request 2017/05/05 10:55:46:602 PDT [DEBUG] DefaultClientConnection - Sending request: PUT /v1/graphs?graph=http://marklogic.com/java/SPARQLManagerTest HTTP/1.1 2017/05/05 10:55:46:602 PDT [DEBUG] wire - >> "PUT 
/v1/graphs?graph=http://marklogic.com/java/SPARQLManagerTest HTTP/1.1[\r][\n]" 2017/05/05 10:55:46:602 PDT [DEBUG] wire - >> "Content-Type: application/n-triples[\r][\n]" 2017/05/05 10:55:46:602 PDT [DEBUG] wire - >> "ML-Agent-ID: java[\r][\n]" 2017/05/05 10:55:46:602 PDT [DEBUG] wire - >> "Authorization: Digest username="rest-writer",realm="public",nonce="8ff3995ac607275a97c77723d20c1f15",opaque="1a0feacf356abab0",qop=auth,uri="/v1/graphs",cnonce="ba61d2f2",nc=00000002,response="8c2c35586178ba57c03e75cdd9739066"[\r][\n]" 2017/05/05 10:55:46:602 PDT [DEBUG] wire - >> "Transfer-Encoding: chunked[\r][\n]" 2017/05/05 10:55:46:602 PDT [DEBUG] wire - >> "Host: localhost:8011[\r][\n]" 2017/05/05 10:55:46:602 PDT [DEBUG] wire - >> "Connection: Keep-Alive[\r][\n]" 2017/05/05 10:55:46:602 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:602 PDT [DEBUG] wire - >> "91[\r][\n]" 2017/05/05 10:55:46:602 PDT [DEBUG] wire - >> "<http://example.org/s1> <http://example.org/p1> <http://example.org/o1>.[\n]" 2017/05/05 10:55:46:603 PDT [DEBUG] wire - >> "<http://example.org/s2> <http://example.org/p2> <http://example.org/o2>." 
2017/05/05 10:55:46:603 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:603 PDT [DEBUG] wire - >> "0[\r][\n]" 2017/05/05 10:55:46:603 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:611 PDT [DEBUG] wire - << "HTTP/1.1 201 Created[\r][\n]" 2017/05/05 10:55:46:612 PDT [DEBUG] wire - << "Location: [\r][\n]" 2017/05/05 10:55:46:612 PDT [DEBUG] wire - << "Server: MarkLogic[\r][\n]" 2017/05/05 10:55:46:612 PDT [DEBUG] wire - << "Content-Type: text/plain; charset=UTF-8[\r][\n]" 2017/05/05 10:55:46:612 PDT [DEBUG] wire - << "Content-Length: 33[\r][\n]" 2017/05/05 10:55:46:612 PDT [DEBUG] wire - << "Connection: Keep-Alive[\r][\n]" 2017/05/05 10:55:46:612 PDT [DEBUG] wire - << "Keep-Alive: timeout=5[\r][\n]" 2017/05/05 10:55:46:612 PDT [DEBUG] wire - << "[\r][\n]" 2017/05/05 10:55:46:612 PDT [DEBUG] DefaultClientConnection - Receiving response: HTTP/1.1 201 Created 2017/05/05 10:55:46:613 PDT [DEBUG] DefaultHttpClient - Connection can be kept alive for 5000 MILLISECONDS 2017/05/05 10:55:46:615 PDT [DEBUG] wire - << "/triplestore/63b7860eb421b599.xml" 2017/05/05 10:55:46:620 PDT [DEBUG] ThreadSafeClientConnManager - Released connection is reusable. 
2017/05/05 10:55:46:620 PDT [DEBUG] ConnPoolByRoute - Releasing connection [{}->http://localhost:8011][null] 2017/05/05 10:55:46:620 PDT [DEBUG] ConnPoolByRoute - Pooling connection [{}->http://localhost:8011][null]; keep alive for 5000 MILLISECONDS 2017/05/05 10:55:46:620 PDT [DEBUG] ConnPoolByRoute - Notifying no-one, there are no waiting threads 10:55:46.625 [main] DEBUG c.m.client.impl.JerseyServices - Putting graphs 2017/05/05 10:55:46:626 PDT [DEBUG] ThreadSafeClientConnManager - Get connection: {}->http://localhost:8011, timeout = 0 2017/05/05 10:55:46:626 PDT [DEBUG] ConnPoolByRoute - [{}->http://localhost:8011] total kept alive: 1, total issued: 0, total allocated: 1 out of 200 2017/05/05 10:55:46:626 PDT [DEBUG] ConnPoolByRoute - Getting free connection [{}->http://localhost:8011][null] 2017/05/05 10:55:46:626 PDT [DEBUG] DefaultHttpClient - Stale connection check 2017/05/05 10:55:46:627 PDT [DEBUG] RequestAddCookies - CookieSpec selected: ignoreCookies 2017/05/05 10:55:46:627 PDT [DEBUG] RequestAuthCache - Auth cache not set in the context 2017/05/05 10:55:46:627 PDT [DEBUG] RequestProxyAuthentication - Proxy auth state: UNCHALLENGED 2017/05/05 10:55:46:627 PDT [DEBUG] DefaultHttpClient - Attempt 1 to execute request 2017/05/05 10:55:46:627 PDT [DEBUG] DefaultClientConnection - Sending request: PUT /v1/graphs?graph=SPARQLManagerTest.testConstrainingQueries HTTP/1.1 2017/05/05 10:55:46:627 PDT [DEBUG] wire - >> "PUT /v1/graphs?graph=SPARQLManagerTest.testConstrainingQueries HTTP/1.1[\r][\n]" 2017/05/05 10:55:46:627 PDT [DEBUG] wire - >> "Content-Type: application/n-triples[\r][\n]" 2017/05/05 10:55:46:627 PDT [DEBUG] wire - >> "ML-Agent-ID: java[\r][\n]" 2017/05/05 10:55:46:627 PDT [DEBUG] wire - >> "Authorization: Digest username="rest-writer",realm="public",nonce="8ff3995ac607275a97c77723d20c1f15",opaque="1a0feacf356abab0",qop=auth,uri="/v1/graphs",cnonce="0c65a878",nc=00000003,response="b1484a71e4caff9d07676502bed570ec"[\r][\n]" 2017/05/05 10:55:46:627 
PDT [DEBUG] wire - >> "Transfer-Encoding: chunked[\r][\n]" 2017/05/05 10:55:46:628 PDT [DEBUG] wire - >> "Host: localhost:8011[\r][\n]" 2017/05/05 10:55:46:628 PDT [DEBUG] wire - >> "Connection: Keep-Alive[\r][\n]" 2017/05/05 10:55:46:628 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:628 PDT [DEBUG] wire - >> "38[\r][\n]" 2017/05/05 10:55:46:628 PDT [DEBUG] wire - >> "<http://example.org/s1> <http://example.org/p1> 'test1'." 2017/05/05 10:55:46:628 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:628 PDT [DEBUG] wire - >> "0[\r][\n]" 2017/05/05 10:55:46:628 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:635 PDT [DEBUG] wire - << "HTTP/1.1 204 Updated[\r][\n]" 2017/05/05 10:55:46:635 PDT [DEBUG] wire - << "Server: MarkLogic[\r][\n]" 2017/05/05 10:55:46:635 PDT [DEBUG] wire - << "Content-Length: 0[\r][\n]" 2017/05/05 10:55:46:635 PDT [DEBUG] wire - << "Connection: Keep-Alive[\r][\n]" 2017/05/05 10:55:46:635 PDT [DEBUG] wire - << "Keep-Alive: timeout=5[\r][\n]" 2017/05/05 10:55:46:635 PDT [DEBUG] wire - << "[\r][\n]" 2017/05/05 10:55:46:635 PDT [DEBUG] DefaultClientConnection - Receiving response: HTTP/1.1 204 Updated 2017/05/05 10:55:46:636 PDT [DEBUG] DefaultHttpClient - Connection can be kept alive for 5000 MILLISECONDS 2017/05/05 10:55:46:636 PDT [DEBUG] ThreadSafeClientConnManager - Released connection is reusable. 
2017/05/05 10:55:46:636 PDT [DEBUG] ConnPoolByRoute - Releasing connection [{}->http://localhost:8011][null] 2017/05/05 10:55:46:636 PDT [DEBUG] ConnPoolByRoute - Pooling connection [{}->http://localhost:8011][null]; keep alive for 5000 MILLISECONDS 2017/05/05 10:55:46:636 PDT [DEBUG] ConnPoolByRoute - Notifying no-one, there are no waiting threads 10:55:46.657 [main] INFO c.m.client.impl.DocumentManagerImpl - Writing content for SPARQLManagerTest.testConstrainingQueries/embededTriple.xml 10:55:46.658 [main] DEBUG c.m.client.impl.JerseyServices - Sending SPARQLManagerTest.testConstrainingQueries/embededTriple.xml multipart document in transaction null 2017/05/05 10:55:46:661 PDT [DEBUG] ThreadSafeClientConnManager - Get connection: {}->http://localhost:8011, timeout = 0 2017/05/05 10:55:46:661 PDT [DEBUG] ConnPoolByRoute - [{}->http://localhost:8011] total kept alive: 1, total issued: 0, total allocated: 1 out of 200 2017/05/05 10:55:46:661 PDT [DEBUG] ConnPoolByRoute - Getting free connection [{}->http://localhost:8011][null] 2017/05/05 10:55:46:661 PDT [DEBUG] DefaultHttpClient - Stale connection check 2017/05/05 10:55:46:663 PDT [DEBUG] RequestAddCookies - CookieSpec selected: ignoreCookies 2017/05/05 10:55:46:663 PDT [DEBUG] RequestAuthCache - Auth cache not set in the context 2017/05/05 10:55:46:663 PDT [DEBUG] RequestProxyAuthentication - Proxy auth state: UNCHALLENGED 2017/05/05 10:55:46:663 PDT [DEBUG] DefaultHttpClient - Attempt 1 to execute request 2017/05/05 10:55:46:663 PDT [DEBUG] DefaultClientConnection - Sending request: PUT /v1/documents?category=content&category=metadata&uri=SPARQLManagerTest.testConstrainingQueries/embededTriple.xml HTTP/1.1 2017/05/05 10:55:46:663 PDT [DEBUG] wire - >> "PUT /v1/documents?category=content&category=metadata&uri=SPARQLManagerTest.testConstrainingQueries/embededTriple.xml HTTP/1.1[\r][\n]" 2017/05/05 10:55:46:663 PDT [DEBUG] wire - >> "ML-Agent-ID: java[\r][\n]" 2017/05/05 10:55:46:663 PDT [DEBUG] wire - >> 
"Content-Type: multipart/mixed; boundary=Boundary_1_417797183_1494006946659[\r][\n]" 2017/05/05 10:55:46:664 PDT [DEBUG] wire - >> "Authorization: Digest username="rest-writer",realm="public",nonce="8ff3995ac607275a97c77723d20c1f15",opaque="1a0feacf356abab0",qop=auth,uri="/v1/documents",cnonce="8243bc2d",nc=00000004,response="e208dd7da59f7854f8f07900cea66a4d"[\r][\n]" 2017/05/05 10:55:46:664 PDT [DEBUG] wire - >> "Transfer-Encoding: chunked[\r][\n]" 2017/05/05 10:55:46:664 PDT [DEBUG] wire - >> "Host: localhost:8011[\r][\n]" 2017/05/05 10:55:46:664 PDT [DEBUG] wire - >> "Connection: Keep-Alive[\r][\n]" 2017/05/05 10:55:46:664 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:664 PDT [DEBUG] wire - >> "47[\r][\n]" 2017/05/05 10:55:46:664 PDT [DEBUG] wire - >> "--Boundary_1_417797183_1494006946659[\r][\n]" 2017/05/05 10:55:46:664 PDT [DEBUG] wire - >> "Content-Type: application/xml[\r][\n]" 2017/05/05 10:55:46:664 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:664 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:698 PDT [DEBUG] wire - >> "111[\r][\n]" 2017/05/05 10:55:46:698 PDT [DEBUG] wire - >> "<?xml version='1.0' encoding='utf-8'?><rapi:metadata xmlns:rapi="http://marklogic.com/rest-api" xmlns:prop="http://marklogic.com/xdmp/property"><rapi:collections><rapi:collection>SPARQLManagerTest.testConstrainingQueries</rapi:collection></rapi:collections></rapi:metadata>" 2017/05/05 10:55:46:698 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:698 PDT [DEBUG] wire - >> "49[\r][\n]" 2017/05/05 10:55:46:698 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:698 PDT [DEBUG] wire - >> "--Boundary_1_417797183_1494006946659[\r][\n]" 2017/05/05 10:55:46:698 PDT [DEBUG] wire - >> "Content-Type: application/xml[\r][\n]" 2017/05/05 10:55:46:698 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:698 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:699 PDT [DEBUG] wire - >> "161[\r][\n]" 2017/05/05 10:55:46:699 PDT [DEBUG] wire - >> 
"<xml><test2>testValue</test2><sem:triples xmlns:sem='http://marklogic.com/semantics'><sem:triple><sem:subject>http://example.org/s2</sem:subject><sem:predicate>http://example.org/p2</sem:predicate><sem:object datatype='http://www.w3.org/2001/XMLSchema#string'>test2</sem:object></sem:triple></sem:triples></xml>[\r][\n]" 2017/05/05 10:55:46:699 PDT [DEBUG] wire - >> "--Boundary_1_417797183_1494006946659--[\r][\n]" 2017/05/05 10:55:46:699 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:699 PDT [DEBUG] wire - >> "0[\r][\n]" 2017/05/05 10:55:46:699 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:701 PDT [DEBUG] wire - << "HTTP/1.1 204 Content Updated[\r][\n]" 2017/05/05 10:55:46:701 PDT [DEBUG] wire - << "Server: MarkLogic[\r][\n]" 2017/05/05 10:55:46:701 PDT [DEBUG] wire - << "Content-Length: 0[\r][\n]" 2017/05/05 10:55:46:701 PDT [DEBUG] wire - << "Connection: Keep-Alive[\r][\n]" 2017/05/05 10:55:46:701 PDT [DEBUG] wire - << "Keep-Alive: timeout=5[\r][\n]" 2017/05/05 10:55:46:701 PDT [DEBUG] wire - << "[\r][\n]" 2017/05/05 10:55:46:701 PDT [DEBUG] DefaultClientConnection - Receiving response: HTTP/1.1 204 Content Updated 2017/05/05 10:55:46:701 PDT [DEBUG] DefaultHttpClient - Connection can be kept alive for 5000 MILLISECONDS 2017/05/05 10:55:46:701 PDT [DEBUG] ThreadSafeClientConnManager - Released connection is reusable. 
2017/05/05 10:55:46:701 PDT [DEBUG] ConnPoolByRoute - Releasing connection [{}->http://localhost:8011][null] 2017/05/05 10:55:46:702 PDT [DEBUG] ConnPoolByRoute - Pooling connection [{}->http://localhost:8011][null]; keep alive for 5000 MILLISECONDS 2017/05/05 10:55:46:702 PDT [DEBUG] ConnPoolByRoute - Notifying no-one, there are no waiting threads 10:55:46.711 [main] DEBUG c.m.client.impl.JerseyServices - Posting /graphs/sparql 2017/05/05 10:55:46:713 PDT [DEBUG] ThreadSafeClientConnManager - Get connection: {}->http://localhost:8011, timeout = 0 2017/05/05 10:55:46:713 PDT [DEBUG] ConnPoolByRoute - [{}->http://localhost:8011] total kept alive: 1, total issued: 0, total allocated: 1 out of 200 2017/05/05 10:55:46:713 PDT [DEBUG] ConnPoolByRoute - Getting free connection [{}->http://localhost:8011][null] 2017/05/05 10:55:46:713 PDT [DEBUG] DefaultHttpClient - Stale connection check 2017/05/05 10:55:46:714 PDT [DEBUG] RequestAddCookies - CookieSpec selected: ignoreCookies 2017/05/05 10:55:46:714 PDT [DEBUG] RequestAuthCache - Auth cache not set in the context 2017/05/05 10:55:46:714 PDT [DEBUG] RequestProxyAuthentication - Proxy auth state: UNCHALLENGED 2017/05/05 10:55:46:714 PDT [DEBUG] DefaultHttpClient - Attempt 1 to execute request 2017/05/05 10:55:46:714 PDT [DEBUG] DefaultClientConnection - Sending request: POST /v1/graphs/sparql?default-rulesets=exclude&collection=SPARQLManagerTest.testConstrainingQueries HTTP/1.1 2017/05/05 10:55:46:714 PDT [DEBUG] wire - >> "POST /v1/graphs/sparql?default-rulesets=exclude&collection=SPARQLManagerTest.testConstrainingQueries HTTP/1.1[\r][\n]" 2017/05/05 10:55:46:714 PDT [DEBUG] wire - >> "Content-Type: application/xml[\r][\n]" 2017/05/05 10:55:46:714 PDT [DEBUG] wire - >> "Accept: application/json[\r][\n]" 2017/05/05 10:55:46:714 PDT [DEBUG] wire - >> "ML-Agent-ID: java[\r][\n]" 2017/05/05 10:55:46:715 PDT [DEBUG] wire - >> "Authorization: Digest 
username="rest-writer",realm="public",nonce="8ff3995ac607275a97c77723d20c1f15",opaque="1a0feacf356abab0",qop=auth,uri="/v1/graphs/sparql",cnonce="163de452",nc=00000005,response="9f4f607b15cf6fc8e7fa22e349cb86d8"[\r][\n]" 2017/05/05 10:55:46:715 PDT [DEBUG] wire - >> "Transfer-Encoding: chunked[\r][\n]" 2017/05/05 10:55:46:715 PDT [DEBUG] wire - >> "Host: localhost:8011[\r][\n]" 2017/05/05 10:55:46:715 PDT [DEBUG] wire - >> "Connection: Keep-Alive[\r][\n]" 2017/05/05 10:55:46:715 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:715 PDT [DEBUG] wire - >> "b2[\r][\n]" 2017/05/05 10:55:46:715 PDT [DEBUG] wire - >> "<?xml version='1.0' encoding='UTF-8'?><search xmlns="http://marklogic.com/appservices/search"><qtext>test1</qtext><sparql>select ?s ?p ?o { ?s ?p ?o } limit 100</sparql></search>" 2017/05/05 10:55:46:715 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:715 PDT [DEBUG] wire - >> "0[\r][\n]" 2017/05/05 10:55:46:715 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:723 PDT [DEBUG] wire - << "HTTP/1.1 200 OK[\r][\n]" 2017/05/05 10:55:46:723 PDT [DEBUG] wire - << "Content-type: application/json; charset=UTF-8[\r][\n]" 2017/05/05 10:55:46:723 PDT [DEBUG] wire - << "ML-Effective-Timestamp: 14940069467011536[\r][\n]" 2017/05/05 10:55:46:723 PDT [DEBUG] wire - << "Server: MarkLogic[\r][\n]" 2017/05/05 10:55:46:723 PDT [DEBUG] wire - << "Content-Length: 199[\r][\n]" 2017/05/05 10:55:46:723 PDT [DEBUG] wire - << "Connection: Keep-Alive[\r][\n]" 2017/05/05 10:55:46:723 PDT [DEBUG] wire - << "Keep-Alive: timeout=5[\r][\n]" 2017/05/05 10:55:46:723 PDT [DEBUG] wire - << "[\r][\n]" 2017/05/05 10:55:46:723 PDT [DEBUG] DefaultClientConnection - Receiving response: HTTP/1.1 200 OK 2017/05/05 10:55:46:723 PDT [DEBUG] DefaultHttpClient - Connection can be kept alive for 5000 MILLISECONDS 2017/05/05 10:55:46:723 PDT [DEBUG] wire - << 
"{"head":{"vars":["s","p","o"]},"results":{"bindings":[{"s":{"type":"uri","value":"http://example.org/s1"},"p":{"type":"uri","value":"http://example.org/p1"},"o":{"type":"literal","value":"test1"}}]}}" 2017/05/05 10:55:46:744 PDT [DEBUG] ThreadSafeClientConnManager - Released connection is reusable. 2017/05/05 10:55:46:744 PDT [DEBUG] ConnPoolByRoute - Releasing connection [{}->http://localhost:8011][null] 2017/05/05 10:55:46:744 PDT [DEBUG] ConnPoolByRoute - Pooling connection [{}->http://localhost:8011][null]; keep alive for 5000 MILLISECONDS 2017/05/05 10:55:46:744 PDT [DEBUG] ConnPoolByRoute - Notifying no-one, there are no waiting threads 10:55:46.762 [main] DEBUG c.m.client.impl.JerseyServices - Posting /graphs/sparql 2017/05/05 10:55:46:763 PDT [DEBUG] ThreadSafeClientConnManager - Get connection: {}->http://localhost:8011, timeout = 0 2017/05/05 10:55:46:763 PDT [DEBUG] ConnPoolByRoute - [{}->http://localhost:8011] total kept alive: 1, total issued: 0, total allocated: 1 out of 200 2017/05/05 10:55:46:763 PDT [DEBUG] ConnPoolByRoute - Getting free connection [{}->http://localhost:8011][null] 2017/05/05 10:55:46:763 PDT [DEBUG] DefaultHttpClient - Stale connection check 2017/05/05 10:55:46:764 PDT [DEBUG] RequestAddCookies - CookieSpec selected: ignoreCookies 2017/05/05 10:55:46:764 PDT [DEBUG] RequestAuthCache - Auth cache not set in the context 2017/05/05 10:55:46:764 PDT [DEBUG] RequestProxyAuthentication - Proxy auth state: UNCHALLENGED 2017/05/05 10:55:46:764 PDT [DEBUG] DefaultHttpClient - Attempt 1 to execute request 2017/05/05 10:55:46:764 PDT [DEBUG] DefaultClientConnection - Sending request: POST /v1/graphs/sparql?default-rulesets=exclude&collection=SPARQLManagerTest.testConstrainingQueries HTTP/1.1 2017/05/05 10:55:46:764 PDT [DEBUG] wire - >> "POST /v1/graphs/sparql?default-rulesets=exclude&collection=SPARQLManagerTest.testConstrainingQueries HTTP/1.1[\r][\n]" 2017/05/05 10:55:46:764 PDT [DEBUG] wire - >> "Content-Type: application/xml[\r][\n]" 
2017/05/05 10:55:46:764 PDT [DEBUG] wire - >> "Accept: application/json[\r][\n]" 2017/05/05 10:55:46:764 PDT [DEBUG] wire - >> "ML-Agent-ID: java[\r][\n]" 2017/05/05 10:55:46:764 PDT [DEBUG] wire - >> "Authorization: Digest username="rest-writer",realm="public",nonce="8ff3995ac607275a97c77723d20c1f15",opaque="1a0feacf356abab0",qop=auth,uri="/v1/graphs/sparql",cnonce="6acb5ff8",nc=00000006,response="f91f0b721156e19742eecbfcdc426f8b"[\r][\n]" 2017/05/05 10:55:46:765 PDT [DEBUG] wire - >> "Transfer-Encoding: chunked[\r][\n]" 2017/05/05 10:55:46:765 PDT [DEBUG] wire - >> "Host: localhost:8011[\r][\n]" 2017/05/05 10:55:46:765 PDT [DEBUG] wire - >> "Connection: Keep-Alive[\r][\n]" 2017/05/05 10:55:46:765 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:765 PDT [DEBUG] wire - >> "214[\r][\n]" 2017/05/05 10:55:46:765 PDT [DEBUG] wire - >> "<?xml version='1.0' encoding='UTF-8'?><search xmlns="http://marklogic.com/appservices/search"><sparql>select ?s ?p ?o { ?s ?p ?o } limit 100</sparql><query xmlns="http://marklogic.com/appservices/search" xmlns:search="http://marklogic.com/appservices/search" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><and-query><term-query><text>test2</text></term-query><value-query type="string"><element ns="" name="test2"/><text>testValue</text></value-query></and-query></query></search>" 2017/05/05 10:55:46:765 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:765 PDT [DEBUG] wire - >> "0[\r][\n]" 2017/05/05 10:55:46:765 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:772 PDT [DEBUG] wire - << "HTTP/1.1 200 OK[\r][\n]" 2017/05/05 10:55:46:772 PDT [DEBUG] wire - << "Content-type: application/json; charset=UTF-8[\r][\n]" 2017/05/05 10:55:46:772 PDT [DEBUG] wire - << "ML-Effective-Timestamp: 14940069467011536[\r][\n]" 2017/05/05 10:55:46:772 PDT [DEBUG] wire - << "Server: MarkLogic[\r][\n]" 2017/05/05 10:55:46:772 PDT [DEBUG] wire - << "Content-Length: 342[\r][\n]" 2017/05/05 10:55:46:772 
PDT [DEBUG] wire - << "Connection: Keep-Alive[\r][\n]" 2017/05/05 10:55:46:772 PDT [DEBUG] wire - << "Keep-Alive: timeout=5[\r][\n]" 2017/05/05 10:55:46:772 PDT [DEBUG] wire - << "[\r][\n]" 2017/05/05 10:55:46:773 PDT [DEBUG] DefaultClientConnection - Receiving response: HTTP/1.1 200 OK 2017/05/05 10:55:46:773 PDT [DEBUG] DefaultHttpClient - Connection can be kept alive for 5000 MILLISECONDS 2017/05/05 10:55:46:773 PDT [DEBUG] wire - << "{"head":{"vars":["s","p","o"]},"results":{"bindings":[{"s":{"type":"uri","value":"http://example.org/s1"},"p":{"type":"uri","value":"http://example.org/p1"},"o":{"type":"literal","value":"test1"}},{"s":{"type":"uri","value":"http://example.org/s2"},"p":{"type":"uri","value":"http://example.org/p2"},"o":{"type":"literal","value":"test2"}}]}}" 2017/05/05 10:55:46:774 PDT [DEBUG] ThreadSafeClientConnManager - Released connection is reusable. 2017/05/05 10:55:46:774 PDT [DEBUG] ConnPoolByRoute - Releasing connection [{}->http://localhost:8011][null] 2017/05/05 10:55:46:774 PDT [DEBUG] ConnPoolByRoute - Pooling connection [{}->http://localhost:8011][null]; keep alive for 5000 MILLISECONDS 2017/05/05 10:55:46:774 PDT [DEBUG] ConnPoolByRoute - Notifying no-one, there are no waiting threads 10:55:46.781 [main] DEBUG c.m.client.impl.JerseyServices - Deleting graphs 2017/05/05 10:55:46:783 PDT [DEBUG] ThreadSafeClientConnManager - Get connection: {}->http://localhost:8011, timeout = 0 2017/05/05 10:55:46:783 PDT [DEBUG] ConnPoolByRoute - [{}->http://localhost:8011] total kept alive: 1, total issued: 0, total allocated: 1 out of 200 2017/05/05 10:55:46:783 PDT [DEBUG] ConnPoolByRoute - Getting free connection [{}->http://localhost:8011][null] 2017/05/05 10:55:46:783 PDT [DEBUG] DefaultHttpClient - Stale connection check 2017/05/05 10:55:46:784 PDT [DEBUG] RequestAddCookies - CookieSpec selected: ignoreCookies 2017/05/05 10:55:46:784 PDT [DEBUG] RequestAuthCache - Auth cache not set in the context 2017/05/05 10:55:46:784 PDT [DEBUG] 
RequestProxyAuthentication - Proxy auth state: UNCHALLENGED 2017/05/05 10:55:46:784 PDT [DEBUG] DefaultHttpClient - Attempt 1 to execute request 2017/05/05 10:55:46:784 PDT [DEBUG] DefaultClientConnection - Sending request: DELETE /v1/graphs?graph=http://marklogic.com/java/SPARQLManagerTest HTTP/1.1 2017/05/05 10:55:46:784 PDT [DEBUG] wire - >> "DELETE /v1/graphs?graph=http://marklogic.com/java/SPARQLManagerTest HTTP/1.1[\r][\n]" 2017/05/05 10:55:46:784 PDT [DEBUG] wire - >> "ML-Agent-ID: java[\r][\n]" 2017/05/05 10:55:46:784 PDT [DEBUG] wire - >> "Authorization: Digest username="rest-writer",realm="public",nonce="8ff3995ac607275a97c77723d20c1f15",opaque="1a0feacf356abab0",qop=auth,uri="/v1/graphs",cnonce="f43e4aa4",nc=00000007,response="fd3defccd442cbd458a8aa0c9629f381"[\r][\n]" 2017/05/05 10:55:46:784 PDT [DEBUG] wire - >> "Host: localhost:8011[\r][\n]" 2017/05/05 10:55:46:784 PDT [DEBUG] wire - >> "Connection: Keep-Alive[\r][\n]" 2017/05/05 10:55:46:784 PDT [DEBUG] wire - >> "[\r][\n]" 2017/05/05 10:55:46:787 PDT [DEBUG] wire - << "HTTP/1.1 204 Updated[\r][\n]" 2017/05/05 10:55:46:787 PDT [DEBUG] wire - << "Server: MarkLogic[\r][\n]" 2017/05/05 10:55:46:787 PDT [DEBUG] wire - << "Content-Length: 0[\r][\n]" 2017/05/05 10:55:46:787 PDT [DEBUG] wire - << "Connection: Keep-Alive[\r][\n]" 2017/05/05 10:55:46:787 PDT [DEBUG] wire - << "Keep-Alive: timeout=5[\r][\n]" 2017/05/05 10:55:46:787 PDT [DEBUG] wire - << "[\r][\n]" 2017/05/05 10:55:46:787 PDT [DEBUG] DefaultClientConnection - Receiving response: HTTP/1.1 204 Updated 2017/05/05 10:55:46:788 PDT [DEBUG] DefaultHttpClient - Connection can be kept alive for 5000 MILLISECONDS 2017/05/05 10:55:46:788 PDT [DEBUG] ThreadSafeClientConnManager - Released connection is reusable. 
2017/05/05 10:55:46:788 PDT [DEBUG] ConnPoolByRoute - Releasing connection [{}->http://localhost:8011][null] 2017/05/05 10:55:46:788 PDT [DEBUG] ConnPoolByRoute - Pooling connection [{}->http://localhost:8011][null]; keep alive for 5000 MILLISECONDS 2017/05/05 10:55:46:788 PDT [DEBUG] ConnPoolByRoute - Notifying no-one, there are no waiting threads

Updated 05/06/2017 20:28

Discrepancy in behavior when one of the nodes is shut down

marklogic/java-client-api

From https://github.com/marklogic/data-movement/issues/59

@srinathgit: Scenario 1 :

Three-node cluster with a forest on each node.

1. Disable one of the forests.
2. Start the job.
3. Stop the node on which the disabled forest resides while inserts are happening.

Documents are written to the host. The host whose forest is disabled routes the documents to other hosts where a forest is available. When the node with the disabled forest is brought down, the batches destined for that host are marked as failures. The database immediately becomes unavailable and no more inserts take place until the node is brought back up.

Scenario 2:

Three-node cluster with a forest on each node.

1. Take one of the forests offline.
2. Start the job.
3. Stop the node on which the offline forest resides while inserts are happening.

Documents are written to the host. The host whose forest is offline routes the documents to other hosts where a forest is available. When the node with the offline forest is brought down, the batches destined for that host are marked as failures while documents continue to be written to the other nodes. This continues for roughly 90 seconds after the node goes down; after that, no more inserts take place until the node is brought back up. As a result, many more batches fail during that interim period.

1. This may be due to server behavior; I just want to confirm whether it is expected.
2. Also, in either case, the DMSDK application hangs indefinitely waiting for a response from the server (it was still alive after at least 20 minutes when I checked). The stack trace of the main thread is provided below.

Name: main
State: RUNNABLE
Total blocked: 0  Total waited: 22

Stack trace: 
java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
java.net.SocketInputStream.read(SocketInputStream.java:170)
java.net.SocketInputStream.read(SocketInputStream.java:141)
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:149)
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:110)
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:260)
org.apache.http.impl.conn.LoggingSessionInputBuffer.readLine(LoggingSessionInputBuffer.java:115)
org.apache.http.impl.conn.DefaultResponseParser.parseHead(DefaultResponseParser.java:98)
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:252)
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:281)
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:247)
org.apache.http.impl.conn.AbstractClientConnAdapter.receiveResponseHeader(AbstractClientConnAdapter.java:219)
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:298)
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:633)
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:454)
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:776)
com.sun.jersey.client.apache4.ApacheHttpClient4Handler.handle(ApacheHttpClient4Handler.java:170)
com.marklogic.client.impl.DigestChallengeFilter.handle(DigestChallengeFilter.java:34)
com.sun.jersey.api.client.filter.HTTPDigestAuthFilter.handle(HTTPDigestAuthFilter.java:493)
com.sun.jersey.api.client.Client.handle(Client.java:648)
com.sun.jersey.api.client.WebResource.handle(WebResource.java:680)
com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:568)
com.marklogic.client.impl.JerseyServices.doPost(JerseyServices.java:4338)
com.marklogic.client.impl.JerseyServices.postResource(JerseyServices.java:3564)
com.marklogic.client.impl.JerseyServices.postBulkDocuments(JerseyServices.java:3677)
com.marklogic.client.impl.DocumentManagerImpl.write(DocumentManagerImpl.java:564)
com.marklogic.datamovement.impl.WriteHostBatcherImpl.flushBatch(WriteHostBatcherImpl.java:164)
   - locked com.marklogic.datamovement.impl.WriteHostBatcherImpl@4a4a492
com.marklogic.datamovement.impl.WriteHostBatcherImpl.add(WriteHostBatcherImpl.java:105)
   - locked com.marklogic.datamovement.impl.BatchWriteSet@767e8f3c
com.marklogic.datamovement.impl.WriteHostBatcherImpl.add(WriteHostBatcherImpl.java:89)
com.marklogic.datamovement.functionaltests.WriteHostBatcherTest.testDisableDBDuringInsert(WriteHostBatcherTest.java:1684)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192) 
Updated 31/07/2017 23:14

Denormalized sub-entities

marklogic/entity-services

When putting entity instances into documents, one always chooses one to be the root, and others to embed or denormalize into the document structure.

Entity Services in MarkLogic 9 does not provide this vocabulary, but repeated use of the code generation indicates that it could be automated significantly with this extra concept.

  1. Entry point of a module is known.
  2. Expectations of whether a reference property contains an embedded instance or a reference to an external one.
  3. Refinement of extraction template to require less customization.
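For illustration, with hypothetical Customer/Order entities (not from this issue), the two shapes a reference property can take in an instance document look like this; the proposed vocabulary would let the model state which shape the generated code should expect:

```json
{
  "embedded": {
    "Order": {
      "id": "order-1",
      "customer": { "Customer": { "id": "c-42", "name": "Alice" } }
    }
  },
  "externalReference": {
    "Order": {
      "id": "order-1",
      "customer": "/customers/c-42.json"
    }
  }
}
```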
Updated 06/07/2017 17:31 3 Comments

Handling NPE withBlackList and withWhiteList methods

marklogic/java-client-api

FilteredForestConfiguration.withBlackList and withWhiteList throw a raw NullPointerException when passed null; they should handle null arguments, or reject them with a descriptive message.
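A sketch of the argument validation this issue asks for. FilterConfig is a stand-in class, not the actual FilteredForestConfiguration source:

```java
import java.util.Arrays;
import java.util.Objects;

// Stand-in for the real builder: reject null (and null elements) with a
// descriptive IllegalArgumentException instead of letting a bare
// NullPointerException escape from deep inside the method.
public class FilterConfig {
    private String[] blackList = new String[0]; // kept for illustration

    public FilterConfig withBlackList(String... hosts) {
        if (hosts == null) {
            throw new IllegalArgumentException("hosts must not be null");
        }
        if (Arrays.stream(hosts).anyMatch(Objects::isNull)) {
            throw new IllegalArgumentException("hosts must not contain null entries");
        }
        this.blackList = hosts.clone();
        return this;
    }
}
```

The same pattern would apply to withWhiteList.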

Test:

@Test
public void testWhiteList() throws Exception {
    Assume.assumeTrue(hostNames.length > 1);

    final String query1 = "fn:count(fn:doc())";

    DocumentMetadataHandle meta6 = new DocumentMetadataHandle().withCollections("NoHost").withQuality(0);

    Assert.assertTrue(dbClient.newServerEval().xquery(query1).eval().next().getNumber().intValue() == 0);

    WriteBatcher ihb2 = dmManager.newWriteBatcher();

    // Throws the NullPointerException below instead of rejecting the null argument
    FilteredForestConfiguration forestConfig = new FilteredForestConfiguration(dmManager.readForestConfig())
            .withBlackList(null);
}

Exception:

java.lang.NullPointerException
    at com.marklogic.client.datamovement.FilteredForestConfiguration.withBlackList(FilteredForestConfiguration.java:172)
    at com.marklogic.client.datamovement.functionaltests.WriteHostBatcherTest.testWhiteList(WriteHostBatcherTest.java:2651)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
Updated 28/07/2017 21:01

Final touches on SDK for Release Candidate 1

RestComm/restcomm-ios-sdk

I quickly copied this from the Android SDK, so many of the items will have to change:

Review all aspects of the SDK and make sure it is GA quality. Some things to check:

- API is in good shape
- CI/CD facilities are complete and 100% functional from Travis CI
- UI/Integration/Unit testing is complete with 100% passing tests; automated testing on real devices (using an external service like Amazon's device farm) would also be great
- Interoperability testing between all SDKs would be a plus, but not sure if it can be realized yet, as the rest of the SDKs might not yet be ready
- Documentation is up to date, with some improvements so that it is easier to read (also, all documentation should reside in this repo, not Restcomm-Connect):
  - Reference documentation
  - Quick start guide
  - User guide for Olympus
- Olympus app and Hello World sample app are both functional. Olympus should be verified by UI tests, but not sure about Hello World; maybe we could introduce a rudimentary set of UI tests for it as well
- Libraries in the RC should all be release builds and as lightweight as possible
- All dependencies are stored in Sonatype and visible to the public. The repository should have no local dependencies.

Also do a round of very thorough testing:

- Do some more testing on Notifications functionality + integrations with Android Contacts + Calls, as they aren't tested enough as far as I know
- Test UI/UX aspects to fix any leftovers that might have slipped our attention
- Manually test interoperability in various types of calls within Android Olympus:
  - Video -> Video
  - Video -> Audio
  - Audio -> Video
  - Audio -> Audio
- Manually test interoperability between Android, iOS and Web Olympus

Updated 26/06/2017 18:15

Flr 294

LigaData/Kamanja

Used the MetadataAPISerialization class to generate the JSON corresponding to the GetAll API functions. For adapters, the JSON result will look something like this:

"Adapters": [
  {
    "Adapter": {
      "Name": "MedicalInput",
      "TypeString": "Input",
      "ClassName": "com.ligadata.kafkaInputOutputAdapters_v10.KamanjaKafkaConsumer$",
      "JarName": "kamanjakafkaadapters_0_10_2.11-1.6.1.jar",
      "DependencyJars": [
        "kafka-clients-0.10.0.1.jar",
        "KamanjaInternalDeps_2.11-1.6.1.jar",
        "ExtDependencyLibs_2.11-1.6.1.jar",
        "ExtDependencyLibs2_2.11-1.6.1.jar"
      ],
      "AdapterSpecificCfg": "{\"HostList\":\"localhost:9092\",\"TopicName\":\"medicalinput\"}",
      "TenantId": "tenant1",
      "FullAdapterConfig": "{\"TenantId\":\"tenant1\",\"DependencyJars\":[\"kafka-clients-0.10.0.1.jar\",\"KamanjaInternalDeps_2.11-1.6.1.jar\",\"ExtDependencyLibs_2.11-1.6.1.jar\",\"ExtDependencyLibs2_2.11-1.6.1.jar\"],\"ClassName\":\"com.ligadata.kafkaInputOutputAdapters_v10.KamanjaKafkaConsumer$\",\"Name\":\"MedicalInput\",\"AdapterSpecificCfg\":{\"HostList\":\"localhost:9092\",\"TopicName\":\"medicalinput\"},\"TypeString\":\"Input\",\"JarName\":\"kamanjakafkaadapters_0_10_2.11-1.6.1.jar\"}"
    }
  },
  {
    "Adapter": {
      "Name": "Storage_1",
      "TypeString": "Storage",
      "ClassName": "",
      "JarName": "",
      "DependencyJars": [],
      "AdapterSpecificCfg": "",
      "TenantId": "tenant1",
      "FullAdapterConfig": "{\"TenantId\":\"tenant1\",\"Location\":\"/media/home3/installKamanja161/Kamanja-1.6.1_2.11/storage/tenant1_storage_1\",\"Name\":\"Storage_1\",\"portnumber\":\"9100\",\"StoreType\":\"h2db\",\"TypeString\":\"Storage\",\"connectionMode\":\"embedded\",\"SchemaName\":\"testdata\",\"user\":\"test\",\"password\":\"test\"}"
    }
  }
]

Updated 25/06/2017 05:55

Issue1357 161

LigaData/Kamanja

I had addressed this issue in 1.5.1, but somehow the fix didn't make it into later versions. I have merged the changes into the 1.6.1 branch and done some unit testing. Let me know if I need to improve the messages further.

Updated 25/06/2017 06:28

Issue1123

LigaData/Kamanja

For Java/Scala models, the package name specified within the model code should match the namespace of the model name. Otherwise, the Add/Update Model operations should fail.
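The proposed check could be sketched as follows. ModelPackageCheck and packageMatchesNamespace are hypothetical names for illustration, not Kamanja code:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Compare the package declaration extracted from Java/Scala model source
// with the namespace part of the full model name (namespace.name).
public class ModelPackageCheck {
    private static final Pattern PKG = Pattern.compile("(?m)^\\s*package\\s+([\\w.]+)");

    static boolean packageMatchesNamespace(String source, String modelName) {
        int dot = modelName.lastIndexOf('.');
        String nameSpace = dot < 0 ? "" : modelName.substring(0, dot);
        Matcher m = PKG.matcher(source);
        // fail if no package declaration is found or it differs from the namespace
        return m.find() && m.group(1).equals(nameSpace);
    }
}
```

An Add/Update Model operation would reject the model when this check returns false.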

Updated 25/06/2017 06:28

EvalTest fails in develop branch.

marklogic/java-client-api

The following test fails in the nightly regression run against the EA4 server build. I don't see this test failing in the Jenkins environment, where the server build used is the 9.0 trunk.

<testsuite tests="6" failures="1" name="com.marklogic.client.test.EvalTest" time="4.448" errors="0" skipped="0">
...
...

<testcase classname="com.marklogic.client.test.EvalTest" name="test_582_need_privilege" time="0.006">
    <failure message="a FailedRequestException should have been thrown since rest_admin doesn't have eval privileges" type="java.lang.AssertionError">java.lang.AssertionError: a FailedRequestException should have been thrown since rest_admin doesn't have eval privileges
    at org.junit.Assert.fail(Assert.java:88)
    at com.marklogic.client.test.EvalTest.test_582_need_privilege(EvalTest.java:463)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
    at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
    at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
</failure>
Updated 15/06/2017 00:57 7 Comments

Job not getting stopped when number of available hosts < 'minHosts' property

marklogic/java-client-api

This issue was observed with a specific forest configuration, described below. The test was run on a 3-node cluster (rh7v-intel64-90-test-4/5/6.marklogic.com) with one forest (WriteBatcher-1, 2, 3) on each node, all associated with a database. 'WriteBatcher-1' is not configured for failover. 'WriteBatcher-3' is configured to fail over to host 'rh7v-intel64-90-test-5.marklogic.com'. 'WriteBatcher-2' is configured to fail over to host 'rh7v-intel64-90-test-4.marklogic.com'.

  1. While the 'ihb2' WriteBatcher job is executing, node rh7v-intel64-90-test-6.marklogic.com is stopped first:

21:16:24.885 [main] ERROR c.m.c.d.HostAvailabilityListener - ERROR: host unavailable "rh7v-intel64-90-test-6.marklogic.com", black-listing it for PT15S

The forest fails over to 'rh7v-intel64-90-test-5.marklogic.com'. Writing documents to the database resumes once the failover is complete.

  2. Next, 'rh7v-intel64-90-test-5.marklogic.com' is stopped. It gets blacklisted:

21:17:02.508 [main] ERROR c.m.c.d.HostAvailabilityListener - ERROR: host unavailable "rh7v-intel64-90-test-5", black-listing it for PT15S

  3. After that, the job is stopped because the number of available hosts < minHosts:

21:17:02.772 [pool-1-thread-1] ERROR c.m.c.d.HostAvailabilityListener - Encountered [com.sun.jersey.api.client.ClientHandlerException: org.apache.http.NoHttpResponseException: The target server failed to respond] on host "rh7v-intel64-90-test-5.marklogic.com" but black-listing it would drop job below minHosts (2), so stopping job "unnamed".

  4. Despite that, retrying of failed batches keeps running indefinitely:

21:17:02.550 [main] WARN c.m.c.d.HostAvailabilityListener - Retrying failed batch: 132, results so far: 2640, uris: [/local/ABC-2620, /local/ABC-2621, /local/ABC-2622, /local/ABC-2623, /local/ABC-2624, /local/ABC-2625, /local/ABC-2626, /local/ABC-2627, /local/ABC-2628, /local/ABC-2629, /local/ABC-2630, /local/ABC-2631, /local/ABC-2632, /local/ABC-2633, /local/ABC-2634, /local/ABC-2635, /local/ABC-2636, /local/ABC-2637, /local/ABC-2638, /local/ABC-2639]

  5. The client process was killed after some time; the client logs and stack trace have been attached. Client log Stack trace
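The minHosts decision the listener logs above ("black-listing it would drop job below minHosts (2), so stopping job") boils down to a rule like the following sketch (illustrative only, not the HostAvailabilityListener source). The bug is that after STOP_JOB is chosen, retries should stop too, but here they keep running:

```java
// Minimal model of the minHosts rule: black-listing another host is only
// allowed while enough hosts would remain afterwards; otherwise the job
// must be stopped, and no further retries should be scheduled.
public class MinHostsRule {
    enum Action { BLACKLIST, STOP_JOB }

    static Action onHostFailure(int availableHosts, int minHosts) {
        // removing this host would leave (availableHosts - 1) hosts
        return (availableHosts - 1) >= minHosts ? Action.BLACKLIST : Action.STOP_JOB;
    }
}
```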

Test:

@Test
public void testFailOver() throws Exception {
    try {
        final String query1 = "fn:count(fn:doc())";

        final AtomicInteger successCount = new AtomicInteger(0);
        final MutableBoolean failState = new MutableBoolean(false);
        final AtomicInteger failCount = new AtomicInteger(0);

        WriteBatcher ihb2 = dmManager.newWriteBatcher();
        ihb2.withBatchSize(20);
        //ihb2.withThreadCount(120);

        ihb2.setBatchFailureListeners(
                new HostAvailabilityListener(dmManager)
                    .withSuspendTimeForHostUnavailable(Duration.ofSeconds(15))
                    .withMinHosts(2)
                );
        ihb2.onBatchSuccess(
                batch -> {
                    successCount.addAndGet(batch.getItems().length);
                    System.out.println("Success Host: " + batch.getClient().getHost());
                    System.out.println("Success batch number: " + batch.getJobBatchNumber());
                    System.out.println("Success Job writes so far: " + batch.getJobWritesSoFar());
                })
            .onBatchFailure(
                (batch, throwable) -> {
                    System.out.println("Failed batch number: " + batch.getJobBatchNumber());
                    /*try {
                        System.out.println("Retrying batch: " + batch.getJobBatchNumber());
                        ihb2.retry(batch);
                    }
                    catch (Exception e) {
                        System.out.println("Retry of batch " + batch.getJobBatchNumber() + " failed");
                        e.printStackTrace();
                    }*/
                    throwable.printStackTrace();
                    failState.setTrue();
                    failCount.addAndGet(batch.getItems().length);
                });

        dmManager.startJob(ihb2);

        for (int j = 0; j < 20000; j++) {
            String uri = "/local/ABC-" + j;
            ihb2.add(uri, stringHandle);
        }

        ihb2.flushAndWait();

        System.out.println("Fail : " + failCount.intValue());
        System.out.println("Success : " + successCount.intValue());
        System.out.println("Count : " + dbClient.newServerEval().xquery(query1).eval().next().getNumber().intValue());

        Assert.assertTrue(dbClient.newServerEval().xquery(query1).eval().next().getNumber().intValue() == 20000);
    }
    catch (Exception e) {
        e.printStackTrace();
    }
}
Updated 31/07/2017 23:09 8 Comments

Job hangs when available hosts < min hosts instead of terminating

marklogic/java-client-api
  1. This test was run on a 3-node cluster (rh7v-intel64-90-test-4/5/6.marklogic.com) with one forest (ApplyTransform1, 2, 3) on each node, associated with a database.
  2. While the 'batcher' QueryBatcher job is executing, nodes rh7v-intel64-90-test-5/6.marklogic.com are stopped one after the other. The failure listener is configured as: new HostAvailabilityListener(dmManager).withSuspendTimeForHostUnavailable(Duration.ofSeconds(15)).withMinHosts(2)
  3. From the log, it can be seen that rh7v-intel64-90-test-6.marklogic.com is blacklisted, but once rh7v-intel64-90-test-5.marklogic.com is stopped, it is not blacklisted.
  4. Instead, the process hangs forever. The stack trace and client logs are attached.
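To keep a hang like this from blocking a test run forever, the blocking wait can be bounded with a timeout. A plain-Java sketch; awaitOrTimeout and the timeout value are hypothetical, not part of the DMSDK API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class AwaitWithTimeout {
    // Run a potentially hanging blocking call (e.g. batcher.awaitCompletion())
    // on another thread and bound the wait. Returns true if the call finished
    // within the timeout, false if it is still blocked (a likely hang).
    static boolean awaitOrTimeout(Runnable blockingCall, long timeoutSeconds) {
        CompletableFuture<Void> done = CompletableFuture.runAsync(blockingCall);
        try {
            done.get(timeoutSeconds, TimeUnit.SECONDS);
            return true;
        } catch (TimeoutException e) {
            done.cancel(true); // give up; fail the test with diagnostics instead of hanging
            return false;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

A test can then fail fast and dump stacks when the batcher hangs, instead of requiring a manual jstack and kill.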

Client log jstack.txt

    @Test
    public void xQueryMasstransformReplace() throws Exception{

        WriteBatcher ihb2 =  dmManager.newWriteBatcher();
        ihb2.withBatchSize(27).withThreadCount(10);
        ihb2.onBatchSuccess(batch -> {
                    // no-op: success of the setup writes is not tracked
                })
                .onBatchFailure((batch, throwable) -> {
                    throwable.printStackTrace();
                });

        dmManager.startJob(ihb2);

        for (int j = 0; j < 2000; j++) {
            String uri = "/local/string-" + j;
            ihb2.add(uri, meta2, stringHandle);
        }

        ihb2.flushAndWait();

        ServerTransform transform = new ServerTransform("add-attr-xquery-transform");
        transform.put("name", "Lang");
        transform.put("value", "English");

        AtomicInteger skipped = new AtomicInteger(0);
        AtomicInteger success = new AtomicInteger(0);
        AtomicInteger failure = new AtomicInteger(0);

        ApplyTransformListener listener = new ApplyTransformListener()
                .withTransform(transform)
                .withApplyResult(ApplyResult.REPLACE)
                .onSuccess(batch -> {
                    success.addAndGet(batch.getItems().length);
                })
                .onBatchFailure((batch, throwable) -> {
                    failure.addAndGet(batch.getItems().length);
                    throwable.printStackTrace();
                })
                .onSkipped(batch -> {
                    skipped.addAndGet(batch.getItems().length);
                });

        QueryBatcher batcher = dmManager.newQueryBatcher(new StructuredQueryBuilder().collection("XmlTransform"))
                .onUrisReady(listener).withBatchSize(7);
        batcher.setQueryFailureListeners(
                  new HostAvailabilityListener(dmManager)
                    .withSuspendTimeForHostUnavailable(Duration.ofSeconds(15))
                    .withMinHosts(2)
                );  
        JobTicket ticket = dmManager.startJob( batcher );
        batcher.awaitCompletion();
        dmManager.stopJob(ticket);
        System.out.println("Success "+ success.intValue());
        System.out.println("Failure "+failure.intValue());
        String[] uris = new String[2000];
        for (int i = 0; i < 2000; i++) {
            uris[i] = "/local/string-" + i;
        }
        int count = 0;
        DocumentPage page = dbClient.newDocumentManager().read(uris);
        DOMHandle dh = new DOMHandle();
        while (page.hasNext()) {
            DocumentRecord rec = page.next();
            rec.getContent(dh);
            assertTrue("Element has attribute?", dh.get().getElementsByTagName("foo").item(0).hasAttributes());
            assertEquals("Attribute value should be English", "English", dh.get().getElementsByTagName("foo").item(0).getAttributes().item(0).getNodeValue());
            count++;
        }

        assertEquals("document count", 2000, count);
        assertEquals("success count", 2000, success.intValue());
        assertEquals("skipped count", 0, skipped.intValue());
    }
Updated 31/07/2017 23:12 4 Comments

Tests fail if a go-ipfs daemon is already running

ipfs/go-ipfs

Version information:

Current master: https://github.com/ipfs/go-ipfs/commit/a542dea5d

Type:

Bug

Priority:

P1

Description:

If an ipfs daemon is active, TEST_NO_FUSE=1 make test fails like this. (Tested only under the same user account; not sure what happens if the daemon runs under a dedicated/separate user but still occupies the usual ports.)
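A possible pre-flight guard for the test suite: abort before running if something is already listening on the daemon's default API port. The port number (5001) and the use of nc are assumptions about an unmodified configuration and environment:

```shell
#!/bin/sh
# Fail fast when another daemon already holds the default ipfs API port,
# instead of letting the whole suite fail confusingly.
if nc -z 127.0.0.1 5001 2>/dev/null; then
  echo "a daemon is already listening on 127.0.0.1:5001; stop it before 'make test'" >&2
  exit 1
fi
echo "port 5001 is free; safe to run the tests"
```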

Updated 28/07/2017 23:47 1 Comments

HostAvailabilityListener with QueryBatcher results in incorrect behavior

marklogic/java-client-api
  1. The test used here is the same as referenced in #568. It queries all documents in collection "Replace Snapshot", applies a transform to them, and finally deletes the documents, all in a single QueryBatcher instance.
  2. This test was run on a 3-node cluster (rh7v-intel64-90-test-4/5/6.marklogic.com) with one forest (ApplyTransform1, 2, 3) on each node, associated with a database.
  3. The forest on node rh7v-intel64-90-test-6.marklogic.com has been configured for shared-disk failover to failover host rh7v-intel64-90-test-4.marklogic.com.
  4. While the 'batcher' QueryBatcher job is executing, node rh7v-intel64-90-test-6.marklogic.com is stopped.
  5. After 30 seconds, the forest fails over to rh7v-intel64-90-test-4.marklogic.com and the database once again becomes available.
  6. But from the logs, it can be seen that the document with URI "/local/snapshot-0" has not been deleted from the database (this was confirmed via QConsole as well).

Client log

Stack trace:

java.lang.AssertionError
    at org.junit.Assert.fail(Assert.java:86)
    at org.junit.Assert.assertTrue(Assert.java:41)
    at org.junit.Assert.assertFalse(Assert.java:64)
    at org.junit.Assert.assertFalse(Assert.java:74)
    at com.marklogic.client.datamovement.functionaltests.ApplyTransformTest.jsMasstransformReplaceDelete(ApplyTransformTest.java:669)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
Updated 31/07/2017 23:18 1 Comments

Process hang with QueryBatcher during forest failover

marklogic/java-client-api
  1. The following test is performed on a 3-node cluster with a database associated with 3 forests:

18:07:20.277 [main] INFO c.m.c.d.impl.WriteBatcherImpl - (withForestConfig) Using [rh7v-intel64-90-test-5.marklogic.com, rh7v-intel64-90-test-6.marklogic.com, rh7v-intel64-90-test-4.marklogic.com] hosts with forests for "ApplyTransform"
18:07:20.807 [main] INFO c.m.c.d.impl.WriteBatcherImpl - Adding DatabaseClient on port 8000 for host "rh7v-intel64-90-test-5.marklogic.com" to the rotation
18:07:20.807 [main] DEBUG c.m.client.impl.JerseyServices - Connecting to rh7v-intel64-90-test-6.marklogic.com at 8000 as admin
18:07:21.166 [main] INFO c.m.c.d.impl.WriteBatcherImpl - Adding DatabaseClient on port 8000 for host "rh7v-intel64-90-test-6.marklogic.com" to the rotation
18:07:21.166 [main] INFO c.m.c.d.impl.WriteBatcherImpl - Adding DatabaseClient on port 8000 for host "rh7v-intel64-90-test-4.marklogic.com" to the rotation

  2. While the transformation is taking place, host "rh7v-intel64-90-test-6.marklogic.com" is stopped:

21:40:26.896 [pool-1-thread-1] ERROR c.m.c.d.HostAvailabilityListener - ERROR: host unavailable "rh7v-intel64-90-test-6.smarklogic.com", black-listing it for PT50S

Server error log:

2016-11-27 18:10:24.170 Info: Stopping XDQPServerConnection, client=rh7v-intel64-90-test-6.marklogic.com, conn=172.18.132.40:7999-172.18.132.42:4380

  3. Before forest failover occurs (which takes place 30 seconds after node shutdown), node "rh7v-intel64-90-test-6.marklogic.com" is restarted.

Error log:

2016-11-27 18:10:38.135 Info: Starting XDQPClientConnection, server=rh7v-intel64-90-test-6.marklogic.com, conn=172.18.132.40:38312-172.18.132.42:7999
2016-11-27 18:10:38.138 Info: Starting XDQPClientConnection, server=rh7v-intel64-90-test-6.marklogic.com, conn=172.18.132.40:38314-172.18.132.42:7999
2016-11-27 18:10:38.139 Debug: Retrying AppRequestTask::handleEvalLocked apply-transform.xqy 4333219226342543433 Update 11 because XDMP-XDQPDISC: XDQP connection disconnected, server=rh7v-intel64-90-test-6.marklogic.com
2016-11-27 18:10:38.142 Info: Starting domestic XDQPServerConnection, client=rh7v-intel64-90-test-6.marklogic.com, conn=172.18.132.40:7999-172.18.132.42:4388
2016-11-27 18:10:42.942 Info: Mounted forest ApplyTransform-3 remotely on rh7v-intel64-90-test-6.marklogic.com
2016-11-27 18:10:45.494 Info: Database ApplyTransform is online with 3 forests

  4. After 15 seconds elapse, the following message is seen in the client log:

18:10:44.378 [pool-3-thread-1] INFO  c.m.c.d.HostAvailabilityListener - it's been PT15S since host rh7v-intel64-90-test-6.marklogic.com failed, opening communication to all server hosts [[rh7v-intel64-90-test-4.marklogic.com, rh7v-intel64-90-test-5.marklogic.com, rh7v-intel64-90-test-6.marklogic.com]]
18:10:44.378 [pool-3-thread-1] INFO  c.m.c.d.impl.QueryBatcherImpl - (withForestConfig) Using [rh7v-intel64-90-test-5.marklogic.com, rh7v-intel64-90-test-6.marklogic.com, rh7v-intel64-90-test-4.marklogic.com] hosts with forests for "ApplyTransform"

  5. The server is stopped again, and now forest failover takes place.

Error log:

2016-11-27 18:10:48.319 Info: Stopping XDQPServerConnection, client=rh7v-intel64-90-test-6.marklogic.com, conn=172.18.132.40:7999-172.18.132.42:4398, requests=0, recvTics=0, sendTics=1, recvs=566, sends=517, recvBytes=87352, sendBytes=204140
2016-11-27 18:11:19.971 Info: Database ApplyTransform is offline
2016-11-27 18:11:19.974 Info: Unmounted forest ApplyTransform-3 because disconnected host
2016-11-27 18:11:19.982 Notice: Failing over forest ApplyTransform-3 because the host rh7v-intel64-90-test-6.marklogic.com has gone offline
2016-11-27 18:11:19.985 Info: Forest ApplyTransform-3 state changed from unmounted to mounted
2016-11-27 18:11:20.112 Info: Forest ApplyTransform-3 state changed from mounted to recovering
2016-11-27 18:11:20.118 Info: Forest ApplyTransform-3 state changed from recovering to open

  6. The client process has been hanging since 18:11:27.207. The server was started later.

Error log:

2016-11-27 18:11:31.532 Info: Starting domestic XDQPServerConnection, client=rh7v-intel64-90-test-6.marklogic.com, conn=172.18.132.40:7999-172.18.132.42:4400

The entire client and server log files, as well as stack traces taken at two different times, are attached: exception.txt errorlog.txt jstack.txt jstack1.txt

I am guessing a similar scenario could occur with WriteBatcher as well during failover.

Test:

@Test
    public void xQueryMasstransformReplace() throws Exception{

        WriteBatcher ihb2 = dmManager.newWriteBatcher();
        ihb2.withBatchSize(27).withThreadCount(10);
        ihb2.onBatchSuccess(batch -> {
                // no-op: successes are not tracked in this test
            })
            .onBatchFailure((batch, throwable) -> {
                throwable.printStackTrace();
            });

        dmManager.startJob(ihb2);

        for (int j =0 ;j < 2000; j++){
            String uri ="/local/string-"+ j;
            ihb2.add(uri, meta2, stringHandle);
        }

        ihb2.flushAndWait();

        ServerTransform transform = new ServerTransform("add-attr-xquery-transform");
        transform.put("name", "Lang");
        transform.put("value", "English");

        AtomicInteger skipped = new AtomicInteger(0);
        AtomicInteger success = new AtomicInteger(0);
        AtomicInteger failure = new AtomicInteger(0);

        ApplyTransformListener listener = new ApplyTransformListener()
                .withTransform(transform)
                .withApplyResult(ApplyResult.REPLACE)
                .onSuccess(batch -> {
                    success.addAndGet(batch.getItems().length);
                })
                .onBatchFailure((batch, throwable) -> {
                    failure.addAndGet(batch.getItems().length);
                    throwable.printStackTrace();
                })
                .onSkipped(batch -> {
                    skipped.addAndGet(batch.getItems().length);
                });

        QueryBatcher batcher = dmManager.newQueryBatcher(new StructuredQueryBuilder().collection("XmlTransform"))
                .onUrisReady(listener).withBatchSize(7);
        batcher.setQueryFailureListeners(
                  new HostAvailabilityListener(dmManager)
                    .withSuspendTimeForHostUnavailable(Duration.ofSeconds(15))
                    .withMinHosts(2)
                );  
        JobTicket ticket = dmManager.startJob( batcher );
        batcher.awaitCompletion();
        dmManager.stopJob(ticket);
        System.out.println("Success "+ success.intValue());
        System.out.println("Failure "+failure.intValue());
        String uris[] = new String[2000];
        for(int i =0;i<2000;i++){
            uris[i] = "/local/string-"+ i;
        }
        int count=0;
        DocumentPage page = dbClient.newDocumentManager().read(uris);
        DOMHandle dh = new DOMHandle();
        while(page.hasNext()){
            DocumentRecord rec = page.next();
            rec.getContent(dh);
            assertTrue("Element has attribute? :", dh.get().getElementsByTagName("foo").item(0).hasAttributes());
            assertEquals("Attribute value should be English", "English", dh.get().getElementsByTagName("foo").item(0).getAttributes().item(0).getNodeValue());
            count++;
        }

        assertEquals("document count", 2000, count);
        assertEquals("success count", 2000, success.intValue());
        assertEquals("skipped count", 0, skipped.intValue());
    }
Updated 31/07/2017 23:12 5 Comments

When message arrives when inside a call the call is hung up and the remote party remains in the call

RestComm/restcomm-android-sdk

Let's first describe all the issues:

1. When a text message arrives while inside a call, the call is hung up and the remote party remains in the call.
2. When the back button is pressed while inside an incoming call, the call is hung up locally, but the remote party remains in the call (the same flow for outgoing calls works correctly).

Analysis: The main issue here, apart from the inconsistency of signaling, is that the only way to leave the Activity of the active call without shutting it down is to navigate outside the App. For more background please refer to #380

Updated 27/07/2017 16:52 1 Comments

Need better 404 page

uboslinux/ubos-admin

Two use cases:

* The user moves an app from example.com/foo to example.com/bar, and a client accesses example.com/foo.
* The user updates the device; during the update, a client accesses an app that's currently unavailable.

In both cases, nice page(s) should be shown. This probably should use some kind of "match" in the Apache config.
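A minimal sketch of the Apache side, assuming hypothetical page paths (ErrorDocument is the standard directive; the actual UBOS layout may differ):

```apache
# Hypothetical paths; UBOS would generate these into its Apache config.
# Friendly page for paths that no longer exist (app was moved or undeployed):
ErrorDocument 404 /_errors/not-found.html

# Friendly page while an app is unavailable during an update:
ErrorDocument 503 /_errors/updating.html
```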

Updated 12/06/2017 04:48 2 Comments

Refactor iOS SDK part I

RestComm/restcomm-ios-sdk

Let's document the steps for the first iteration of refactoring (let's add as we think better about that):

- [ ] Allow the stack to be restarted when the user changes from TLS to non-TLS for signaling. This will allow us to properly address https://github.com/RestComm/restcomm-ios-sdk/issues/476
- [ ] Don't forget, after refactoring is done, to re-enable the functionality that allows modifying the TLS preference

Updated 26/06/2017 18:21 1 Comments

Assemble model descriptors from graph

marklogic/entity-services

As a data steward, I want to be able to organize my entities into descriptor docs as I see fit, for example, to group related entities into their own descriptor documents or separate entities into one descriptor per entity or to have just one descriptor doc for all entities. Each grouping should distill into the same entity type graph. Today, the downstream artifact generators assume one in-memory map structure that represents the entire type model. This is a direct mapping from a single descriptor doc that represents all entities. It should be possible to assemble the in-memory descriptors needed from code gen regardless of how the input entity definitions are grouped.

One implementation idea is to provide a function that would operate on the triples to assemble a map/object. Someone extending the model would also use this same technique to build custom in-memory representations of their extensions.
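One way to picture such a function (a generic sketch, not the Entity Services API): fold (subject, predicate, object) triples into a single in-memory map, independent of which descriptor document each entity originally came from. The entity names below are illustrative only.

```python
from collections import defaultdict

def assemble_descriptor(triples):
    """Fold (subject, predicate, object) triples into a nested map,
    one entry per entity type, regardless of how the input entity
    definitions were grouped into descriptor documents."""
    model = defaultdict(lambda: defaultdict(list))
    for subject, predicate, obj in triples:
        model[subject][predicate].append(obj)
    # Freeze into plain dicts for downstream code generators.
    return {s: dict(preds) for s, preds in model.items()}

# Hypothetical triples drawn from two separate descriptor docs:
triples = [
    ("Customer", "hasProperty", "name"),
    ("Customer", "hasProperty", "email"),
    ("Order", "relatesTo", "Customer"),
]
descriptor = assemble_descriptor(triples)
```

Someone extending the model would apply the same fold to their extension triples to build a custom in-memory representation.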

Some things to consider:

- Versioning today is declared at the descriptor. Thus all entities declared in a single descriptor will have the same semver. I'd expect stewards to want to version entities at different rates.
- (Now I can't remember the other thing)

Updated 24/07/2017 14:33 4 Comments

Whitelist issues with proxy mode

benbaptist/minecraft-wrapper

The vanilla server whitelists based on online UUID’s. Proxy mode requires the server to be in offline mode. The result is that a player who is whitelisted on the vanilla server cannot connect via proxy mode.

Possible code solutions:

1. Have wrapper handle whitelisting
2. Have wrapper modify the whitelist.json file with the offline UUID and allow the server to handle whitelists

Solution 1 would be preferred, since it ensures a name change cannot leave a player "unwhitelisted".
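Solution 2 would need the offline-mode UUID, which the vanilla server derives as an MD5-based (version 3) UUID of the string "OfflinePlayer:&lt;name&gt;". A sketch of that derivation (not wrapper's actual code):

```python
import hashlib
import uuid

def offline_uuid(player_name):
    """Derive the offline-mode UUID the vanilla server uses: an
    MD5-based (version 3) UUID of 'OfflinePlayer:<name>', matching
    Java's UUID.nameUUIDFromBytes()."""
    raw = ("OfflinePlayer:" + player_name).encode("utf-8")
    digest = bytearray(hashlib.md5(raw).digest())
    digest[6] = (digest[6] & 0x0F) | 0x30  # set version to 3
    digest[8] = (digest[8] & 0x3F) | 0x80  # set RFC 4122 variant
    return uuid.UUID(bytes=bytes(digest))
```

Note the result is deterministic per name, which is exactly why solution 2 is fragile: a renamed player gets a different offline UUID.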

reported by @pingpong1109

Updated 09/07/2017 19:32 1 Comments

Referentially consistent export

marklogic/entity-services

Use the entity model to get a consistent subset of entities and their related entities. For example: get me the 50 most recent customers and their related sales orders, products, etc. Handle cycles and, perhaps, truncate at some depth to avoid always getting the whole database. Export all reference data each time.
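The cycle handling and depth truncation could look roughly like the following breadth-first walk; the `related` accessor and the entity ids are hypothetical:

```python
from collections import deque

def consistent_subset(seeds, related, max_depth=3):
    """Collect a seed set of entities plus everything they reference,
    following relationships breadth-first. The visited set handles
    cycles; max_depth truncates so we never pull the whole database.

    `related` maps an entity id to the ids it references (hypothetical)."""
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        entity, depth = frontier.popleft()
        if depth >= max_depth:
            continue  # truncate: don't expand beyond max_depth
        for ref in related(entity):
            if ref not in seen:  # visited check breaks cycles
                seen.add(ref)
                frontier.append((ref, depth + 1))
    return seen
```

Reference data would be exported unconditionally on top of whatever this traversal returns.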

Updated 20/06/2017 17:09 6 Comments

Secure SIP setting update will fail if RCDevice has been previously initialized without certificates

RestComm/restcomm-ios-sdk

That happens because right now we need a Sofia restart to use updated certificates and at the time we update SIP settings via [RCDevice updateParams] this doesn’t happen.

I can see the following ways to go here:

- Restart Sofia if there are secure settings in the params and Sofia wasn't initialized with security (not that good)
- See if we can update Sofia's configuration via nua_set_params() to avoid restarting
- Always require certificates to be passed by the App, whether or not it actually uses secure signaling

Updated 26/06/2017 17:15 1 Comments

Properly integrate the iOS reachability inside Sofia SIP

RestComm/restcomm-ios-sdk

This is part of the umbrella issue: https://github.com/RestComm/restcomm-ios-sdk/issues/414

That way the networking change is handled internally and as a result affects us much less. We could expose events from the API that will notify the App that reachability has changed. Some pointers:

- With the current Sofia SIP codebase this is possible for Mac but not iOS. The main difference is that Mac provides the SCDynamicStore API for that, which doesn't work in iOS (check https://github.com/RestComm/restcomm-ios-sdk/blob/master/dependencies/sources/sofia-sip/libsofia-sip-ua/su/su_os_nw.c#L228). iOS provides the Reachability API that we currently invoke from RCDevice. So the idea would be to integrate the Apple Reachability API in su_os_nw.c above.
- To that end I think a great pointer is a chromium fix that seems to do exactly the same thing: https://chromiumcodereview.appspot.com/10829453/ (bug title is 'Add iOS support to the NetworkChangeNotifier')

Things to keep in mind (that have bitten us in the past):

- It's wise not to allow reachability changes to be handled while the App is in the background (and on its way to suspended). I've seen that it might lead to unpredictable behavior, as the handler code might be interrupted in the middle by iOS. What I have done in BETA4 is to ignore such events if the App is in the background, by guarding all state changes with if ([UIApplication sharedApplication].applicationState != UIApplicationStateBackground) {. We should probably retain this logic throughout any refactoring.

Updated 23/06/2017 13:52 2 Comments

Need to introduce separate API for error reporting of SIP MESSAGEs

RestComm/restcomm-ios-sdk

This issue is pretty old and will probably be obsolete after we implement #577. Let's keep it around for verification purposes:

Right now we only have didStopListeningForIncomingConnections at the Device level, which assumes that the device also stopped listening for connections, which isn't the case. Notice that this API (i.e. didStopListeningForIncomingConnections) is how Twilio does error reporting for their TCDevice object (check https://www.twilio.com/docs/api/client/ios/TCDeviceDelegate#deviceDidStartListeningForIncomingConnections for more info), but if I recall correctly the Twilio iOS SDK doesn't support instant messages in that API like we do, hence they never needed to address this need.

That said, I think the best way to go about it is to introduce this API: - (void)device:(RCDevice *)device didReceiveTransientError:(NSError *)error;

It will be triggered for RCDevice errors that don't cause the RCDevice to go offline. For now this is only needed for messages.

I know the name is a bit weird, but I had to choose between:

- Making it too generic, like didReceiveError, which would make someone wonder why there are 2 separate APIs for RCDevice errors.
- Making it message-only, like didReceiveMessageError, which might be a bad choice since in the future we might find other errors that don't trigger the Device to go offline and are not text message related.
- Something that shows the error isn't 'hard' enough to trigger the Device to go offline, like didReceiveTransientError.

Any other ideas are welcome!

Updated 26/06/2017 17:30

Sporadically media fails in a call to Restcomm

RestComm/restcomm-ios-sdk

It happens during call setup, and no media is heard afterwards (the scenario is a call to +1235).

Something to do with AVAudioSession it seems:

2016-02-17 17:33:49.665 restcomm-olympus[897:272435] connectionDidConnect
2016-02-17 17:33:54.957 restcomm-olympus[897:272597] 17:33:54.955 ERROR:     [Thread 0x0x17fe8530] 79: AudioSessionSetProperty posting message to kill mediaserverd (37)
2016-02-17 17:33:55.749 restcomm-olympus[897:272597] 17:33:55.749 ERROR:     [Thread 0x0x17fe8530] AVAudioSessionUtilities.h:111: GetProperty: AudioSessionGetProperty ('aiav') failed with error: '!siz'
2016-02-17 17:33:55.750 restcomm-olympus[897:272597] 17:33:55.750 ERROR:     [Thread 0x0x17fe8530] AVAudioSessionUtilities.h:124: GetProperty_DefaultToZero: AudioSessionGetProperty ('aiav') failed with error: '!siz'
2016-02-17 17:33:57.645 restcomm-olympus[897:272435] onVideoError: NSConcreteNotification 0x17e1c970 {name = AVCaptureSessionRuntimeErrorNotification; object = <AVCaptureSession: 0x17ed2bb0 [AVCaptureSessionPreset640x480]>
    <AVCaptureDeviceInput: 0x1c01e8a0 [Front Camera]> -> <AVCaptureVideoDataOutput: 0x17ed3820>; userInfo = {
    AVCaptureSessionErrorKey = "Error Domain=AVFoundationErrorDomain Code=-11819 \"Cannot Complete Action\" UserInfo=0x17ee73a0 {NSLocalizedRecoverySuggestion=Try again later., NSLocalizedDescription=Cannot Complete Action}";
}}

Full logs at:

https://gist.github.com/atsakiridis/8d18494479879d309b31

Updated 23/06/2017 13:51 2 Comments

When changing from registrar-less to non registrar-less sporadically registration fails

RestComm/restcomm-ios-sdk

Scenario:

- Launch iOS Olympus in registrar-less mode (with default values for settings)
- Navigate to Settings, set a valid user for cloud.restcomm.com, and also set the domain to cloud.restcomm.com
- Hit back to go to the main screen

At that point the registration sometimes fails and you have to go back to settings and back again for it to take effect properly. Here are the logs in the first failure:

Jan 19 13:56:19  restcomm-olympus[63625] <Notice>: (RCDevice.m:95) [RCDevice initWithParams]
Jan 19 13:56:19  restcomm-olympus[63625] <Notice>: (SipManager.mm:289) [SipManager initWithDelegate]
Jan 19 13:56:19  restcomm-olympus[63625] <Notice>: (SipManager.mm:364) [SipManager eventLoop]
Jan 19 13:56:19  restcomm-olympus[63625] <Notice>: (ssc_sip.mm:231) Creating SIP stack -binding to: sip:192.168.2.3:*;transport=tcp
Jan 19 13:56:19  restcomm-olympus[63625] <Notice>: (RCDevice.m:375) [RCDevice checkNetworkStatus] Reachability update: 1
Jan 19 13:56:19  restcomm-olympus[63625] <Notice>: (RCDevice.m:321) [RCDevice sipManagerDidInitializedSignalling]
Jan 19 13:57:14  restcomm-olympus[63625] <Notice>: (RCDevice.m:282) [RCDevice updateParams]
Jan 19 13:57:14  restcomm-olympus[63625] <Debug>: (ssc_sip.mm:1960) UA: un-REGISTER sip:ios-sdk@cloud.restcomm.com
Jan 19 13:57:14  restcomm-olympus[63625] <Debug>: (ssc_sip.mm:1880) REGISTER sip:ios-sdk@cloud.restcomm.com - registering address to network
Jan 19 13:57:16  restcomm-olympus[63625] <Debug>: (ssc_sip.mm:557) Unhandled event 'nua_i_outbound' (8): 101 NAT detected
Jan 19 13:57:16  restcomm-olympus[63625] <Debug>: (ssc_sip.mm:1897) REGISTER: 407 Proxy Authentication required
Jan 19 13:57:16  restcomm-olympus[63625] <Debug>: (ssc_sip.mm:635) Authenticating 'REGISTER' with 'Digest:"cloud.restcomm.com":ios-sdk:1234'
Jan 19 13:57:17  restcomm-olympus[63625] <Debug>: (ssc_sip.mm:1897) REGISTER: 407 Proxy Authentication required
Jan 19 13:57:17  restcomm-olympus[63625] <Debug>: (ssc_sip.mm:635) Authenticating 'REGISTER' with 'Digest:"cloud.restcomm.com":ios-sdk:1234'
Jan 19 13:57:17  restcomm-olympus[63625] <Debug>: (ssc_sip.mm:1897) REGISTER: 904 Operation has no matching challenge 
Jan 19 13:57:17  restcomm-olympus[63625] <Error>: (ssc_sip.mm:1906) REGISTER failed: 904 Operation has no matching challenge 
Jan 19 13:57:17  restcomm-olympus[63625] <Notice>: (ssc_sip.mm:1909) Got failed REGISTER response but silencing it since another registration has been successfully handled afterwards
Jan 19 13:57:17  restcomm-olympus[63625] <Notice>: (RCDevice.m:329) [RCDevice didSignallingError: {
      "NSLocalizedDescription" : "Operation has no matching challenge "
    }]
Jan 19 13:57:17  restcomm-olympus[63625] <Debug>: (ssc_sip.mm:1979) un-REGISTER: 407 Proxy Authentication required
Jan 19 13:57:17  restcomm-olympus[63625] <Debug>: (ssc_sip.mm:635) Authenticating 'REGISTER' with 'Digest:"cloud.restcomm.com":ios-sdk:1234'
Jan 19 13:57:17  restcomm-olympus[63625] <Debug>: (ssc_sip.mm:1979) un-REGISTER: 407 Proxy Authentication required
Jan 19 13:57:17  restcomm-olympus[63625] <Debug>: (ssc_sip.mm:635) Authenticating 'REGISTER' with 'Digest:"cloud.restcomm.com":ios-sdk:1234'
Jan 19 13:57:17  restcomm-olympus[63625] <Debug>: (ssc_sip.mm:1979) un-REGISTER: 904 Operation has no matching challenge 

This is probably related to #331, which will refactor error handling.

Updated 23/06/2017 13:51 1 Comments

Error: unable to open leveldb datastore

ipfs/go-ipfs

In the morning my computer “froze” (became unresponsive) and I had to reset it; the reason behind it was, I suppose, not related to go-ipfs.

After the reboot I tried to run IPFS and got the following message:

Error: failed to create lock file (lock path here) open (lock path here) : The file exists.

This is natural after an unexpected reboot.

I have deleted the lock file using the given path and then attempted to restart IPFS. I’ve got the following message:

Error: unable to open leveldb datastore

There’s literally nothing I could do about it, and thus I eventually resorted to ipfs init -f and lost my pinned media, an associated IPNS name, etc.

Consider choosing some other key+value storage that survives an unexpected reboot better than LevelDB.

For example, SQLite implements serializable transactions that are atomic, consistent, isolated, and durable, even if the transaction is interrupted by a program crash, an operating system crash, or a power failure to the computer. SQLite also has an extension useful for managing JSON content stored in an SQLite database.
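To illustrate the suggestion (sketched in Python for brevity; go-ipfs itself is Go, and this is not a proposal for its actual code), a minimal key-value store where each write is a committed SQLite transaction, so a crash leaves a consistent database rather than an unopenable one:

```python
import sqlite3

class KVStore:
    """Minimal key-value store over SQLite. Each put() commits a
    transaction, so an unexpected reboot or power loss leaves the
    database consistent instead of corrupted."""

    def __init__(self, path):
        self.db = sqlite3.connect(path)
        # Write-ahead logging: readers and crash recovery stay cheap.
        self.db.execute("PRAGMA journal_mode=WAL")
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v BLOB)")
        self.db.commit()

    def put(self, key, value):
        with self.db:  # commits on success, rolls back on error
            self.db.execute(
                "INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)",
                (key, value))

    def get(self, key):
        row = self.db.execute(
            "SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        return row[0] if row else None
```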

Updated 06/08/2017 23:30 16 Comments

Missing "read" dependency

marklogic/node-client-api

Examples/setup.js requires the npm “read” package, but this isn’t included in package.json.

$ node examples/setup.js 
module.js:338
    throw err;
          ^
Error: Cannot find module 'read'
    at Function.Module._resolveFilename (module.js:336:15)
    at Function.Module._load (module.js:278:25)
    at Module.require (module.js:365:17)
    at require (module.js:384:17)
    at Object.<anonymous> (/Users/dcassel/tmp/myproject/node_modules/marklogic/etc/test-setup-prompt.js:16:14)
    at Module._compile (module.js:460:26)
    at Object.Module._extensions..js (module.js:478:10)
    at Module.load (module.js:355:32)
    at Function.Module._load (module.js:310:12)
    at Module.require (module.js:365:17)
    at require (module.js:384:17)
    at Object.<anonymous> (/Users/dcassel/tmp/myproject/node_modules/marklogic/examples/setup.js:21:22)
    at Module._compile (module.js:460:26)
    at Object.Module._extensions..js (module.js:478:10)
    at Module.load (module.js:355:32)
    at Function.Module._load (module.js:310:12)
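The fix would presumably be a one-line addition to package.json; the version range below is an assumption:

```json
{
  "dependencies": {
    "read": "^1.0.0"
  }
}
```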
Updated 24/07/2017 20:24 5 Comments

an error without an error callback emits a spurious Bluebird error

marklogic/node-client-api

The spurious error is:

node_modules/bluebird/js/main/async.js:36 fn = function () { throw arg; };

The root error is still reported. In addition, the application should provide an error callback for better error reporting.

A quick search shows that the request module found a configuration-based solution:

https://github.com/request/request-promise/commit/bc6080e501a406eb03ec779dd50458cde1bce7aa

But more investigation is needed.

Updated 24/07/2017 20:25

unable to work with forge 1.8 server in proxy mode

benbaptist/minecraft-wrapper

I've tried to use the latest stable Forge server for 1.8 (see the log below for the version), but I get an error on the client saying "This server requires FML/Forge to be installed...". BTW, everything works fine if I turn off proxy mode.

[23:40:15] [main/INFO] [FML]: Forge Mod Loader version 8.0.37.1334 for Minecraft 1.8 loading
[23:40:15] [main/INFO] [FML]: Java is Java HotSpot™ 64-Bit Server VM, version 1.8.0_40, running on Mac OS X:x86_64:10.10.3, installed at /Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home/jre
[23:40:16] [main/WARN] [FML]: The coremod codechicken.core.launch.CodeChickenCorePlugin does not have a MCVersion annotation, it may cause issues with this version of Minecraft
[23:40:16] [main/INFO] [DepLoader]: Extracting file ./mods/CodeChickenCore-1.8-1.0.5.34-universal.jar!lib/CodeChickenLib-1.8-1.1.2.115-universal.jar
[23:40:16] [main/INFO] [DepLoader]: Extraction complete
[23:40:16] [main/WARN] [FML]: The coremod codechicken.lib.asm.CCLCorePlugin does not have a MCVersion annotation, it may cause issues with this version of Minecraft
[23:40:16] [main/WARN] [FML]: The coremod codechicken.nei.asm.NEICorePlugin does not have a MCVersion annotation, it may cause issues with this version of Minecraft
[23:40:16] [main/INFO] [LaunchWrapper]: Loading tweak class name net.minecraftforge.fml.common.launcher.FMLInjectionAndSortingTweaker
[23:40:16] [main/INFO] [LaunchWrapper]: Loading tweak class name net.minecraftforge.fml.common.launcher.FMLDeobfTweaker
[23:40:16] [main/INFO] [LaunchWrapper]: Calling tweak class net.minecraftforge.fml.common.launcher.FMLInjectionAndSortingTweaker
[23:40:16] [main/INFO] [LaunchWrapper]: Calling tweak class net.minecraftforge.fml.common.launcher.FMLInjectionAndSortingTweaker
[23:40:16] [main/INFO] [LaunchWrapper]: Calling tweak class net.minecraftforge.fml.relauncher.CoreModManager$FMLPluginWrapper
[23:40:17] [main/INFO] [FML]: Found valid fingerprint for Minecraft Forge. Certificate fingerprint e3c3d50c7c986df74c645c0ac54639741c90a557
[23:40:18] [main/INFO] [LaunchWrapper]: Calling tweak class net.minecraftforge.fml.relauncher.CoreModManager$FMLPluginWrapper
[23:40:18] [main/INFO] [LaunchWrapper]: Calling tweak class net.minecraftforge.fml.relauncher.CoreModManager$FMLPluginWrapper
[23:40:18] [main/INFO] [LaunchWrapper]: Calling tweak class net.minecraftforge.fml.relauncher.CoreModManager$FMLPluginWrapper
[23:40:18] [main/INFO] [LaunchWrapper]: Calling tweak class net.minecraftforge.fml.relauncher.CoreModManager$FMLPluginWrapper
[23:40:18] [main/INFO] [LaunchWrapper]: Calling tweak class net.minecraftforge.fml.relauncher.CoreModManager$FMLPluginWrapper
[23:40:18] [main/INFO] [LaunchWrapper]: Calling tweak class net.minecraftforge.fml.common.launcher.FMLDeobfTweaker
[23:40:18] [main/INFO] [LaunchWrapper]: Loading tweak class name net.minecraftforge.fml.common.launcher.TerminalTweaker
[23:40:18] [main/INFO] [LaunchWrapper]: Calling tweak class net.minecraftforge.fml.common.launcher.TerminalTweaker
[23:40:18] [main/INFO] [LaunchWrapper]: Launching wrapped minecraft {net.minecraft.server.MinecraftServer}
[23:40:21] [Server thread/INFO]: Starting minecraft server version 1.8
[23:40:21] [Server thread/INFO] [MinecraftForge]: Attempting early MinecraftForge initialization
[23:40:21] [Server thread/INFO] [FML]: MinecraftForge v11.14.1.1334 Initialized
[23:40:21] [Server thread/INFO] [FML]: Replaced 204 ore recipies

Updated 09/07/2017 19:35 7 Comments

Add import hook for PIL.ImageTk.

pyinstaller/pyinstaller

Original date: 2014/03/27 Original reporter: matttodaro AND gmail DOT COOM

I'm getting an error when running my script in the console. I've traced it to the line "from PIL import ImageTk". Not sure how to fix this.

Traceback (most recent call last):
  File "<string>", line 26, in <module>
  File "/Library/Python/2.7/site-packages/PyInstaller/loader/pyi_importers.py", line 270, in load_module
    exec(bytecode, module.__dict__)
  File "/Users/matttodaro/Desktop/Board_Pitch/build/board_pitch_v3_copy copy/out00-PYZ.pyz/PIL.PngImagePlugin", line 40, in <module>
  File "/Library/Python/2.7/site-packages/PyInstaller/loader/pyi_importers.py", line 270, in load_module
    exec(bytecode, module.__dict__)
  File "/Users/matttodaro/Desktop/Board_Pitch/build/board_pitch_v3_copy copy/out00-PYZ.pyz/PIL.Image", line 44, in <module>
  File "/Library/Python/2.7/site-packages/PyInstaller/loader/pyi_importers.py", line 270, in load_module
    exec(bytecode, module.__dict__)
  File "/Users/matttodaro/Desktop/Board_Pitch/build/board_pitch_v3_copy copy/out00-PYZ.pyz/FixTk", line 74, in <module>
OSError: [Errno 20] Not a directory: '/var/folders/m2/xptcb98s4lj060skxt5j2x880000gn/T/_MEIyGdK4R/tcl'
Updated 10/08/2017 16:08 9 Comments

PyInstaller fails to load Werkzeug modules

pyinstaller/pyinstaller

Original date: 2012/09/04 Original reporter: torsten DOT landschoff AND dynamore DOT de

I am trying to deploy an application that uses Werkzeug but I am unable to get PyInstaller to include the werkzeug.* modules. Look at this example:

#!py
from werkzeug.exceptions import InternalServerError
print "Hello World!"

Obviously this works fine when called directly from Python. Using pyinstaller on test.py without any options gives an executable which acts like this:

torsten@sharokan:~/pyinstaller-test$ ~/workspace/pyinstaller-2.0/pyinstaller.py test.py
torsten@sharokan:~/pyinstaller-test$ ./dist/test/test 
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/torsten/workspace/pyinstaller-2.0/PyInstaller/loader/iu.py", line 409, in importHook
    raise ImportError("No module named %s" % fqname)
ImportError: No module named werkzeug.exceptions

I created a hook to make PyInstaller include the werkzeug submodules and placed it into the directory hookspath. I enabled that hook by modifying the generated test.spec to include

#!py
a = Analysis(['test.py'],
             pathex=['/home/torsten/pyinstaller-test'],
             hiddenimports=[],
             hookspath="hookspath")

But I noticed that this does not make a difference for the build result: The file logdict2.7.3.final.0-2.log is unchanged. In fact, the original version also contains the werkzeug.exceptions module (verified with ArchiveViewer.py).

Why this module can not be loaded is beyond me. I enabled the debug output of iu.py and will attach that file. Perhaps somebody with more insight into PyInstaller internals can go and fix the loader.

The only workaround I found was to replace the __init__.py in the werkzeug package to remove the lazy-load feature, by commenting out the last lines so that new_module is not written into sys.modules["werkzeug"].
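For what it's worth, in current PyInstaller versions a hook file along these lines handles exactly this lazy-loading pattern by collecting all werkzeug submodules (collect_submodules is the helper in modern releases, which postdate this 2012 report):

```python
# hook-werkzeug.py -- placed in a directory passed via hookspath
from PyInstaller.utils.hooks import collect_submodules

# Force all lazily loaded werkzeug submodules into the bundle
# as hidden imports, since static analysis cannot see them.
hiddenimports = collect_submodules('werkzeug')
```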

Updated 30/05/2017 06:28 6 Comments

During a MERGE build, a manifest file is created for the spec file, and is not used

pyinstaller/pyinstaller

Original date: 2012/05/31 Original reporter: ddwiggins AND advpubtech DOT COOM

(I’m classifying this as an enhancement, because it doesn’t affect the final result of building.)

I’m using MERGE to built a suite of applications. The spec file is called FalconApps.spec; none of the executables have the name FalconApps.

During the build, the manifest file FalconApps.exe.manifest gets created in the build folder, and rewritten for each executable. On subsequent builds, this causes the .pkg file to be rebuilt because the manifest file has changed, and that in turn causes the EXE.toc file to be rebuilt. To prevent this, I’ve created the following patch in build.py:

@@ -967,8 +967,9 @@ class EXE(Target):
                 self.toc.extend(arg)
         if is_win:
             filename = os.path.join(BUILDPATH, specnm + ".exe.manifest")
-            self.manifest = winmanifest.create_manifest(filename, self.manifest,
-                self.console)
+            if not os.path.exists(filename):
+                self.manifest = winmanifest.create_manifest(filename, self.manifest,
+                                                            self.console)
             self.toc.append((os.path.basename(self.name) + ".manifest", filename,
                 'BINARY'))
         self.pkg = PKG(self.toc, cdict=kws.get('cdict', None),

I’m not necessarily proposing this as the best solution, although it does work for me.

There’s another question here, though: should this file be created at all? Each executable gets its own manifest file, placed next to the executable, and as far as I can tell, nothing is done with FalconApps.exe.manifest.

Updated 10/08/2017 16:09
