Usability - Productivity - Business - The web - Singapore & Twins


When running Java applications on a *nix-style operating system that need to load native libraries, you need to set the LD_LIBRARY_PATH environment variable (or something similar). That's not an issue on Linux.

macOS: I won't let you, it's for your own good

On macOS, System Integrity Protection (SIP) prevents these variables from being set in your shell (bash, zsh). It works inside Eclipse, when you define environment parameters, but not in any shell script. Unfortunately, Maven's command line mvn is a shell script.

The Notes challenge

Since the Notes client is a cross-platform product, the libraries aren't located where a macOS program would look for them:

  • The application directory. That's where the Java runtime is at home, not the notes executable
  • In a library location, looking for notes instead of libnotes.dylib:
  • /Users/[YOURNAME]/Library/Frameworks/notes.framework/
  • /Library/Frameworks/notes.framework/
  • /System/Library/Frameworks/notes.framework/

You could try to symlink the first library: ln -s /Applications/HCL\ Notes.app/Contents/MacOS/libnotes.dylib ~/Library/Frameworks/notes.framework/notes (after creating the required directories), only to run into the next challenge.

Read more

Posted by on 12 January 2022 | Comments (0) | categories: Domino Java

Async Java with vert.x

I wrote about more modern Java syntax and streams before.
There is more to it. Non-blocking I/O and event loops allow for
better performance. It's not a magic bullet, and some readjustment is required.

Adjusting methods, exceptions and return values

Initially it might look daunting, but the adjustments are not too big. Let's look at some examples. A classic Java method looks like this:

String someResult() throws DidnWorkException {
    // Working code goes here
    if (someCondition) {
        throw new DidnWorkException();
    }
    return "It worked";
}

Its asynchronous counter-part looks like this:

Future<String> someResult() {
    return Future.future(promise -> {
        // Working code goes here
        if (someCondition) {
            promise.fail("It didn't work"); // Could use a Throwable too
        } else {
            promise.complete("It worked");
        }
    });
}

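As a standard-library analogy (not Vert.x itself), the same promise pattern can be sketched with the JDK's CompletableFuture; the class name and the condition flag are illustrative:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {
    // "someCondition" from the example above becomes a parameter here
    static CompletableFuture<String> someResult(boolean someCondition) {
        CompletableFuture<String> promise = new CompletableFuture<>();
        if (someCondition) {
            promise.completeExceptionally(new IllegalStateException("It didn't work"));
        } else {
            promise.complete("It worked");
        }
        return promise;
    }

    public static void main(String[] args) {
        // The caller reacts to completion instead of blocking
        someResult(false).thenAccept(System.out::println); // prints "It worked"
    }
}
```

The caller never blocks; it registers what should happen once the result arrives, which is the same mental shift Vert.x asks for.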
Read more

Posted by on 06 January 2022 | Comments (0) | categories: Domino Singapore

Deploying your frontend as webJar

In an API-driven world, back-end and front-end are clearly separated and might live on different servers altogether. However, for smaller applications, static files get served from the same place your back-end lives.

So many choices

The web server that proxies your application server could have a rule for static files, your firewall could do that, you could use a static directory on your application server, or you could pack your front-end into a JAR, which is the story here. I'm not discussing the merits of the different approaches, that's a story for another time, but describing the workflow and tools for the JAR approach.

vert.x static routes

In Vert.x a static route can be declared with a few lines of code:

Router router = Router.router(vertx);
// The "/ui/*" route path is an example
router.route("/ui/*").handler(StaticHandler.create("uitarget"));

Vert.x will then look for the folder uitarget in its current working directory or on the classpath. So you will need to put your JAR on the classpath.

The swagger-ui example

There are lots of prepackaged UI JARs available that are easy to integrate into Vert.x, for example the Swagger UI. Define a dependency in your pom.xml and a one-liner to access the code:

Router router = Router.router(vertx);
// Route path is an example; webjars put their content below META-INF/resources/webjars
router.route("/swagger/*").handler(StaticHandler.create("META-INF/resources/webjars"));
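The dependency itself is a plain webjar; a sketch of what the pom.xml entry could look like (the version is an example, pick a current one):

```xml
<dependency>
    <groupId>org.webjars</groupId>
    <artifactId>swagger-ui</artifactId>
    <!-- version is an example -->
    <version>4.1.3</version>
</dependency>
```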

Packing your own front-end

Most modern front-end builds owe their deployable form to an npm build command. If you are not sure, check the documentation for React, Angular, Lightning, Vue, Ionic or whatever framework you fancy.

There are two plugins for maven that can process front-end work:

  • The Frontend Maven Plugin: Specialized module that handles download of NodeJS and running your NodeJS based build tools. Great when you don't have NodeJS installed anyway
  • The Exec Maven Plugin: Generic plugin to run stuff. Doesn't download NodeJS for you. More work to setup (that's what I picked)

The steps that need to be performed, not by you but by your mvn package run:

  • run npm install
  • run npm build
  • move files into the target directory structure
  • build the Jar

All of this can be wrapped into your pom.xml. I usually add the front-end as a module to the whole project, so a build is always complete
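The npm steps above can be wired into the build; a sketch using the Exec Maven Plugin (ids, phases and version are assumptions, moving the files into the target structure is left to your build):

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>3.0.0</version>
  <executions>
    <execution>
      <id>npm-install</id>
      <phase>generate-resources</phase>
      <goals><goal>exec</goal></goals>
      <configuration>
        <executable>npm</executable>
        <arguments><argument>install</argument></arguments>
      </configuration>
    </execution>
    <execution>
      <id>npm-build</id>
      <phase>generate-resources</phase>
      <goals><goal>exec</goal></goals>
      <configuration>
        <executable>npm</executable>
        <arguments><argument>run</argument><argument>build</argument></arguments>
      </configuration>
    </execution>
  </executions>
</plugin>
```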

Read more

Posted by on 27 December 2021 | Comments (0) | categories: Java JavaScript vert.x

Refresh local git repositories

I keep all my software that is under version control below a few directories only. E.g. open-source projects I cloned to learn from live below ~/OpenSource. Keeping up with updates requires pulling them all.

Pulling the main branch

My little helper does:

  • change into each first level sub directory
  • check if it is under version control
  • capture the current branch
  • switch to main or master branch, depending on which one is there
  • capture the name of the tracked remote
  • fetch all remotes
  • pull the tracked remote
  • switch back to the branch it was in

The script does not check whether the current branch is dirty (which would prevent the checkout), nor does it push back changes. Enjoy

#!/bin/bash
# Pull all repos below the current working directory

do_the_sync() {
  for f in *; do
    if [ -d "$f" -a ! -h "$f" ]; then
      cd -- "$f"
      if [ -d ".git" ]; then
        curBranch=$(git branch --show-current)
        echo "Working on $f"
        # Switch to main or master, depending on which one is there
        if [ "$(git branch --list main)" ]; then
          mainBranch="main"
        else
          mainBranch="master"
        fi
        git checkout $mainBranch
        remoteBranch=$(git rev-parse --abbrev-ref ${mainBranch}@{upstream})
        IFS='/' read -r remoteSrv remoteRef <<< "$remoteBranch"
        echo "working on $mainBranch tracking $remoteSrv"
        git fetch --all
        git pull $remoteSrv
        git checkout "$curBranch"
      fi
      cd ..
    fi
  done
}

do_the_sync
echo "DONE!"

As usual YMMV

Posted by on 23 December 2021 | Comments (0) | categories: GitHub Software

Spotless code with a git hook

When developing software in a team, a source of constant annoyance is code formatting. Each IDE has slightly different ideas about it, not even getting into the tabs vs. spaces debate. Especially annoying in Java land is the import sort order

Automation to the rescue

I switch between editors frequently (if you need to know: Eclipse, Visual Studio Code, OxygenXML, IntelliJ, Sublime, Geany, nano or vi (ESC :wq)), so an editor-specific solution isn't an option.

Spotless to the rescue. It's a neat project using Maven or Gradle to format pretty (pun intended) much all code types I use. The documentation states:

Spotless can format <antlr | c | c# | c++ | css | flow | graphql | groovy | html | java | javascript | json | jsx | kotlin | less | license headers | markdown | objective-c | protobuf | python | scala | scss | sql | typeScript | vue | yaml | anything> using <gradle | maven | anything>.


I opted for the Eclipse-defined Java formatting, using almost the Google formatting rules, with the notable exception of not merging line breaks back.

There are 3 steps involved for the Maven setup:

  • Obtaining the formatting files, outlined here. Just make sure you are happy with the format first
  • Add the maven plugin (see below)
  • Add a git hook (see below)


This is what I added to my pom.xml. By default Spotless would only run check, so I added apply to enforce the formatting:


<!-- Markdown, JSON and gitignore -->
<format>
    <trimTrailingWhitespace />
    <endWithNewline />
</format>
<!-- ECLIPSE Java format -->
<java>
    <toggleOffOn />
    <removeUnusedImports />
    <eclipse>
        <!-- format file location is an example -->
        <file>${maven.multiModuleProjectDirectory}/spotless/eclipse-format.xml</file>
    </eclipse>
</java>

A few remarks:

  • I run apply rather than check
  • the directory variable ${maven.multiModuleProjectDirectory} is needed, so sub projects work
  • you want to extend the configuration to include JS/TS eventually


Create or edit your [projectroot]/.git/hooks/pre-commit file:

#!/bin/bash
# Run formatting on pre-commit
files=`git status --porcelain | cut -c 4-`
for f in $files; do
    fulllist+=(.*)$(basename $f)$'\n'
done
list=`echo "${fulllist}" | paste -s -d, /dev/stdin`
echo Working on $list
# Activate Java 11
export JAVA_HOME=`/usr/libexec/java_home -v 11.0`
/usr/local/bin/mvn spotless:apply -Dspotless.check.skip=false -DspotlessFiles=$list

  • You might not need the line activating Java
  • Swap apply for check when you just want to check

As usual YMMV

Posted by on 10 December 2021 | Comments (0) | categories: GitHub Java Software

Factory based dependency injection

No man is an island, and no code you write lives without dependencies (even your low-level assembly code depends on the processor's microcode). Testing (with) dependencies can be [insert expletive]

Dependency injection to the rescue

The general approach to making dependent code testable is dependency injection. Instead of calling out and creating an instance of the dependency, the dependency is handed over as a parameter. This could be in a constructor, a property setter or as a method parameter.

A key requirement for successful dependency injection: the injected object gets injected as an Interface rather than a concrete class. So do your homework and build your apps around interfaces.

An example to illustrate how not to do it, and how to change it:

public Optional<Customer> findCustomer(final String id) {
    // Some processing here, omitted for clarity

    // actual find
    final CustomerDbFind find = CustomerDb.getFinder();
    return Optional.ofNullable(find.customerById(id));
}


When you try to test this function, you depend on the static method of the CustomerDb, which is a pain to mock out. So one consideration could be to hand the CustomerDb over as a dependency. But this would violate "provide an interface, not a class". The conclusion, presuming CustomerDbFind is an interface, will be:

public Optional<Customer> findCustomer(final CustomerDbFind find, final String id) {
    // Some processing here, omitted for clarity

    // actual find
    return Optional.ofNullable(find.customerById(id));
}


This now allows constructing the dependency outside the method under test, by implementing the interface or using a mock library
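A minimal sketch of how that plays out in a test, assuming CustomerDbFind is a single-method interface (the stub, the record and all names here are hypothetical):

```java
import java.util.Optional;

// Assumed shapes of the interface and entity from the example above
interface CustomerDbFind {
    Customer customerById(String id);
}

record Customer(String id) {}

public class FindCustomerTest {
    static Optional<Customer> findCustomer(final CustomerDbFind find, final String id) {
        return Optional.ofNullable(find.customerById(id));
    }

    public static void main(String[] args) {
        // A stub implementing the interface, no database required
        CustomerDbFind stub = id -> "42".equals(id) ? new Customer("42") : null;
        System.out.println(findCustomer(stub, "42").isPresent()); // true
        System.out.println(findCustomer(stub, "7").isPresent());  // false
    }
}
```

Because the interface has a single abstract method, a lambda is enough to stand in for the whole database layer.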

Not so fast

Read more

Posted by on 09 December 2021 | Comments (0) | categories: Domino Java

Java Streams filters with side effects

Once you get used to stream programming and the pattern of create, select, manipulate and collect, your code will never look the same

Putting side effects to good (?) use

The pure teachings tell us that filters should select objects for processing and not have any side effects or do processing on their own. But ignoring the teachings can produce cleaner code (I probably will roast in debug hell for this). Let's look at an example:

final Collection<MyNotification> notifications = getNotifications();
final Iterator<MyNotification> iter = notifications.iterator();

while (iter.hasNext()) {
  MyNotification n = iter.next();

  if (n.priority == Priority.high) {
    // handle high priority notifications here
  } else if (n.groupNotification) {
    // handle group notifications here
  } else if (n.special && !(n.delay > 30)) {
    // handle special notifications here
  } else if (!n.special) {
    // handle regular notifications here
  } else {
    // handle whatever is left here
  }
}
This gets messy very fast, and all selection logic is confined to the if conditions in one function (which initially looks like a good idea). How about rewriting the code Stream-style? It will be more boilerplate, but with better segregation:

final Stream<MyNotification> notifications = getNotifications();

notifications
    .filter(this::highPriority)
    .filter(this::groupSend)
    // ... further filters follow the same pattern
    .forEach(this::processRemaining); // terminal step, name illustrative
The filter functions would look like this:

boolean highPriority(final MyNotification n) {
  if (n.priority == Priority.high) {
    // ... handle the high priority notification here (the side effect)
    return false; // No further processing required
  }
  return true; // Further processing required
}

boolean groupSend(final MyNotification n) {
  if (n.groupNotification) {
    // ... handle the group notification here (the side effect)
    return false; // No further processing required
  }
  return true; // Further processing required
}

You get the idea. With proper JavaDoc method headers, this code looks more maintainable.
We can push this a little further (as explored on Stackoverflow). Imagine the number of processing steps might vary and you don't want to update the code for every variation. You could do something like this:

final Stream<MyNotification> notifications = getNotifications();
final Stream<Predicate<MyNotification>> filters = getFilters();

notifications
    .filter(filters.reduce(f -> true, Predicate::and))
    .forEach(this::processRemaining); // terminal step, name illustrative
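To see the reduce trick in isolation, a runnable sketch with integers standing in for notifications (data and predicates are made up for illustration):

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PredicateReduceDemo {
    public static void main(String[] args) {
        // Two example predicates: greater than 2, and even
        Stream<Predicate<Integer>> filters = Stream.of(n -> n > 2, n -> n % 2 == 0);

        // Combine all predicates into one with reduce, as shown above
        Predicate<Integer> combined = filters.reduce(f -> true, Predicate::and);

        List<Integer> result = Stream.of(1, 2, 3, 4, 5, 6)
                .filter(combined)
                .collect(Collectors.toList());
        System.out.println(result); // [4, 6]
    }
}
```

An element only passes when every predicate in the stream returns true, so the filter list can grow without touching the pipeline code.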

As usual YMMV

Posted by on 22 October 2021 | Comments (1) | categories: Java

Streaming CouchDB data

I'm a confessing fan of CouchDB, stream programming and Nano, the official CouchDB NodeJS library. Nano supports returning data as a NodeJS Stream, so you can pipe it away. Most examples use file streams or process.stdout, while my goal was to process individual documents that are part of the stream

You can't walk into the same stream a second time

This old Buddhist saying holds true for NodeJS streams too. So any processing needs to happen in the chain of the stream. Let's start with the simple example of reading all documents from a couchDB:

const Nano = require("nano");
const nano = Nano(couchDBURL);
nano.listAsStream({ include_docs: true }).pipe(process.stdout);

This little snippet will read out all documents in your CouchDB. You need to supply the couchDBURL value, e.g. http://localhost:5984/test. On closer look, we see that the returned data arrives in continuous buffers that don't match JSON document boundaries, so processing one document after the other needs extra work.

A blog entry on the StrongLoop blog provides the first clue what to do. To process CouchDB stream data, we need both a Transform stream to chop incoming data into lines and a Writable stream for our results.

Our code, finally will look like this:

const Nano = require("nano");
const { Writable, Transform } = require("stream");

const streamOneDb = (couchDBURL, resultCallback) => {
  const nano = Nano(couchDBURL);
  nano
    .listAsStream({ include_docs: true })
    .on("error", (e) => console.error("error", e))
    .pipe(lineSplitter())
    .pipe(jsonMaker())
    .pipe(documentWriter(resultCallback));
};

Let's have a closer look at the new functions, the first two implement transform, the last one writable:

  • lineSplitter, as the name implies, cuts the buffer into separate lines for processing. As far as I could tell, CouchDB documents are always returned on a single line
  • jsonMaker, extracts the documents and discards the wrapper with document count that surrounds them
  • documentWriter, writing out the JSON object using a callback

Read more

Posted by on 16 October 2021 | Comments (1) | categories: CouchDB NodeJS

Reading Resources from JAR Files

One interesting challenge I encountered is the need, or ability, to make a Java application extensible by providing additional classes and configuration. Ideally, extending happens by dropping a properly crafted JAR file into a specified location and restarting the server. Along the way I learned about Java's classpath. This is what is shared here.

Act one: onto the classpath

When you start off with Java, you would expect that you can simply set the classpath variable, either using an environment variable or the java -cp parameter. Then you learn the hard way that java -jar and java -cp are mutually exclusive. After a short flirt with fatJAR, you end up with a directory structure like this:

Directory Structure

The secret ingredient to make this work is the manifest file inside the myApp.jar. It needs to be told to put all JAR files in libs onto the classpath too. In Maven, it looks like this:
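A typical maven-jar-plugin configuration for such a manifest (the main class and prefix values are examples):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <manifest>
        <!-- Puts every dependency JAR onto the manifest Class-Path -->
        <addClasspath>true</addClasspath>
        <classpathPrefix>libs/</classpathPrefix>
        <!-- hypothetical main class -->
        <mainClass>com.example.MainApp</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>
```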


Now that all JARs are successfully available on the classpath, we can try to retrieve them.
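Retrieving a resource then goes through the classloader, which searches every JAR on the classpath; a minimal sketch (the resource name is hypothetical):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ResourceReader {
    // Reads a text resource that may live inside any JAR on the classpath
    static String readResource(String name) throws IOException {
        try (InputStream in = ResourceReader.class.getClassLoader().getResourceAsStream(name)) {
            if (in == null) {
                throw new IOException("Resource not found: " + name);
            }
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) {
        try {
            // "config.properties" is a hypothetical resource name
            System.out.println(readResource("config.properties"));
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Note that getResourceAsStream returns null rather than throwing when the resource is missing, hence the explicit check.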

Read more

Posted by on 29 April 2021 | Comments (0) | categories: Java

Collecting Java Streams

I wrote about Java Streams before, sharing how they work for me and how, in conjunction with Java's functional interfaces, they enable us to write clean(er) code. I'd like to revisit my learnings, with some focus on the final step: what happens at the tail end of a stream operation

Four activities

There are four activities around Java Streams:

  • Create: There are numerous possibilities to create a stream. The most prevalent, I found, is Collection.stream() which returns a stream of anything in Java's collection framework: Collections, Lists, Sets etc.
    There are more possibilities provided by the Stream interface, the StreamBuilder interface, the StreamSupport utility class or Java NIO's Files (and probably some more)

  • Select: You can filter(), skip(), limit(), concat(), distinct() or sorted(). All those methods don't change individual stream members, but determine what elements will be processed further. Selection and manipulation can happen multiple times after each other

  • Manipulate: Replace each member of the stream with something else. That "something" can be the member itself with altered content. Methods that are fluent fit here nicely (like stream().map(customer -> customer.setStatus(newStatus)). We use map() and flatMap() for this step. While it is perfectly fine to use Lambda Expressions, consider moving the Lambda body into its own function, to improve reading and debugging

  • Collect: You can "collect" a stream once. After that it becomes inaccessible. The closest to classic loops here is the forEach() method, which allows you to operate on the members as you are used to from the Java Collection framework.
    Next are the convenience methods: count(), findAny(), findFirst(), toArray() and finally reduce() and collect().
    A typical way to use collect() is in conjunction with the Collectors static class, that provides the most commonly needed methods like toSet(), toList(), joining() or groupingBy(). Check the JavaDoc, there are 37 methods at your disposal.
    However, sometimes you might have different needs for your code; that's where custom collectors shine
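The four activities in one runnable miniature (sample data made up for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamPipelineDemo {
    public static void main(String[] args) {
        String result = List.of("alpha", "beta", "gamma", "delta").stream() // create
                .filter(s -> s.length() > 4)                                // select
                .map(String::toUpperCase)                                   // manipulate
                .collect(Collectors.joining(", "));                         // collect
        System.out.println(result); // ALPHA, GAMMA, DELTA
    }
}
```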

Read more

Posted by on 01 January 2021 | Comments (0) | categories: Java