wissel.net

Usability - Productivity - Business - The web - Singapore & Twins

Yes No Maybe Boolean deserialization with Jackson


The Robustness principle demands: be liberal in what you accept and conservative in what you send. I was facing this challenge when deserializing boolean values.

What is true

Glancing at data, we can easily spot what looks true-ish:

  • true
  • "True"
  • "Yes"
  • 1
  • "Si"
  • "Ja"
  • "Active"
  • "isActive"
  • "enabled"
  • "on"

The last three options aren't as clear cut; they depend on your use case. Using a simple class, let's try to deserialize JSON to an instance of a Java class using Jackson.

Java doesn't have native support for JSON, so we need to rely on libraries like Jackson or Google Gson (or any other listed on the JSON page). I chose Jackson, since it is the library underpinning the JsonObject of the Eclipse Vert.x framework I'm fond of. Over at Baeldung you will find more generic Jackson tutorials.

Let's look at a simple Java class (yes, Java 14 will make it less verbose) that sports fromJson() and toJson() as well as convenient overrides of equals() and toString():

package com.notessensei.blogsamples;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonInclude;
import io.vertx.core.json.JsonObject;

@JsonInclude(JsonInclude.Include.NON_NULL)
@JsonIgnoreProperties(ignoreUnknown = true)
public class Component {

  public static Component fromJson(final JsonObject source) {
    return source.mapTo(Component.class);
  }

  private String name;
  private boolean active = false;

  public Component() {
    // Default empty constructor required
  }

  public String getName() {
    return name;
  }

  public void setName(String name) {
    this.name = name;
  }

  public boolean getActive() {
    return active;
  }

  public void setActive(boolean isActive) {
    this.active = isActive;
  }

  public JsonObject toJson() {
    return JsonObject.mapFrom(this);
  }

  @Override
  public boolean equals(Object obj) {
    if (obj instanceof Component) {
      return this.toString().equals(obj.toString());
    }
    return super.equals(obj);
  }

  @Override
  public String toString() {
    return this.toJson().encode();
  }

}

Instantiating the class with the following JSON documents will work:

{
  "name": "Heater",
  "active": false
}
{
  "name": "Aircon"
}
{
  "name": "Fridge",
  "active": true,
  "PowerConsumption": {
    "unit": "kw",
    "measure": 7
  }
}

However, it will fail with these:

{
  "name": "System1",
  "active": "on"
}
{
  "name": "System2",
  "active": "yes"
}

You get the charming error `Cannot deserialize value of type boolean from String "yes": only "true"/"True"/"TRUE" or "false"/"False"/"FALSE" recognized`. Interestingly, numbers work.

On a side note: Jackson uses the presence of getters/setters to decide on (de)serialization and needs getActive and setActive, or isActive. When you name your variable isActive, Eclipse will generate setActive and isActive instead of getIsActive / isIsActive and setIsActive. So simply avoid the is... prefix for member variables.
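One way to accept the lenient values is to route the raw value through a small mapping function and wire it into Jackson with a custom deserializer registered via @JsonDeserialize on the active property. A minimal sketch of the mapping logic (class and method names are my own, the accepted values are the list from above):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class LenientBoolean {

  // Everything we are willing to accept as "true"
  private static final Set<String> TRUE_VALUES = new HashSet<>(Arrays.asList(
      "true", "yes", "si", "ja", "1", "active", "isactive", "enabled", "on"));

  // Maps an incoming scalar to a boolean; anything unrecognized becomes false
  public static boolean from(final Object value) {
    if (value instanceof Boolean) {
      return (Boolean) value;
    }
    if (value instanceof Number) {
      return ((Number) value).intValue() != 0;
    }
    return value != null
        && TRUE_VALUES.contains(value.toString().trim().toLowerCase(Locale.ROOT));
  }
}
```

Wrapped into a JsonDeserializer&lt;Boolean&gt; and hooked onto the setter, Jackson would then happily swallow the "on" and "yes" payloads above.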


Read more

Posted by on 07 May 2022 | Comments (0) | categories: Java

The Quest for a software documentation system


Software documentation is a thankless business and never complete. Picking the right system can make or break your documentation success

Contenders

We have a number of options commonly used, each with strengths and weaknesses.

  • DITA: The OASIS Open Darwin Information Typing Architecture. Extremely powerful, especially the concept of single-source definition: you define an item once and just reference it. XML based, suitable for complex documentation, but with a super steep learning curve that effectively prohibits community contributions
  • jekyll: Markdown-driven template engine, best known for driving GitHub Pages. With the Just-the-docs template it makes documentation creation a simple task in your repository's /doc directory. Site generation and hosting are built into GitHub, so no GitHub Action or other CI/CD pipeline is needed. Lacks good tooling for multi-version documentation
  • Maven sites: a good option when Java is your language. Tightly coupled to the build process, it produces full reporting and JavaDoc. Can be a pain to set up
  • Read the docs: Great destination for OpenSource documentation or your corporate documentation if the build server can reach it. Uses the MKDocs rendering engine
  • and many, did I say many more

I found the tools quite impressive and somehow wanting at the same time. So, taking a step back, it is worth looking at requirements


Read more

Posted by on 09 March 2022 | Comments (0) | categories: Software

Maven JNA macOS and LD_LIBRARY_PATH


When running Java applications that need to load native libraries on a *nix style operating system, you will need to set the LD_LIBRARY_PATH environment variable (or something similar). That's not an issue on Linux.

macOS: I won't let you, it's for your own good

On macOS, System Integrity Protection (SIP) prevents these variables from being set in your shell (bash, zsh). It works inside Eclipse, when you define environment parameters, but not in any shell script. Unfortunately, Maven's command line mvn is a shell script.

The Notes challenge

Since the Notes client is a cross-platform product, the libraries aren't in the locations where a macOS program would look:

  • The application directory. That's where the Java runtime is at home, not the notes executable
  • In a library location, here looking for notes instead of libnotes.dylib
  • /Users/[YOURNAME]/Library/Frameworks/notes.framework/
  • /Library/Frameworks/notes.framework/
  • /System/Library/Frameworks/notes.framework/

You could try to symlink the first library: ln -s /Applications/HCL\ Notes.app/Contents/MacOS/libnotes.dylib ~/Library/Frameworks/notes.framework/notes (after creating the required directories), only to run into the next challenge.


Read more

Posted by on 12 January 2022 | Comments (0) | categories: Domino Java

Async Java with vert.x


I wrote about more modern Java syntax and streams before.
There is more to it. Non-blocking I/O and event loops allow for
better performance. It's not a magic bullet; some readjustment is required

Adjusting methods, exceptions and return values

Initially it might look daunting, but the adjustments are not too big. Let's look at some examples. A classic Java method looks like this:

String someResult() throws DidntWorkException {
    // Working code goes here
    if (someCondition) {
        throw new DidntWorkException();
    }
    return "It worked";
}

Its asynchronous counter-part looks like this:

Future<String> someResult() {
    return Future.future(promise -> {
        // Working code goes here
        if (someCondition) {
            promise.fail("It didn't work"); // Could use a Throwable too
        } else {
            promise.complete("It worked");
        }
    });
}
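For readers without Vert.x on the classpath, the same shape can be sketched with the JDK's own CompletableFuture (the someCondition flag is a stand-in for the real check):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {

  static boolean someCondition = false; // stand-in for the real check

  static CompletableFuture<String> someResult() {
    final CompletableFuture<String> promise = new CompletableFuture<>();
    if (someCondition) {
      promise.completeExceptionally(new IllegalStateException("It didn't work"));
    } else {
      promise.complete("It worked");
    }
    return promise;
  }

  public static void main(String[] args) {
    // The caller chains handlers instead of catching exceptions
    someResult()
        .thenAccept(System.out::println)
        .exceptionally(t -> {
          System.err.println(t.getMessage());
          return null;
        });
  }
}
```

The key shift is the same as with Vert.x: the method returns immediately, and success or failure travel down the chain instead of through return values and thrown exceptions.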

Read more

Posted by on 06 January 2022 | Comments (0) | categories: Domino Singapore

Deploying your frontend as webJar


In an API-driven world, back-end and front-end are clearly separated and might live on different servers altogether. However, for smaller applications, serving static files happens from the same place your back-end lives

So many choices

The web server that proxies your application server could have a rule for static files, your firewall could do that, you could use a static directory on your application server, or you could pack your front-end into a JAR, which is the story here. I'm not discussing the merits of the different approaches (that's a story for another time), but describing the workflow and tools for the JAR approach.

vert.x static routes

In Vert.x a static route can be declared with a few lines of code:

Router router = Router.router(vertx);
router.route("/ui/*")
      .handler(StaticHandler.create("uitarget"));

Vert.x will then look for the folder uitarget in its current working directory or on the classpath, so you will need to put your JAR on the classpath

The swagger-ui example

There are lots of prepackaged UI JARs available that are easy to integrate into Vert.x, for example the Swagger UI. Define a dependency in your pom.xml and use a one-liner to access the code:

<dependency>
    <groupId>org.webjars</groupId>
    <artifactId>swagger-ui</artifactId>
    <version>4.1.3</version>
</dependency>
Router router = Router.router(vertx);
router.route("/assets/lib/*").handler(StaticHandler.create("META-INF/resources/webjars"));

Packing your own front-end

Most modern front-ends owe their deployable form to an npm build command. If you are not sure, check the documentation for React, Angular, Lightning, Vue, Ionic or whatever framework you fancy.

There are two plugins for maven that can process front-end work:

  • The Frontend Maven Plugin: Specialized module that handles download of NodeJS and running your NodeJS based build tools. Great when you don't have NodeJS installed anyway
  • The Exec Maven Plugin: Generic plugin to run stuff. Doesn't download NodeJS for you. More work to set up (that's what I picked)

The steps you will need to perform (actually not you, but your mvn package run):

  • run npm install
  • run npm build
  • move files into the target directory structure
  • build the Jar

All of this can be wrapped into your pom.xml. I usually add the front-end as a module to the whole project, so a build is always complete


Read more

Posted by on 27 December 2021 | Comments (0) | categories: Java JavaScript vert.x

Refresh local git repositories


I keep all my software that is under version control below a few directories only. E.g. open-source projects I cloned to learn from live below ~/OpenSource. Keeping up with updates requires pulling them all.

Pulling the main branch

My little helper does:

  • change into each first level sub directory
  • check if it is under version control
  • capture the current branch
  • switch to main or master branch, depending on which one is there
  • capture the name of the tracked remote
  • fetch all remotes
  • pull the tracked remote
  • switch back to the branch it was in

The script does not check whether the current branch is dirty (which would prevent the checkout), nor does it push back any changes. Enjoy

#!/bin/bash
# Pull all repos below the current working directory

do_the_sync() {
  for f in *; do
      if [ -d "$f" -a ! -h "$f" ]; then
         cd -- "$f";
         if [ -d ".git" ]; then
            curBranch=$(git branch --show-current)
            mainBranch=nn
            echo "Working on $f";
            if [ "`git branch --list main`" ]; then
              mainBranch=main
            else
              mainBranch=master
            fi
            remoteBranch=$(git rev-parse --abbrev-ref ${mainBranch}@{upstream})
            IFS='/' read -r remoteSrv string <<< "$remoteBranch"
            echo "working on $mainBranch tracking $remoteSrv"
            git fetch --all
            git pull $remoteSrv
            git checkout $curBranch
         fi
         cd ..
      fi;
  done;
};

do_the_sync
echo "DONE!"

As usual YMMV


Posted by on 23 December 2021 | Comments (0) | categories: GitHub Software

Spotless code with a git hook


When developing software in a team, a source of constant annoyance is code formatting. Each IDE has slightly different ideas about it, not even getting into the tabs vs. spaces debate. Especially annoying in Java land is the import sort order

Automation to the rescue

I switch between editors (if you need to know: Eclipse, Visual Studio Code, OxygenXML, IntelliJ, Sublime, Geany, nano or vi (ESC :!wq)) frequently, so an editor specific solution isn't an option.

Spotless to the rescue. It's a neat project using Maven or Gradle to format pretty (pun intended) much all code types I use. The documentation states:

Spotless can format <antlr | c | c# | c++ | css | flow | graphql | groovy | html | java | javascript | json | jsx | kotlin | less | license headers | markdown | objective-c | protobuf | python | scala | scss | sql | typeScript | vue | yaml | anything> using <gradle | maven | anything>.

Setup

I opted for the Eclipse-defined Java formatting, using almost the Google formatting rules, with the notable exception of not merging line breaks back.

There are three steps involved for the Maven setup:

  • Obtaining the formatting files, outlined here. Just make sure you are happy with the format first
  • Add the maven plugin (see below)
  • Add a git hook (see below)

pom.xml

This is what I added to my pom.xml. By default Spotless would run check only, so I added the apply goal to enforce the formatting

<properties>
   <spotless.version>2.4.1</spotless.version>
</properties>

<build>
    <plugins>
        <plugin>
            <groupId>com.diffplug.spotless</groupId>
            <artifactId>spotless-maven-plugin</artifactId>
            <version>${spotless.version}</version>
            <executions>
               <execution>
                 <goals>
                   <goal>apply</goal>
                 </goals>
               </execution>
            </executions>
            <configuration>
                <formats>
                    <format>
                        <!-- Markdown, JSON and gitignore -->
                        <includes>
                            <include>*.md</include>
                            <include>*.json</include>
                            <include>.gitignore</include>
                        </includes>
                        <trimTrailingWhitespace />
                        <endWithNewline />
                        <indent>
                            <spaces>true</spaces>
                            <spacesPerTab>2</spacesPerTab>
                        </indent>
                    </format>
                </formats>
                <!-- ECLIPSE Java format -->
                <java>
                    <toggleOffOn />
                    <importOrder>
                        <file>${maven.multiModuleProjectDirectory}/spotless.importorder</file>
                    </importOrder>
                    <removeUnusedImports />
                    <eclipse>
                        <file>${maven.multiModuleProjectDirectory}/eclipse-java-keep-style.xml</file>
                    </eclipse>
                </java>
            </configuration>
        </plugin>
    </plugins>
</build>

A few remarks:

  • I run apply rather than check
  • the directory variable ${maven.multiModuleProjectDirectory} is needed, so sub projects work
  • you want to extend the configuration to include JS/TS eventually

.git/hooks/pre-commit

Create or edit your [projectroot]/.git/hooks/pre-commit file:

#!/bin/bash
# Run formatting on pre-commit
files=`git status --porcelain | cut -c 4-`
fulllist=''
for f in $files; do
    fulllist+="(.*)$(basename $f)"$'\n'
done;
list=`echo "${fulllist}" | paste -s -d, /dev/stdin`
echo Working on $list
# Activate Java 11
export JAVA_HOME=`/usr/libexec/java_home -v 11.0`
/usr/local/bin/mvn spotless:apply -Dspotless.check.skip=false -DspotlessFiles=$list

  • You might not need the line with Java
  • swap apply for check when you just want to check

As usual YMMV


Posted by on 10 December 2021 | Comments (0) | categories: GitHub Java Software

Factory based dependency injection


No man is an island, and no code you write lives without dependencies (even your low-level assembly code depends on the processor's microcode). Testing (with) dependencies can be [insert expletive]

Dependency injection to the rescue

The general approach to making dependent code testable is dependency injection. Instead of calling out and creating an instance of the dependency, the dependency is handed over as a parameter. This could be in a constructor, a property setter or as a method parameter.

A key requirement for successful dependency injection: the injected object gets injected as an Interface rather than a concrete class. So do your homework and build your apps around interfaces.

An example to illustrate how not to do it, and how to change it:

public Optional<Customer> findCustomer(final String id) {
 // Some processing here, omitted for clarity

 // actual find
 final CustomerDbFind find = CustomerDb.getFinder();
 return Optional.ofNullable(find.customerById(id));

}

When you try to test this function, you depend on the static method of CustomerDb, which is a pain to mock out. So one consideration could be to hand over the CustomerDb as a dependency. But this would violate "provide interfaces, not classes". The conclusion, presuming CustomerDbFind is an interface, will be:

public Optional<Customer> findCustomer(final CustomerDbFind find, final String id) {
 // Some processing here, omitted for clarity

 // actual find

 return Optional.ofNullable(find.customerById(id));

}

This now allows constructing the dependency outside the method under test, by implementing the interface or using a mock library
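Since CustomerDbFind here is an interface with (presumably) a single method, the test double can be just a lambda, no mock library required. A self-contained sketch (the Customer class and the interface shape are my own stand-ins, the article only names them):

```java
import java.util.Optional;

public class FindCustomerSketch {

  // Assumed shapes; the article only names the interface
  interface CustomerDbFind {
    Customer customerById(String id);
  }

  static final class Customer {
    final String id;
    Customer(final String id) {
      this.id = id;
    }
  }

  static Optional<Customer> findCustomer(final CustomerDbFind find, final String id) {
    return Optional.ofNullable(find.customerById(id));
  }

  public static void main(String[] args) {
    // The whole "mock": a lambda that knows exactly one customer
    final CustomerDbFind fakeDb = id -> "42".equals(id) ? new Customer("42") : null;

    System.out.println(findCustomer(fakeDb, "42").isPresent());   // true
    System.out.println(findCustomer(fakeDb, "nope").isPresent()); // false
  }
}
```

The lambda works because the injected type is an interface; had we injected the concrete CustomerDb class, the test would need the real database or heavyweight mocking.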

Not so fast


Read more

Posted by on 09 December 2021 | Comments (0) | categories: Domino Java

Java Streams filters with side effects


Once you get used to stream programming and the pattern of create, select, manipulate and collect, your code will never look the same

Putting side effects to good (?) use

The pure teachings tell us that filters should select objects for processing and not have any side effects or do processing on their own. But ignoring the teachings can produce cleaner code (I probably will roast in debug hell for this). Let's look at an example:

final Collection<MyNotification> notifications = getNotifications();
final Iterator<MyNotification> iter = notifications.iterator();

while(iter.hasNext()) {
  MyNotification n = iter.next();

  if (n.priority == Priority.high) {
    sendHighPriority(n);
  } else if (n.groupNotification) {
    sendGroupNotification(n);
  } else if (n.special && n.delay <= 30) {
    sendSpecial(n);
  } else if (!n.special) {
    sendStandard(n);
  } else {
    reportWrongNotification(n);
  }
}

This gets messy very fast, and all selection logic is confined to the if conditions in one function (which initially looks like a good idea). How about rewriting the code Stream style? It will be more boilerplate, but with better segregation:

final Stream<MyNotification> notifications = getNotifications();

notifications
  .filter(this::highPriority)
  .filter(this::groupSend)
  .filter(this::specialNoDelay)
  .filter(this::standard)
  .forEach(this::reportWrongNotification);

The filter functions would look like this:

boolean highPriority(final MyNotification n) {
  if (n.priority == Priority.high) {
    sendHighPriority(n);
    return false; // No further processing required
  }
  return true; // Further processing required
}

boolean groupSend(final MyNotification n) {
  if (n.groupNotification) {
    sendGroupNotification(n);
    return false; // No further processing required
  }
  return true; // Further processing required
}

You get the idea. With proper JavaDoc method headers, this code looks more maintainable.
We can push this a little further (as explored on Stack Overflow). Imagine the number of process steps might vary and you don't want to update that code for every variation. You could do something like this:

final Stream<MyNotification> notifications = getNotifications();
final Stream<Predicate<MyNotification>> filters = getFilters();

notifications
  .filter(filters.reduce(f -> true, Predicate::and))
  .forEach(this::reportWrongNotification);
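To see the reduce trick in isolation, here is a self-contained toy version with strings standing in for notifications (all names are made up for illustration):

```java
import java.util.function.Predicate;
import java.util.stream.Stream;

public class PredicateReduce {

  // Fold any number of filters into one; Predicate::and short-circuits
  // exactly like a chain of .filter() calls would
  static Predicate<String> combine(final Stream<Predicate<String>> filters) {
    return filters.reduce(f -> true, Predicate::and);
  }

  public static void main(String[] args) {
    // Two toy "filters with side effects": each handles its kind and stops the chain
    final Predicate<String> high = s -> {
      if (s.startsWith("HIGH")) {
        System.out.println("sent high priority: " + s);
        return false; // handled, no further processing
      }
      return true;
    };
    final Predicate<String> group = s -> {
      if (s.startsWith("GROUP")) {
        System.out.println("sent to group: " + s);
        return false;
      }
      return true;
    };

    Stream.of("HIGH:reactor", "GROUP:ops", "UNKNOWN:x")
        .filter(combine(Stream.of(high, group)))
        .forEach(s -> System.out.println("fell through: " + s));
  }
}
```

Because Predicate::and short-circuits, a filter that handled its notification (returning false) stops the later filters from running, preserving the semantics of the chained .filter() version.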

As usual YMMV


Posted by on 22 October 2021 | Comments (1) | categories: Java

Streaming CouchDB data


I'm a confessed fan of CouchDB, stream programming and the official CouchDB NodeJS library, Nano. Nano supports returning data as a NodeJS Stream, so you can pipe it away. Most examples use file streams or process.stdout, while my goal was to process individual documents that are part of the stream

You can't walk into the same stream a second time

This old Buddhist saying holds true for NodeJS streams too. So any processing needs to happen in the chain of the stream. Let's start with the simple example of reading all documents from a CouchDB database:

const Nano = require("nano");
const nano = Nano(couchDBURL);
nano.listAsStream({ include_docs: true }).pipe(process.stdout);

This little snippet will read all documents in your CouchDB database. You need to supply the couchDBURL value, e.g. http://localhost:5984/test. On closer look, we see that the returned data arrives in continuous buffers that don't match JSON document boundaries, so processing one document after the other needs extra work.

A blog entry on the StrongLoop blog provides the first clue what to do. To process CouchDB stream data we need both a Transform stream to chop the incoming data into lines and a Writable stream for our results.

Our code will finally look like this:

const Nano = require("nano");
const { Writable, Transform } = require("stream");

const streamOneDb = (couchDBURL, resultCallback) => {
  const nano = Nano(couchDBURL);
  nano
    .listAsStream({ include_docs: true })
    .on("error", (e) => console.error("error", e))
    .pipe(lineSplitter())
    .pipe(jsonMaker())
    .pipe(documentWriter(resultCallback));
};

Let's have a closer look at the new functions; the first two are Transform streams, the last one a Writable stream:

  • lineSplitter, as the name implies, cuts the buffer into separate lines for processing. As far as I could tell, CouchDB documents are always returned on one line
  • jsonMaker, extracts the documents and discards the wrapper with document count that surrounds them
  • documentWriter, writing out the JSON object using a callback

Read more

Posted by on 16 October 2021 | Comments (1) | categories: CouchDB NodeJS