wissel.net

Usability - Productivity - Business - The web - Singapore & Twins

Reading Resources from JAR Files


One interesting challenge I encountered is the need to make a Java application extensible by providing additional classes and configuration. Ideally, extension happens by dropping a properly crafted JAR file into a specified location and restarting the server. Along the way I learned a few things about Java's classpath. This is what is to be shared here.

Act one: onto the classpath

When you start off with Java, you would expect that you can simply set the classpath variable, either using an environment variable or the java -cp parameter. Then you learn the hard way that java -jar and java -cp are mutually exclusive. After a short flirt with fatJAR, you end up with a directory structure like this:

Directory Structure
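The screenshot in the original post shows the layout; roughly, with illustrative file names, it looks like this:

myApp/
├── myApp.jar    (main application, its manifest puts libs/ onto the classpath)
└── libs/
    ├── extension-one.jar
    └── extension-two.jar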

The secret ingredient to make this work is the manifest file inside myApp.jar. It needs to instruct the JVM to put all JAR files in libs onto the classpath too. In Maven, it looks like this:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <version>${maven.jar.plugin.version}</version>
    <configuration>
        <archive>
            <manifest>
                <mainClass>com.hcl.domino.keep.Launch</mainClass>
            </manifest>
            <manifestEntries>
                <!-- Class-Path is a single, space separated attribute;
                     duplicate keys would silently overwrite each other -->
                <Class-Path>. libs/*</Class-Path>
            </manifestEntries>
        </archive>
    </configuration>
</plugin>
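The generated META-INF/MANIFEST.MF inside myApp.jar then carries entries along these lines:

Main-Class: com.hcl.domino.keep.Launch
Class-Path: . libs/*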

Now that all JARs are available on the classpath, we can try to retrieve resources from them.
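The retrieval details follow in the full post; as a teaser, here is a minimal sketch of pulling a resource off the classpath via the context class loader (the resource name is illustrative, readAllBytes() needs Java 9+):

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ResourceDemo {

    // Reads a text resource from any JAR on the classpath
    static String readResource(final String name) throws IOException {
        final ClassLoader loader = Thread.currentThread().getContextClassLoader();
        try (InputStream in = loader.getResourceAsStream(name)) {
            if (in == null) {
                throw new IOException("Resource not found: " + name);
            }
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(final String[] args) throws IOException {
        System.out.println(readResource("config/extension.properties"));
    }
}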


Read more

Posted by on 29 April 2021 | Comments (0) | categories: Java

Collecting Java Streams


I wrote about Java Streams before, sharing how they work for me and how, in conjunction with Java's functional interfaces, they enable us to write clean(er) code. I'd like to revisit my learnings, with some focus on the final step: what happens at the tail end of a stream operation.

Four activities

There are four activities around Java Streams:

  • Create: There are numerous possibilities to create a stream. The most prevalent, I found, is Collection.stream(), which returns a stream of anything in Java's collection framework: Collections, Lists, Sets etc.
    There are more possibilities provided by the Stream interface, the Stream.Builder interface, the StreamSupport utility class or Java NIO's Files (and probably some more)

  • Select: You can filter(), skip(), limit(), concat(), distinct() or sorted(). None of those methods changes individual stream members; they determine which elements will be processed further. Selection and manipulation can happen multiple times, one after the other

  • Manipulate: Replace each member of the stream with something else. That "something" can be the member itself with altered content. Fluent methods fit in nicely here (like stream().map(customer -> customer.setStatus(newStatus))). We use map() and flatMap() for this step. While it is perfectly fine to use Lambda Expressions, consider moving the Lambda body into its own function to improve readability and debugging

  • Collect: You can "collect" a stream only once; after that it becomes inaccessible. The closest thing to classic loops here is the forEach() method, which lets you operate on the members as you are used to from the Java Collection framework.
    Next are the convenience methods: count(), findAny(), findFirst(), toArray() and finally reduce() and collect().
    A typical way to use collect() is in conjunction with the Collectors utility class, which provides the most commonly needed implementations like toSet(), toList(), joining() or groupingBy() (a quick sketch follows below). Check the JavaDoc, there are 37 methods at your disposal.
    However, sometimes your code has different needs; that's where custom collectors shine
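A quick sketch of the common case, using one of those ready-made Collectors (the data is made up for illustration):

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CollectDemo {
    public static void main(final String[] args) {
        final List<String> names = List.of("Anna", "Anton", "Bert", "Clara");
        // groupingBy: bucket the names by their first letter
        final Map<Character, List<String>> byInitial = names.stream()
                .collect(Collectors.groupingBy(name -> name.charAt(0)));
        System.out.println(byInitial); // {A=[Anna, Anton], B=[Bert], C=[Clara]}
    }
}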


Read more

Posted by on 01 January 2021 | Comments (0) | categories: Java

What constitutes "good" (software) documentation?


Software documentation is a sticky issue and tends to escalate rather quickly into opinion matches, with fights over what is needed, what is missing and what should be different.

Looking at it closer, I see three main dimensions:

  • audience
  • content
  • format

Each documentation artefact has a sweet spot in this cube as well as no-go zones (e.g. a business user watching a live coding recording?).

Any individual can and will fit into one or more audiences, just as each artefact will fit into one or more content categories. So good navigation and cross references are essential.

Documentation MindMap


Read more

Posted by on 29 December 2020 | Comments (0) | categories: Software

Software distribution


Just download the binaries and run the installer. Would you need anything else for software distribution?

The rise of the AppStore

Mobile devices showed us the trend. Your Android device will load new apps from Google Play or Huawei's AppGallery (or any of the alternatives). On iOS, padOS, watchOS or tvOS, it is Apple's AppStore.

In the middle tier, Windows and macOS, the success is a mixed bag. Despite all attempts (Apple, I'm looking at you), the bulk of apps are still "download and install", so each app has to implement its own update check (unless you use Homebrew with its heritage in Linux).

In the enterprise, this "poll" approach is supplemented or supplanted by a push approach using tools like Jamf (Mac only) or BigFix (cross-platform).

Servers and components

On Windows there is Windows Server Update Services (WSUS), which keeps your servers neat and updated and can update third-party software too.

On Linux, package managers have been established for quite a while, and you find most software in rpm, deb or Snap format. Local package managers can be set up to install packages automatically based on criteria (e.g. critical updates).

For Docker there is Docker Hub, for Java-based packages Maven Central, and for JavaScript-based applications the npm registry.

In enterprise environments, you will find Artifactory or Nexus Repository, as well as cloud-based solutions like Azure Artifacts, AWS CodeArtifact or DigitalOcean Container Registry.

Your application?

With the ready availability of repositories (app stores are nothing more than repositories with a UI and billing), what does that demand from your app?

In short: make it easy to acquire and update your code.

  • on mobile: swallow the frog and publish to an app store
  • on desktops: if your main customers are individuals, the app store might save you the headache of a good update strategy. When companies are your target: make it Jamf / WSUS friendly
  • on servers: a package repository, including the ability to deploy to a corporate repository, is a must. This applies to updates too
  • components: you need a registry. Should you consider none of the established public ones suitable, provide your own

Of course, you can consider retro charming and be left behind.

As usual YMMV


Posted by on 14 December 2020 | Comments (2) | categories: Software

Schema mapping with Java functional interfaces


Mapping one data structure into another has been a never-ending chore since COBOL's MOVE CORRESPONDING. One-to-one mappings are trivial; once computation is needed, clean code can become messy really fast.

Map with Functions

We will use the following, simplified, scenario with source and target formats:

{
	"FirstName" : "Peter",
	"LastName" : "Pan",
	"DoB" : "1873-11-23",
	"ToC" : "accepted",
	"Marketing" : "no"
}

Desired Result:

{
	"fullname" : "Peter Pan",
	"birthday" : "1873-11-23",
	"gdpr" : true
}

In our case, only DoB has a simple mapping to birthday; all others need computation or are dropped. To keep the code clean, we will use a map of mapping functions, so each computation can live in its own method. The default 1:1 and drop functions get defined first.

final Map<String, Function<JsonObject, JsonObject>> functionList = new HashMap<>();

// Default 1:1 mapping: copy a single field, eventually renaming it
Function<JsonObject, JsonObject> simple(final String fieldNameIn, final String fieldNameOut) {
	return in -> new JsonObject().put(fieldNameOut, in.getValue(fieldNameIn));
}

// Default drop: contribute nothing to the result
Function<JsonObject, JsonObject> drop() {
	return in -> new JsonObject();
}
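Populating the map might look like this; computeFullname() and computeGdpr() are hypothetical helpers following the same Function<JsonObject, JsonObject> pattern. Since every function receives the complete source object, a single function can combine several incoming fields:

this.functionList.put("DoB", simple("DoB", "birthday"));
this.functionList.put("FirstName", this::computeFullname); // reads FirstName and LastName
this.functionList.put("ToC", this::computeGdpr);           // combines ToC and Marketing into gdpr
// LastName and Marketing have no entry, the getOrDefault() below hands them to drop()

JsonObject computeFullname(final JsonObject in) {
	return new JsonObject().put("fullname",
			in.getString("FirstName") + " " + in.getString("LastName"));
}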

Each mapping function returns a JSON object that only contains the value for the one field it gets called for. We will use a collector to aggregate the values. Since we are planning to use streams and functional interfaces, we need a little helper class.

class MapHelper {
	JsonObject source;
	Function<JsonObject, JsonObject> mapper;

	JsonObject apply() {
		return this.mapper.apply(this.source);
	}
}

MapHelper getMapHelper(final JsonObject source, final Map.Entry<String, Object> incoming) {
	MapHelper result = new MapHelper();
	result.source = source;
	// Fields without a registered function fall back to drop()
	result.mapper = this.functionList.getOrDefault(incoming.getKey(), drop());
	return result;
}

Since each function returns a fragment of JSON that needs to be merged into one result, we use a Java Collector to accumulate the values.
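The actual collector follows after the break; a minimal sketch, assuming vert.x's JsonObject with its mergeIn() method and an import of java.util.stream.Collector, could look like this:

JsonObject transform(final JsonObject source) {
	return source.stream()
			.map(entry -> this.getMapHelper(source, entry))
			.collect(Collector.of(
					JsonObject::new,                                // start with an empty result
					(json, helper) -> json.mergeIn(helper.apply()), // merge in each field's output
					JsonObject::mergeIn));                          // combiner for parallel streams
}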


Read more

Posted by on 28 November 2020 | Comments (0) | categories: Java

Streams and Functional programming in Java


I'm late to the party embracing Streams and functional interfaces in Java. Using them for a while taught me their beauty and how nicely things fit together.

Moving parts

  • At the beginning, a class implementing the Stream interface emits items that can be manipulated using map and filter operations
  • The map and filter operations are supported by the interfaces in java.util.function (we get to the samples later)
  • At the end, the result gets "collected", in its simplest form using .forEach or, more sophisticated, using a Collector with its many ready-baked options

What's the big deal?

Short answer: clean, terse and clutter-free code.

Long answer: an example. Let's say you have a Mammal class which gets subclassed by Cat and Dog (and others). You have a collection of these mammals and need to extract all dogs with a weight over 50. Weight is not a property of Mammal. There might be null values in your collection. Classic code would look like this:

List<Dog> getHeavyDogs(final List<Mammal> mammals) {
    List<Dog> result = new ArrayList<>();
    for (int i = 0; i < mammals.size(); i++) {
      Mammal mammal = mammals.get(i);
      if (mammal != null) {
        if (mammal instanceof Dog && ((Dog) mammal).weight() > 50) {
          result.add((Dog) mammal);
        }
      }
    }
    return result;
  }

We've all seen this type of code. In a functional and stream style this looks different. We have a little duck typing going on here: when a method looks like a functional interface, it can be used as that function. E.g. a method that takes one value and returns a boolean can be used as a Predicate, which comes in handy for filter operations. Another nifty syntax: you can address methods, both static and instance, using the :: (double colon) syntax. So where you could use a lambda x -> this.doSomething(x), you can simply write this::doSomething and the compiler will sort it out (System.out::println anyone?)
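The full version follows after the break; a sketch of how it might look, using exactly those method references (plus imports of java.util.Objects and java.util.stream.Collectors):

List<Dog> getHeavyDogs(final List<Mammal> mammals) {
    return mammals.stream()
        .filter(Objects::nonNull)          // no null values, please
        .filter(Dog.class::isInstance)     // dogs only
        .map(Dog.class::cast)
        .filter(dog -> dog.weight() > 50)  // the actual business condition
        .collect(Collectors.toList());
}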


Read more

Posted by on 06 November 2020 | Comments (2) | categories: Java

Deploying your static app to your backend repo using GitHub Actions


Our solution has two parts: a backend written in JavaScript providing the API, and a front-end created in Angular, Ionic, React or whatever is the flavor of the day. Usually you would deploy a web server to handle the URL, host the static files and have it redirect the /api URL to the backend.

However, there might be reasons why we can't or don't want to access the web server, and so we need to serve our front-end app from the /static directory of our backend.

Planning and a little YAML

Merging the two repositories initially seems an easy option, but it would break our workflows, so a different solution needs to be devised. The ask is simple:

Merging UI files into the back-end

Whenever a change happens in the main branch of the front-end application (mostly through an approved pull request), the application should be built and the result transferred to the back-end repository, where a pull request merges it into main. Duplicate approvals shall be avoided. So we need:

  1. Automatic build on push to main
  2. Pull / Push the bundle changes from front-end to the back-end
  3. Create a pull request and merge it in back-end

Read more

Posted by on 04 October 2020 | Comments (0) | categories: GitHub NodeJS NodeRED

Architectural Decisions


"Architecture represents the significant design decisions that shape a system,
where significant is measured by cost of change.
"

-- Grady Booch

In real architecture it is obvious: once the foundation is completed and the stories of your building rise, there's no way to swap out the foundation without demolition and starting over.

In software architecture it is less obvious, but it is similarly important not to send in the demolition crew halfway into delivery.

While in construction your demolition crew can easily be identified by hard hats, orange vests and heavy machinery, your software demolition crew often comes disguised as concerned stakeholders questioning fundamental decisions over and over (out of a variety of motives). So it is paramount to document your architectural decisions well.

Decisions, Justification, Alternatives, Impact

Our typical architecture decision documentation starts, duh, with the table of contents (unless that sits in a sidebar) and an overview of your architecture. One or more diagrams providing an overview are well placed here.

Now number the decisions, so they can be referred to by their short form (e.g. AD45) rather than their full title. For larger or delicate systems, you want to place each decision on its own page, not least to be able to extract one (as PDF) for customer sign-off. While it is tempting to use a word processor, I'd argue for an engineering format like Markdown or, when complexity justifies it, DITA. Document format and storage could be considered primordial architectural decisions.

Each decision needs to provide four elements:

  1. Decision
    What have you decided? A factual statement, something along the lines of "Our chosen programming language is COBOL"
  2. Justification
    Outline why. It should cover features, cost and skills. You don't need to state why the alternatives failed in your considerations
  3. Alternatives
    What have you looked at, and what made you reject the alternative possibilities? We need to be careful: analysis paralysis lurks here. There is always another framework, language or database you could consider. This is also the area where our "friendly" demolition crew will try to stall us
  4. Impact
    Serves as a reinforcement of Justification, but extends, where appropriate, on potential risks and their mitigation. It is an important section; nevertheless, our "Reichsbedenkenträger" (loosely translated as "Imperial wardens of concern") lurk here. So stay concise and to the point. You don't write a PhD thesis here

Depending on the impact of the system (malfunction threatens lives, threatens assets, or merely requires hitting reload in the browser), you need to spend more or less time on it. For a refresher on these concepts, have a look at Crystal Clear, page xvi in the preface.


Read more

Posted by on 07 September 2020 | Comments (0) | categories: Software

Domino Docker and Debugging


Given that Domino was once built to run on 486-class servers, Docker and Domino are poised to be a match made in heaven (eventually). Jesse shared his Weekend Domino-Apps-in-Docker Experimentation, Paul shared his learning points and Daniel provided the invaluable Domino on Docker build scripts. So it's time to contribute my share. The topic is slightly more exotic:

Debug a Java application running on Domino in a Docker container

Before we can get cooking, we need to know our ingredients:

Our objective: create a Domino image that loads the Java application from its host file system, so we do not need to rebuild the container on Java changes. An instance of this image shall allow a debugger to connect to that Java application.

Foundation: the Domino image

First we have to build a Domino Docker image and configure a server using a Docker volume. This has been mapped out in the domino-docker project and its slightly hidden documentation. Just a quick recap:

  • Build the image using ./build domino
  • Create a volume using docker volume create keep_data
  • Run the instance once to set up the Domino server:
docker run -it -e "ServerName=Server1" \
    -e "OrganizationName=MyOrg" \
    -e "AdminFirstName=Doctor" \
    -e "AdminLastName=Notes" \
    -e "AdminPassword=passw0rd" \
    -h myserver.domino.local \
    -p 80:80 \
    -p 1352:1352 \
    -v keep_data:/local/notesdata \
    --stop-timeout=60 \
    --name server1 \
    hclcom/domino:11.0.1

Shut down the instance once you have confirmed it works. We don't need it running thereafter; we only need the volume and the image. Of course, there's no harm in keeping it around.
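The debugger wiring follows after the break. The key ingredient, sketched here as an assumption based on standard JDWP usage (the JAR name is illustrative, the address=* syntax needs Java 9+), is launching the Java application with a debug agent and publishing the port, e.g. by adding -p 8000:8000 to the docker run command:

java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000 -jar myApp.jar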


Read more

Posted by on 30 June 2020 | Comments (1) | categories: Docker Domino HCL Notes

Watching the EventBus


I'm quite fond of event-driven architecture, so, to no surprise, I like vert.x's EventBus and its ability to enable polyglot programming. So it is time to have a closer look.

Dem Volk aufs Maul geschaut

(That's a word play on Martin Luther, loosely translated as "Watch them how they talk")

I wanted to know what exactly happens "on the wire", without disrupting the regular flow. It turns out there is an easy way to do this: the vert.x EventBus provides the methods addOutboundInterceptor and addInboundInterceptor, which let you register a Handler that receives a DeliveryContext.

From there you can get to the Message, or directly to the message's body. So I took it for a spin in conjunction with a WebSocket, which allows me to watch the messages as they flow through:

final HttpServer server = this.vertx.createHttpServer();
server.websocketHandler(this::handlerWebsockets);
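A sketch of the interceptor side; connectedSockets, the collection of open sockets that handlerWebsockets would maintain, is an assumption, while the interceptor API is vert.x's own:

this.vertx.eventBus().addOutboundInterceptor(deliveryContext -> {
    // Peek at the body without altering the message
    final Object body = deliveryContext.message().body();
    this.connectedSockets.forEach(socket -> socket.writeTextMessage(String.valueOf(body)));
    // Crucial: hand the message to the next handler, otherwise delivery stops here
    deliveryContext.next();
});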


Read more

Posted by on 28 April 2020 | Comments (0) | categories: Java vert.x