wissel.net

Usability - Productivity - Business - The web - Singapore & Twins

Streams and Functional programming in Java


I'm late to the party when it comes to embracing Streams and functional interfaces in Java. Using them for a while has taught me their beauty and how nicely things fit together.

Moving parts

  • At the beginning, a class implementing the Stream interface emits items that can be manipulated using map and filter operations
  • The map and filter operations are supported by the interfaces in java.util.function (we get to the samples later)
  • At the end, the result gets "collected": in its simplest form using .forEach or, more sophisticated, using a Collector with many ready-baked options

What's the big deal?

Short answer: clean, terse and clutter-free code.

Long answer: an example. Let's say you have a Mammal class which gets subclassed by Cat and Dog (and others). You have a collection of these mammals and need to extract all dogs with a weight over 50. Weight is not a property of Mammal. There might be null values in your collection. Classic code would look like this:

List<Dog> getHeavyDogs(final List<Mammal> mammals) {
    List<Dog> result = new ArrayList<>();
    for (int i = 0; i < mammals.size(); i++) {
        Mammal mammal = mammals.get(i);
        if (mammal != null) {
            if (mammal instanceof Dog && ((Dog) mammal).weight() > 50) {
                result.add((Dog) mammal);
            }
        }
    }
    return result;
}

We have all seen this type of code. In a functional and stream style this looks quite different. There is a little duck typing going on here: when a method looks like a functional interface, it can be used as that function. E.g. a method that takes one value and returns a boolean can be used as a Predicate, which comes in handy for filter operations. Another nifty syntax: you can address methods, both static and instance, using the :: (double colon) syntax. So where you could use a lambda x -> this.doSomething(x), you can simply write this::doSomething and the compiler will sort it out (System.out::println anyone?)
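
Applied to the example above, a stream version of getHeavyDogs could look like this sketch (assuming the same Mammal and Dog classes, with weight() defined on Dog; Objects comes from java.util and Collectors from java.util.stream):

// needs: import java.util.List; import java.util.Objects; import java.util.stream.Collectors;
List<Dog> getHeavyDogs(final List<Mammal> mammals) {
    return mammals.stream()
        .filter(Objects::nonNull)          // method reference used as Predicate: drop nulls
        .filter(Dog.class::isInstance)     // keep only the dogs
        .map(Dog.class::cast)              // Function<Mammal, Dog>
        .filter(dog -> dog.weight() > 50)  // lambda used as Predicate<Dog>
        .collect(Collectors.toList());     // one of the ready-baked Collectors
}

Every step reads as a statement of intent: drop the nulls, keep the dogs, keep the heavy ones, collect the result.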


Read more

Posted by on 06 November 2020 | Comments (0) | categories: Java

Deploying your static app to your backend repo using GitHub Actions


Our solution has two parts: a back-end written in JavaScript that provides the API, and a front-end created in Angular, Ionic, React or whatever the flavor of the day is. Usually you would deploy a web server to handle the URL, host the static files and have it redirect the /api URL to the back-end.

However, there might be reasons why we can't or don't want to access the web server and need to serve the front-end app from the /static directory of the back-end.

Planning and a little YAML

Merging the two repositories initially seems like an easy option, but it would break our workflows, so a different solution needs to be devised. The ask is simple:

Merging UI files into the back-end

Whenever a change happens in the main branch of the front-end application (mostly through an approved pull request), the application should be built and the result transferred to the back-end application, where a pull request merges it into main. Duplicate approvals shall be avoided. So we need:

  1. Automatic build on push to main
  2. Pull/push the bundle changes from the front-end to the back-end
  3. Create a pull request and merge it in the back-end

Read more

Posted by on 04 October 2020 | Comments (0) | categories: GitHub NodeJS NodeRED

Architectural Decisions


"Architecture represents the significant design decisions that shape a system,
where significant is measured by cost of change.
"

-- Grady Booch

In real architecture it is obvious: once the foundation is completed and the stories of your building rise, there's no way to swap out the foundation without demolishing and starting over.

In software architecture it is less obvious, but it is similarly important not to send in the demolition crew halfway into delivery.

While in construction your demolition crew can easily be identified by hard hats, orange vests and heavy machinery, your software demolition crew often comes disguised as concerned stakeholders questioning fundamental decisions over and over (out of a variety of motives). So it is paramount to document your architectural decisions well.

Decisions, Justification, Alternatives, Impact

Our typical architecture decision documentation starts, duh, with the table of contents (unless that sits in a sidebar) and an overview of the architecture. One or more diagrams providing an overview are well placed here.

Now number the decisions, so they can be referred to by their short form (e.g. AD45) rather than their full title. For larger or delicate systems, you want to place each decision on its own page, not least to be able to extract one (as PDF) for customer sign-off. While it is tempting to use a word processor, I'd argue for an engineering format like Markdown or, when complexity justifies it, DITA. Document format and storage could be considered primordial architectural decisions.

Each decision needs to provide four elements:

  1. Decision
    What you have decided. A factual statement, something along the lines of "Our chosen programming language is COBOL"
  2. Justification
    Outline why. It should cover features, cost and skills. You don't need to state why the alternatives failed in your considerations
  3. Alternatives
    What you have looked at and what made you reject the alternative possibilities. We need to be careful: analysis paralysis lurks here. There is always another framework, language or database you could consider. This is also the area where our "friendly" demolition crew will try to stall us
  4. Impact
    Serves as a reinforcement of the justification, but expands, when appropriate, on potential risks and their mitigation. It is an important section; nevertheless, our "Reichsbedenkenträger" (loosely translated as "imperial wardens of concern") lurk here. So stay concise and to the point. You are not writing a PhD thesis here.

Depending on the impact of the system (a malfunction threatens life, threatens assets or merely requires hitting reload in the browser), you need to spend more or less time on it. For a refresher on these concepts, have a look at Crystal Clear, page xvi in the preface.


Read more

Posted by on 07 September 2020 | Comments (0) | categories: Software

Domino Docker and Debugging


Given that Domino was once built to run on servers with 486 capacity, Docker and Domino are poised to be a match made in heaven (eventually). Jesse shared his Weekend Domino-Apps-in-Docker Experimentation, Paul shared his learning points and Daniel provided the invaluable Domino on Docker build scripts. So it's time to contribute my share. The topic is slightly more exotic:

Debug a Java application running on Domino in a Docker container

Before we can get cooking, we need to know what ingredients we need:

Our objective: create a Domino image that loads the Java application from its host file system, so we do not need to rebuild the container when the Java code changes. An instance of this image shall allow us to connect a debugger to that Java application.

Foundation: the Domino image

First we have to build a Domino Docker image and configure a server using a Docker volume. This has been mapped out in the domino-docker project and its slightly hidden documentation. Just a quick recap:

  • Build the image using ./build domino
  • Create a volume using docker volume create keep_data
  • Run the instance once to set up the Domino server:
docker run -it -e "ServerName=Server1" \
    -e "OrganizationName=MyOrg" \
    -e "AdminFirstName=Doctor" \
    -e "AdminLastName=Notes" \
    -e "AdminPassword=passw0rd" \
    -h myserver.domino.local \
    -p 80:80 \
    -p 1352:1352 \
    -v keep_data:/local/notesdata \
    --stop-timeout=60 \
    --name server1 \
    hclcom/domino:11.0.1

We shut down the instance once we have confirmed it works. We don't need it thereafter; we only need the volume and the image. Of course there's no harm in keeping it around.


Read more

Posted by on 30 June 2020 | Comments (1) | categories: Docker Domino HCL Notes

Watching the EventBus


I'm quite fond of event-driven architecture, so it comes as no surprise that I like vert.x's EventBus and its ability to enable polyglot programming. So it is time to have a closer look.

Dem Volk aufs Maul geschaut

(That's a wordplay on Martin Luther, loosely translated as "Watch how the people talk")

I wanted to know what exactly is happening "on the wire", without disrupting the regular flow. It turns out there is an easy way to do this: the vert.x EventBus provides the methods addOutboundInterceptor and addInboundInterceptor, which accept a Handler that receives a DeliveryContext.

From there you can get to the Message or directly to the message's body. So I took it for a spin in conjunction with a WebSocket. This allows me to watch the messages as they flow through:

final HttpServer server = this.vertx.createHttpServer();
server.websocketHandler(this::handlerWebsockets);
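
A minimal sketch of how such an interceptor could hang together (assuming Vert.x 3.x; the monitorSocket field, the EventBusMonitor verticle and the port are illustrative, not the post's actual code):

import io.vertx.core.AbstractVerticle;
import io.vertx.core.eventbus.DeliveryContext;
import io.vertx.core.http.HttpServer;
import io.vertx.core.http.ServerWebSocket;

public class EventBusMonitor extends AbstractVerticle {

    private ServerWebSocket monitorSocket; // illustrative: the websocket we relay messages to

    @Override
    public void start() {
        final HttpServer server = this.vertx.createHttpServer();
        server.websocketHandler(this::handlerWebsockets).listen(8080);

        // watch messages leaving and entering the EventBus without disturbing them
        this.vertx.eventBus().addOutboundInterceptor(this::relay);
        this.vertx.eventBus().addInboundInterceptor(this::relay);
    }

    private void relay(final DeliveryContext<Object> deliveryContext) {
        final Object body = deliveryContext.body();
        if (this.monitorSocket != null && body != null) {
            this.monitorSocket.writeTextMessage(String.valueOf(body));
        }
        deliveryContext.next(); // essential: hand the message on, otherwise delivery stops here
    }

    private void handlerWebsockets(final ServerWebSocket socket) {
        this.monitorSocket = socket;
        socket.closeHandler(v -> this.monitorSocket = null);
    }
}

The call to deliveryContext.next() is the part that keeps the regular flow intact; the interceptor only observes.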


Read more

Posted by on 28 April 2020 | Comments (0) | categories: Java vert.x

SimpleXMLDoc revisited


It is 2020. JSON is supposed to have won, with a challenger in sight. XML, with its fine distinction between elements and attributes and clear ownership demarcated by namespaces, was supposed to be gone. But OData made it necessary to look again, as did CalDAV.

Out into the OutputStream

The initial version was introduced in the context of XAgents, which mandated an OutputStream. I find that adequate and useful, so I kept it. If you just want a String, a ByteArrayOutputStream will do quite nicely.

Fluent methods

The big change in the revamped version is the addition of a fluent API. Each method call returns the object instance itself, so you can chain your document creation to look modern (and type less).

Namespace and attributes

Originally I thought "simple" meant it would be sufficient to create elements only. But as time goes by one starts to appreciate namespaces and attributes, so I added support for these too. To keep things simple: once we specify a namespace at the beginning of the document, we can simply refer to it by its alias name.

A sample:

    final ByteArrayOutputStream out = new ByteArrayOutputStream();
    final SimpleXMLDoc doc = new SimpleXMLDoc(out);
    doc.addNamespace("X", "https://xmen.org")
    .addNamespace("", "https://whyOhWhy.com/xml")
    .setXmlStyleSheet("somestle.xslt")
    .openElement("Endpoints")
    .openElement(doc.element("X:Endpoint")
          .addAttribute("name", "A Name")
          .addAttribute("url", "http://anywhere/")
          .addAttribute("meta", "meta not metta"))
     .closeElement(1)
     .addSimpleElement("description", "Something useful")
     .closeDocument();
    System.out.println(out.toString());

Key methods

  • addNamespace: adds one namespace and establishes its alias. To keep it simple, namespaces are defined only at the beginning of the document
  • setXmlStyleSheet: same here, it needs to be defined at the beginning - after all, this class streams the result, and the stylesheet declaration belongs at the start of the document
  • openElement: starts a new XML element. When provided with a String, it is an attribute-free element whose name can include the namespace abbreviation. When using doc.element, we can add attributes
  • addSimpleElement: adds an element with its String content and closes it
  • closeElement: writes out a number of closing tags. It deliberately takes the number of tags rather than tag names, so you don't need to track the names you have opened. This ensures that the XML stays valid
  • closeDocument: closes all remaining elements in the correct sequence and closes the document. It can be called only once

Check the full source code for details

As usual YMMV


Posted by on 13 April 2020 | Comments (0) | categories: Java XML

vert.x and CORS


One of the security mechanisms for AJAX calls is CORS (Cross-Origin Resource Sharing), where a server advises a browser whether it may request resources from it when coming from a different domain.

It is then up to the browser to heed that advice. To complicate matters: when the browser wants to POST data (or perform other similar operations), it will go through a preflight request, adding to site latency.

I have to admit, I never fully understood the rationale, since only browsers adhere to CORS; any web server, Postman or curl happily ignores it.

None, One or All, but not Some

There's another problem with CORS: the specification only allows no access, all access (using * as the value for Access-Control-Allow-Origin, with restrictions) or one specific domain, but not a list of domains.

Mozilla writes:

Limiting the possible Access-Control-Allow-Origin values to a set of allowed origins requires code on the server side to check the value of the Origin request header, compare that to a list of allowed origins, and then if the Origin value is in the list, to set the Access-Control-Allow-Origin value to the same value as the Origin value.
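
In vert.x terms, that server-side check could look like this sketch (assuming vertx-web on Java 9+; the class name, the allowedOrigins set and the route setup are illustrative):

import java.util.Set;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.RoutingContext;

public class OriginCheck {

    // illustrative list of origins we are willing to admit
    private static final Set<String> ALLOWED_ORIGINS =
            Set.of("https://app.example.com", "https://admin.example.com");

    public static void register(final Router router) {
        router.route().handler(OriginCheck::handleCors);
    }

    private static void handleCors(final RoutingContext ctx) {
        final String origin = ctx.request().getHeader("Origin");
        if (origin != null && ALLOWED_ORIGINS.contains(origin)) {
            // echo back exactly the one origin that matched
            ctx.response()
               .putHeader("Access-Control-Allow-Origin", origin)
               .putHeader("Vary", "Origin");
        }
        ctx.next(); // let the actual route handlers do their work
    }
}

For the common cases, vertx-web also ships a ready-made CorsHandler, so hand-rolling the check is only needed when you want full control over the origin list.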


Read more

Posted by on 07 April 2020 | Comments (1) | categories: Salesforce Singapore

My Maven starter template


Maven is to Java what npm is to JavaScript. It can be a harsh mistress or your best companion. It depends

Beyond dependencies and builds

Maven removes the need to download and manage your dependencies by hand. Unfortunately it doesn't come with mvn install <packagename> like npm (or I haven't learned that yet), so keeping the pom.xml current is a little PITA. However, once we make peace with it, the power of plugins makes development on auto-pilot a breeze. Some of the things you can do:

  • Generate a project site
  • Generate various reports: code quality, code coverage
  • Run unit tests

Check out the complete list to get an idea. I'm especially fond of the site generation capability. It allows us to keep the documentation in the same repository as the project, so we have one less place to worry about.

We simply add /src/site/ to our project, and content can be created in multiple formats. My favorite is Markdown. Besides my handcrafted pages, I generate reports:

  • Issue management
  • Licenses
  • Plugins
  • Source code location
  • Team
  • JavaDoc
  • PMD and CPD
  • Surefire (Test results) and JaCoCo (Test coverage)

All this involves a bit of boilerplate in the pom.xml, so I keep a template around.


Read more

Posted by on 06 April 2020 | Comments (0) | categories: Java WebDevelopment XML

eMail etiquette - the 60s are calling


With WFH being en vogue these days, not only video conferencing and chat get a boost, but also eMail.
Dating back to the 1960s, we have had six decades to develop etiquette, which seems to be lost on current users, so here we go again.

Addressing

eMail has To, CopyTo (also called CC for Carbon Copy) and BCC (Blind Carbon Copy) as a means of addressing people. They serve distinct purposes:

  • TO: the person (or people) we want to act on our message: do something, reply, etc. A good eMail has only a few names here, ideally one. If we have an ongoing eMail thread that involves multiple actors, we are most likely using the wrong channel and are better off using collaborative software like HCL Connections, HCL Sametime, Slack, Teams or Chatter
  • CC: people we think should be kept in the loop. We don't expect any action or reaction from them. A lot of eMail veterans automatically route those messages to a low-priority place
  • BCC: all recipients here get the message, and the rest of the addressees won't know. I used to call it the "mobbing copy". BCC is especially fun when someone there hits "reply all" and reveals the readership. There are few legitimate uses for this. One is distribution lists (see below), the other is archiving/record keeping. Our external readers don't need to know that your compliance archive has the eMail address compliance@acme.com. If we really want someone outside the visible thread to take note, we forward the message

Subject line

It is like a tweet about our content. The subject needs to justify why the message is worth the time and attention to open it. So "Status", "Report" or "Important" don't cut it. A common practice is to add qualifiers, e.g. opportunity codes or project IDs, at the beginning. Something like [T3453] makes it easier to filter.

The biggest competitor for inbox attention besides the subject is the sender's identity. We probably open a message from someone one or two reporting managers up even with a bad subject line.

Content

Let's keep it crisp and short, ideally below five sentences.

We state:

  • the information we want to provide
  • the exact ask: what action we expect, from whom and by when
  • name the person: "Team, please look into..." doesn't cut it and is an indicator of a broken process

If there is a lot of information, it might be better off living in a wiki, a project place or even a file share. We then provide the newscast overview and a link: Would you like to know more?

There are some interesting cultural differences. In Anglo-Saxon or Eastern cultures we politely address the person and add a whiff of small talk, something along the lines of "hope this finds you well". Germans, Dutch and other Nordics consider this a waste of space and time and consider it the ultimate courtesy to cut to the chase and get to the point.

When we address close co-workers who value efficiency, it is even OK to skip the greeting. We need to tread carefully here: it needs to be clarified beforehand, otherwise it is seen as ultra rude.

Replies

Do we reply to the sender or to all of the addressees together? For many, "reply to all" seems to be the default. This is especially hilarious when a distribution list sneaked onto the addressee list. The rationale here is: the sender wanted to keep all these people in the loop, so I won't break that. For a small group I hit reply-all, for larger ones only reply.

I wish the eMail software would warn you when you blast out a reply to everyone. The Guardian agrees: don't reply-all.

A special mention: cherry-picking replies. We hit reply-all and remove the mailing lists: good. We just remove the project manager we compete with: bad. So we need to be mindful of the ramifications. Other recipients might wonder: why are Jane and Joe no longer in this conversation?

Distribution lists

They firmly belong in BCC; that avoids reply-all armageddon. When we use a private distribution list, we need to make sure it resolves before sending, otherwise people can't reply. However, most likely, that group of people would be better served with a shared channel. A good strategy: we put the list in BCC, write a two-sentence summary and provide a link to the full info. Co-workers who are not into eMail will find it in their [insert the collaborative tool you use].

As usual YMMV


Posted by on 27 March 2020 | Comments (1) | categories: GTD Intercultural

Running Java applications with Notes on macOS


My main work computer is a MacBook running macOS. Thanks to Logitech, the keyboard is just fine. As you know, macOS features neither Domino Designer nor Domino Admin. For recent development I wanted to make sure that my applications can run on the client (I've done that before).

Java, Java on the wall, what's the right path of them all?

(Sorry, Schneewittchen)

In Notes/Domino R9.0.1 FP8 (client FP10) the Java runtime was updated to Java 8, and in R11 it changed to AdoptOpenJDK.

On macOS that led to a particular situation. The JVM packaged with the HCL Notes.app can be found in the path
HCL Notes.app/jre/Contents/Home with bin, lib and lib/ext, as we would expect from a JVM. Suspiciously absent are Notes.jar, websvc.jar and njempcl.jar. They can be located at HCL Notes.app/Contents/MacOS/jvm/lib/ext.

While this isn't an issue for the Notes client, it is an obstacle when you try to run an external JAR file: java -jar somejar.jar ignores any classpath setting outside the JVM and only loads resources from the default JVM extension path (lib/ext).

I suspect the separation was necessary due to the AdoptOpenJDK distribution rules.

To solve this, we can use a start script that creates symbolic links in the right place. Taking the usual suspects like DYLD_LIBRARY_PATH and LD_LIBRARY_PATH into account, we end up with a script like this:

#!/bin/bash
# MacOS Keep Starter file
# Keep locations - update as needed - leave the TLS stuff empty if you don't have it
export KEEPJAR=$HOME/keep/projectkeep.jar
export LOG_DIR=$HOME/keep/logs
export TLSFile=$HOME/keep/private/demoserver.projectkeep.io.pfx
export TLSPassword=supersecret

# Don't change anything below unless you are sure what you are doing
# Java files places unfortunately troublesome, so we link some
cd /Applications/HCL\ Notes.app/jre/Contents/Home/lib/ext
export SRCDIR="../../../../../Contents/MacOS/jvm/lib/ext"
if [ ! -f njempcl.jar ]; then
	ln -s $SRCDIR/njempcl.jar .
    echo "Linked njempcl.jar"
fi
if [ ! -f Notes.jar ]; then
	ln -s $SRCDIR/Notes.jar .
    echo "Linked Notes.jar"
fi
if [ ! -f websvc.jar ]; then
	ln -s $SRCDIR/websvc.jar .
    echo "Linked websvc.jar"
fi

# Local Keep Server
export DEBUG=true
export PATH=/Applications/HCL\ Notes.app/Contents/MacOS:$PATH
export JAVA_HOME=/Applications/HCL\ Notes.app/jre/Contents/Home
export GodMode=true
export DYLD_LIBRARY_PATH=/Applications/HCL\ Notes.app/Contents/MacOS
export LD_LIBRARY_PATH=/Applications/HCL\ Notes.app/Contents/MacOS
echo $LD_LIBRARY_PATH ..
cd $HOME/Library/Application\ Support/HCL\ Notes\ Data
/Applications/HCL\ Notes.app/jre/Contents/Home/bin/java -jar $KEEPJAR
cd ~
echo Done!

This script presumes that we have admin permissions. We could contemplate checking for existing symbolic links and removing the ones we set. I decided that smells too much like YAGNI.

As usual: YMMV


Posted by on 17 March 2020 | Comments (0) | categories: HCL Notes Java macOS