wissel.net

Usability - Productivity - Business - The web - Singapore & Twins

By Date: February 2016

Now we are token - Authorization using JSON Web Token in Domino


After having Vert.x and Domino co-exist, the door opens for a few interesting applications of the newfound capabilities. One sticky point in every application landscape is authentication and authorization. This installment is about authorization.
The typical flow:
  1. you access a web resource
  2. provide some identity mechanism (in the simplest case: username and password)
  3. in exchange get some proof of identity
  4. that allows you to access protected resources.
In Basic authentication you have to provide that proof every time, in the form of an encoded username/password header. Since that limits you to username and password, all other mechanisms provide you, in return for your valid credentials, with a "ticket" (technically a "Bearer Token") that opens access.
I tend to compare this with a movie theater: if you want to enter the room where the movie plays, you need a ticket. The guy checking it is only interested in one thing: is it valid for that show. He doesn't care whether you paid in cash, with a card, got it as a present or won it in a lucky draw. Whether you bought it just now, online or yesterday, he doesn't care either. He cares only: is it valid. The same applies to our web servers.
In the IBM world the standard here is an LTPA token that gets delivered as a cookie. Now cookies (besides being fattening) come with their own little set of troubles and are kind of frowned upon in contemporary web application development.
The current web API token darling is the JSON Web Token (JWT). JWTs are an interesting concept since the data they carry is signed. Be clear: the data is signed, not encrypted, so you need to be careful if you want to store sensitive information (and encrypt that first).
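You can convince yourself with a few lines of Java 8 (a quick sketch; the class name is made up and the token is whatever JWT you have at hand):

JwtPeek (sketch)


import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtPeek {
    public static void main(String[] args) {
        String token = args[0]; // any JWT: header.payload.signature
        String[] parts = token.split("\\.");
        // The payload is merely base64url encoded - anybody can read it
        String payload = new String(
                Base64.getUrlDecoder().decode(parts[1]),
                StandardCharsets.UTF_8);
        System.out.println(payload);
    }
}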

Now how to put that into Domino?

The sequence matches the typical flow:
  1. The user authenticates with credentials
  2. The server creates a JWT
  3. The server stores JWT and credentials in a map, so when the user comes back with the token, the original credentials can be retrieved
  4. The server delivers the JWT to the caller
  5. The caller sends the JWT in a header with all subsequent calls
It isn't rocket science to get that to work.
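A minimal sketch of how that could look in a verticle (assuming the vertx-web and vertx-auth-jwt modules; class, route and keystore names are invented, and API details may differ slightly between vert.x 3.x versions):

TokenVerticle (sketch)


import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.auth.jwt.JWTAuth;
import io.vertx.ext.auth.jwt.JWTOptions;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.handler.BodyHandler;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TokenVerticle extends AbstractVerticle {

    // token -> original credentials, so later calls can be replayed against Domino
    private final Map<String, JsonObject> sessions = new ConcurrentHashMap<>();

    @Override
    public void start() {
        JWTAuth jwt = JWTAuth.create(vertx, new JsonObject()
                .put("keyStore", new JsonObject()
                        .put("type", "jceks")
                        .put("path", "keystore.jceks")
                        .put("password", "secret")));

        Router router = Router.router(vertx);
        router.route().handler(BodyHandler.create());

        // Steps 1, 2 and 4: credentials in, token out
        router.post("/login").handler(ctx -> {
            JsonObject credentials = ctx.getBodyAsJson();
            // here: verify the credentials against Domino before issuing anything
            String token = jwt.generateToken(
                    new JsonObject().put("sub", credentials.getString("username")),
                    new JWTOptions());
            sessions.put(token, credentials); // step 3
            ctx.response().end(new JsonObject().put("token", token).encode());
        });

        // Step 5: the caller presents "Authorization: Bearer <token>"
        router.get("/protected").handler(ctx -> {
            String header = ctx.request().getHeader("Authorization");
            String token = (header != null && header.startsWith("Bearer "))
                    ? header.substring(7) : null;
            JsonObject credentials = (token == null) ? null : sessions.get(token);
            if (credentials == null) {
                ctx.response().setStatusCode(401).end();
            } else {
                // use the stored credentials to act on the user's behalf
                ctx.response().end("Hello " + credentials.getString("username"));
            }
        });

        vertx.createHttpServer().requestHandler(router::accept).listen(8080);
    }
}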

Read more

Posted on 24 February 2016 | Comments (1) | categories: IBM Notes Identity Management JWT vert.x

The Cloud Awakening


It is a decade since Amazon pioneered cloud as a computing model. Buying ready-made applications (SaaS) enabled non-IT people to quickly acquire solutions that IT, starved of budget, skills and business focus, couldn't or didn't want to deliver. Products like Salesforce or Dropbox became household brands.
But the IT departments got a slice of the cloud cake too, in the form of IaaS. For most IT managers IaaS feels like the extension of their virtualization strategy, just running in a different data center. They would still patch operating systems, deploy middleware and design never-to-fail platforms. They are in for an awakening.
Perched in the middle between SaaS and IaaS you find the cloud age's middleware: PaaS. PaaS is a mix that reaches from almost-virtual machines like Docker, to compute platforms like IBM Bluemix runtimes, Amazon Elastic Beanstalk or Google Compute Engine, all the way to the new nano services like AWS Lambda, Google Cloud Functions or IBM OpenWhisk. Without closer inspection a middleware professional would breathe a sigh of relief: middleware is here to stay.
Not so fast! What changed?
There's an old joke that claims the IBM WebSphere architecture lets you build one cluster that runs the whole planet and outlives mankind. So the guiding principles are: provide a platform for everything, never go down. We spent time and again (and budget) on this premise: middleware is always running. Not so in the brave new world of cloud. Instead of one rigid structure that runs and runs, a swarm of light compute units (like WebSphere Liberty) each does one task, and one task can run on a whole swarm of compute. Instead of robust and stable, these systems are resilient, summed up in the catchphrase: fail fast, recover faster.
In a classical middleware environment the failure of a component is considered catastrophic (even if mitigated by a cluster); in a cloud environment that's what's expected. A little bit like a bespoke restaurant that stays closed when the chef is sick vs. a burger joint, where one of the patty flippers not showing up is barely noticeable.
This requires a rethink: middleware instances become standardized, smaller, replaceable and repeatable. Gone are the days where one could spend a week installing a portal (as I had the pleasure a decade ago). The rethink goes further: applications can't be a "do-it-all" in one big fat chunk. First, they can't run on these small instances; second, they take too long to boot; third, they are a nightmare to maintain and extend. The solution is DevOps and microservices. Your compute hits the memory or CPU limit? No problem, all PaaS platforms provide a scale-out. It's fun to watch in test how classically developed software fails in these scenarios: suddenly the Singleton that controls record access isn't so single anymore. It has evil twins on each instance.
You're aiming at 99.xxx availability? The classical approach is to have multi-way clusters (which in the end don't do much if the primary member never goes down). In the PaaS world: have enough instances around. Even if an individual instance has only 90% availability (a catastrophic result in classic middleware), a swarm of runtimes at a moderate member count gets you to your triple digits after the dot. You can't guarantee that Joe will flip the burgers all the time, but you know: someone will be working on any given day.
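To put numbers on that (assuming instances fail independently): n instances at availability a give a combined availability of 1 - (1 - a)^n. At 90% per instance, five instances already yield 1 - 0.1^5 = 99.999% - five nines out of five rather shaky members.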
And that's the cloud awakening: the transition from solid to resilient*, from taking availability for granted to working with what is there - may the howling begin.

* For the record: how many monarchs who had SOLID castles are still in charge? In a complex world resilience is the key to survival.

Posted on 23 February 2016 | Comments (1) | categories: Cloud Computing Container K8S Software

Designing a Web Frontend Development Workflow


On the web, 'you can do anything' extends to how you develop, too. With every possible path open, most developers, me included, lack direction - at least initially. To bring order to the mess I will document considerations and approaches to design a development workflow that makes sense. It will be opinionated, with probably changing opinions along the way.
First I will outline the design goals, then the tools at hand, and finally propose a solution approach.

Design Goals

With the outcome in mind it becomes easier to discover the steps. The design goals don't have equal weight, and depending on your preferences you might add or remove some of them:
  1. Designed for the professional
    The flow needs to make life easier when you know what you are doing. It doesn't need to try to hide any complexity away. It shall take the dull parts away from the developer, so (s)he can focus on functionality and code
  2. Easy to get started
    Some scaffolding shall allow the developer to have an instant framework in place that can be modified, adjusted and extended. That might be a scaffolding tool, a clonable project or a zip file
  3. Convention over configuration
    A team member or a new maintainer needs to be able to switch between different projects and 'feel at home' in all of them. This requires (directory) structures, procedural steps and conventions to be universal. A simple example: how do you start your application locally? Will it be npm start or gulp serve or grunt serve or nodemon or something else?
  4. Suitable for team development
    Both the product code and the build script need to be modular. When one developer is adding a route and the needed functionality for module A, it must not conflict with code another developer writes for module B. This rules out central routing files and the manual addition of CSS and JS files
  5. Structured by function, not code type
    A lot of the examples out there put templates in one directory, controllers in another and directives in yet another. A better way is to group them by module, so files live together in a single location (strongly influenced by a style guide); see the sample layout after this list
  6. Suitable for build automation
    Strongly influenced by Bluemix Build & Deploy, I grew fond of: check code into the respective branch (using git-flow) and magically the running version appears on the dev, uat or production site. When using a Jenkins based approach, that means the build script needs to be self-contained (short of having to install node/npm) and can't rely on tools in the global path
  7. React to changes
    A no-brainer: when editing a file, be it in an editor or an IDE, the browser needs to reload the UI. Depending on the file (e.g. less or TypeScript) a compile step needs to happen. Bonus track: newly appearing files are handled too
  8. One touch extensibility
    When creating a new module or adding a new dependency, there must not be a need for a "secondary" action, like adding the JS or CSS definition to the index.html or manually registering it in a central route file to make it known
  9. Testable
    The build flow needs to have provisions to run unit tests, integration tests, code coverage reports, jshint, jslint, trace analysis etc. "Oh, it does work" isn't enough: code needs to pass tests and style conventions. The tests need to be able to run in the build automation too
  10. Extensible and maintainable
    Basing a workflow on a looooong build script turns easily into a maintenance task from hell. A collection of chainable tasks/modules/files can keep that in check
  11. Minimalistic (on the output)
    Keep the network out of the user experience. A good workflow minimizes both the number of calls and the size of the HTTP transmission. While in development all modules need to be nicely separated, in production I want as few CSS, HTML and JS files as possible. So once the UI is loaded, any calls on the network are limited to application data, not application logic or layout
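As an illustration of goal #5, a by-module layout in the spirit of that style guide could look like this (the module names are, of course, invented):

src/
  app/
    mail/
      mail.module.js
      mail.route.js
      mail.controller.js
      mail.html
      mail.less
    calendar/
      calendar.module.js
      calendar.route.js
      calendar.controller.js
      calendar.html
      calendar.less
  index.html

Everything the mail module needs lives in one place; deleting the directory removes the module without leaving orphans all over the project.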
A very good starting point is John Papa's Yeoman generator HotTowel. It's not perfect: the layout overwrites the index.html on each new dependency/module, violating goal #4, and it depends on outdated gulp modules - see goal #10 (I had some fun when I tried to swap gulp-minify-css, as recommended, for gulp-cssnano, and thereafter font-awesome wouldn't load since the minified CSS had a line comment in front of the font definition). I also don't particularly care (for my use case) to have the server component in the project. I usually keep them separate and just proxy my preview to my server project.

Tools at hand

There are quite a few candidates: gulp, grunt, bower, bigrig, postman, yeoman, browserify, Webpack, Testling and many others. Some advocate that npm scripts are sufficient.
In a nutshell: there are plenty of options.
One interesting finding: Sam Saccone researched what overhead ES6, a.k.a. ES2015, has over current ES5. TypeScript (using browserify) performed quite well. This makes it a clear candidate, especially when looking at AngularJS 2.
Next up: Baby steps

Posted on 19 February 2016 | Comments (0) | categories: JavaScript Software WebDevelopment

Vert.x and Domino


A while ago I shared how to use vert.x with a Notes client, which ultimately let me put an Angular face on my inbox and inspired the CrossWorlds project.
I revisited vert.x, which is now 3.2.1 and no longer beta. On a Domino Linux server (I don't have Windows) and on a Mac Notes client the JVM is 64 bit, which makes the configuration easier (no -w32 switch, no download of an additional JVM). The obligatory HelloWorld verticle ran quite nicely when I started it manually. However it wouldn't run when Domino was started using a startup script.
The simple reason: to be able to access the Domino instance, the vert.x verticle needs to run with the same user as the Domino server. su-ing into the user doesn't do the trick - and of course you can't log in to my server with the id that runs Domino. The solution was to turn to the expert and his outstanding Linux boot script. Using the /etc/sysconfig/rc_domino_config_* file you can simply define the behavior of your Domino startup and shutdown experience. Mine looks like this (I use "domino" as my standard user, not "notes"):

rc_domino_config_domino


LOTUS=/opt/ibm/domino
DOMINO_DATA_PATH=/home/domino/notesdata
DOMINO_SHUTDOWN_TIMEOUT=600
DOMINO_CONFIGURED="yes"
BROADCAST_SHUTDOWN_MESSAGE="yes"
DOMINO_REMOVE_TEMPFILES="yes"
DOMINO_POST_STARTUP_SCRIPT=/home/domino/scripts/launch_vertx
DOMINO_PRE_SHUTDOWN_SCRIPT=/home/domino/scripts/stop_vertx

I installed vert.x via npm using the full stack. With node.js installed, all you need is sudo npm install vertx3-full. Of course there are more conservative ways to install vert.x; that may be an exercise left to the reader. I didn't use any of the environment variables exposed by the standard boot script, to keep it independent. The script itself is just a few lines:

launch_vertx


#!/bin/sh
# Starts the vert.x task that talks to Domino
DOMINO_HOME=/opt/ibm/domino/notes/latest/linux
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export VERTX_HOME=/usr/lib/node_modules/vertx3-full/vertx
export DYLD_LIBRARY_PATH=$DOMINO_HOME
export LD_LIBRARY_PATH=$DOMINO_HOME
export CLASSPATH=.:$DOMINO_HOME/jvm/lib/ext/Notes.jar:$CLASSPATH
export NOTES_ENV=SERVER
vertx start com.ibm.issc.verseu.VerseLauncher -cp /home/domino/scripts/verseu.jar --vertx-id domino

The shutdown script is short and sweet. Since I used a vertx-id, I can use it to shut down the verticle without knowing or caring about its start class name:

stop_vertx


#!/bin/sh
# Stops the vert.x task that talks to Domino
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export VERTX_HOME=/usr/lib/node_modules/vertx3-full/vertx
vertx stop domino

Next step: write some actual code beyond "Hello World".
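As a teaser of what such a verticle needs to take care of, here is a minimal sketch (class name invented; it only proves a Domino session can be obtained - real code would run the blocking Domino calls on a worker thread):

DominoPingVerticle (sketch)


import io.vertx.core.AbstractVerticle;
import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;

public class DominoPingVerticle extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        // Domino's Java API expects a Notes-initialized thread
        NotesThread.sinitThread();
        try {
            // works because the process runs as the Domino server's user
            Session session = NotesFactory.createSession();
            System.out.println("Connected to Domino as " + session.getUserName());
            session.recycle();
        } finally {
            NotesThread.stermThread();
        }
    }
}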
As usual YMMV

Posted on 18 February 2016 | Comments (1) | categories: IBM Notes vert.x

Developer or Coder? - Part 1


Based on a recent article I was asked: "So how would you train a developer, to be a real developer, not just a coder?". Interesting question. Regardless of language or platform (maybe short of COBOL, where you visit retirement homes a lot), each training path has large commonalities.
Below I outline a training path for a web developer. I'm quite opinionated about the tools and frameworks to use, but wide open about the tools to know. The list doesn't represent a recommended sequence; that would be the subject of an entirely different discussion:

Read more

Posted on 13 February 2016 | Comments (0) | categories: Software

Dissecting a mail UI


At Connect 2016 Jeff announced that there will be an IBM Verse client for Domino on premises. Domino customers are used to a high amount of flexibility, so the temptation will arise to customize the Verse experience.
However this ability is nothing that has been announced in any IBM roadmap, so any considerations are purely theoretical. What is the interface of Verse made of? Without reverse engineering, just by looking at it, one could come to the following conclusion:
A possible structure of the Verse UI
This looks quite manageable and lean. Now if all these components are Widgets (if built in Dojo) or Directives (if built in AngularJS) or Components (an upcoming HTML standard), it wouldn't be too hard to envision customization abilities.
Using the excellent Tilt Extension for Firefox one can visualize how the different web clients are structured.

Read more

Posted on 12 February 2016 | Comments (0) | categories: IBM Notes