Usability - Productivity - Business - The web - Singapore & Twins

By Date: November 2014

Poking around the iNotes HTTP API (Part 2) - Fun with Rhino

The iNotes HTTP API wasn't designed for consumption outside the iNotes web client. This becomes apparent when looking at Form=l_GetOutline_JSON or Form=l_JSVars, which return JavaScript, not JSON! The difference? JSON contains data only, while JavaScript can contain function definitions too.
To make use of the data included in the JavaScript, there are two possibilities:
  • perform some elaborate string operations
  • actually run the JavaScript
Since Java 6, the JDK contains a JavaScript engine that was originally developed as Mozilla Rhino. The most recent incarnation, in Java 8, is Nashorn (which is German for rhinoceros). When you use an older JDK, think "older JavaScript": nice functions like JSON.stringify or JSON.parse are missing and need to be added from elsewhere.
To turn the outline information into something useful in Java, we need to parse the returned JSON; let's see how that works. The outline is a little more complex than it might seem. The JSON delivers six values (in an array) for each outline entry:
  1. The hierarchical sequence number. Something like 1 or 3.1 or 3.2.1 - depending on the level. The numbers have gaps where entries in the Outline are hidden from the web
  2. The hierarchy level starting with 0. So Asia\South-East\Singapore would be level 2
  3. The Label of the entry. Can be different from the actual name of the element
  4. The full path to the image for that entry. We'll strip that to get the name without a path, so we can use our own icons
  5. The full URL to that entry. It has the View Label (Name) and its UNID or system name in the string. We need the UNID for the unread count, so we'll extract that
  6. Whether something can be dragged there: 1 = yes, it is a real folder; 0 = probably not, it is a view or a label only. "Probably" because, for example, you should be able to drag into the "Delete" view
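The six values above can be turned into a cleaner object with a small parsing function. This is a sketch: the sample array, the URL shape and the regex used to pull out the UNID are assumptions modelled on the description, not captured iNotes output.

```javascript
// Turn one 6-value outline entry array into a named object:
// [sequence, level, label, imagePath, url, draggable]
function parseOutlineEntry(raw) {
  const [sequence, level, label, imagePath, url, draggable] = raw;
  // strip the path so we can substitute our own icons (point 4)
  const icon = imagePath.substring(imagePath.lastIndexOf("/") + 1);
  // a UNID is a 32-character hex string; we need it for the unread count (point 5)
  const unidMatch = url.match(/[0-9A-Fa-f]{32}/);
  return {
    sequence: sequence,
    level: Number(level),
    label: label,
    icon: icon,
    unid: unidMatch ? unidMatch[0] : null,
    isFolder: draggable === 1      // point 6: 1 = real folder
  };
}

// Hypothetical entry for a level-2 folder
const entry = parseOutlineEntry([
  "3.2.1", 2, "Singapore", "/icons/folder.gif",
  "/mail/jdoe.nsf/12345678901234567890123456789012/?OpenDocument", 1
]);
console.log(entry.icon, entry.unid, entry.isFolder);
// folder.gif 12345678901234567890123456789012 true
```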

Read more

Posted by on 27 November 2014 | Comments (0) | categories: IBM Notes

Poking around the iNotes HTTP API (Part 1)

With the rise of web applications, something interesting happened: applications gained an observable API. Delivered over HTTP(S) with a few commands (GET, POST etc.), it became easier to find out "what is going on". Tools like Wireshark, Fiddler, Charles Proxy or TCPMon sit between your HTTP-consuming client and the HTTP-generating server. In browser applications the built-in developer tools (Ctrl-Shift-I) provide the same level of access.
So anything flowing over HTTP is effectively an API. This led to the idea of the API economy (but that's a story for another time). Intercepting HTTP(S) traffic can be easier and more fun than trying to customise a backend you might not have access to. (Of course there is evil interception too.) Case in point: iNotes is quite a stack:
3d view of the iNotes client

So I poked around the iNotes API, this is what I found:

Read more

Posted by on 24 November 2014 | Comments (1) | categories: IBM Notes

Flashcards - German Version: Die Lernkartei

The offspring and SWMBO had an argument about the need for Chinese tuition when attending Singapore's oldest school. You can guess who had what position. Dissecting the positions and looking at interests (you remember your basic negotiation skills, don't you?), it boiled down to the wish for self-directed learning vs. fear of failure.
Having gone through both (I remember my Latin tuition teacher well. I loathed him, and in return he pushed me within 6 months from F to A), I concluded: self-direction needs skills. So one task to complete is learning how to learn. To be very clear: no technique can replace the "I want to learn", but once the want is established, being skilful helps (and reinforces the want). Coursera even has a Learning How to Learn class online.
One tool, well established for learning facts (like vocabulary), is flash cards (there are others). Interestingly, the Germans put some rigor into the general idea of sets of flash cards with the Lernkartei. The Lernkartei uses a spaced repetition approach proposed as early as 1885 and made popular by Sebastian Leitner.
The Learnbox as proposed by Sebastian Leitner
In a nutshell:
At stage 1 the learner adds new learning topics (vocabulary, phrases, facts) in bundles of 6-10, with up to 30 cards a day. These cards are reviewed and learned until they are memorised for the session. They then go into stage 2. When reviewing stage 2 cards, successfully memorised cards go to stage 3 and failed cards back to stage 1. A successful review at stage 3 promotes a card to stage 4, while a failure demotes it to stage 1. Stage 5 cards are considered retired and are archived after one successful review.
The key to the system is the ever-increasing interval between reviews: stage 1 is reviewed daily, stage 2 every 3 days, stage 3 every 5, stage 4 every 7 and stage 5 just occasionally. There are other interval suggestions available too.
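The promotion/demotion rules above fit in a few lines of code. A minimal sketch in JavaScript, with the interval values taken from the post (stage 5 is reviewed only occasionally, so it has no fixed interval here); the card fields are illustrative:

```javascript
// Leitner logic: success promotes a card one stage, failure sends it
// back to stage 1, and a successful stage-5 review retires the card.
const REVIEW_INTERVAL_DAYS = { 1: 1, 2: 3, 3: 5, 4: 7 }; // stage 5: occasionally

function reviewCard(card, answeredCorrectly) {
  if (!answeredCorrectly) {
    return { ...card, stage: 1 };        // failed cards go back to stage 1
  }
  if (card.stage >= 5) {
    return { ...card, retired: true };   // stage 5 success: archive the card
  }
  return { ...card, stage: card.stage + 1 };
}

let card = { front: "das Nashorn", back: "the rhinoceros", stage: 3 };
card = reviewCard(card, true);   // promoted to stage 4
card = reviewCard(card, false);  // demoted back to stage 1
console.log(card.stage); // 1
```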
You might have guessed: that system has been put into software, more than once. One popular application is Anki (which uses the SuperMemo algorithm). Others add a social component.
All applications I glanced at made one basic assumption: there is one question with one answer (which might be reversible). However, reality for vocabulary is a little more complex:
A word has more than one dimension
* only limited suitability as an answer
So what to do? Write a multi-faceted flash card app? A flashy diamond perhaps?

Read more

Posted by on 23 November 2014 | Comments (1) | categories: Learning

Last usage of a mail file - for all users

My admin knowledge is getting a little rusty. When I was asked "How can I identify a dormant mailbox?", I couldn't name a place in the admin client to look. Of course, there is the NotesDatabase.LastModified property, but that would get updated on a design refresh too. So I asked innocently: "dormant, how?" After an amusing 2-hour discussion between the stakeholders (only the popcorn was missing), we concluded that it is either:
  • No document has been created for a while (including incoming)
  • No email has been sent for a while
Calendar entries or todos were not part of the consideration, but that's just a question of which view you look in. For "any" document it is ($All) and for sent documents ($Sent). The LotusScript code is just a few lines. It needs to be run so it can access all mailboxes.

Read more

Posted by on 21 November 2014 | Comments (4) | categories: IBM Notes

Providing user information in JSON

In the MUSE project we encountered the need to retrieve user information in JSON format. Easily done, one would think. The trouble starts when you have multiple directories and you need reasonable speed. Sometimes falling back to @Formulas gives you what you need, fast and easy. @NameLookup knows where to look and you don't need any extra configuration. A simple call to a form will give you all you need: for yourself, or add &User=John Doe for any other user. This will return:

{
  "QueryName": "John Doe",
  "NotesName": "CN=John Doe/OU=ThePitt/O=GIJoe",
  "AllNames": [
    "CN=John Doe/OU=ThePitt/O=GIJoe",
    "John Doe/ThePitt/GIJoe",
    "John Doe"
  ],
  "eMail": "john.doe@thepitt.com",
  "MailDomain": "SACMEA",
  "MailServer": "CN=PittServer42/OU=ThePitt/O=GIJoe",
  "MailFile": "mail/jdoe.NSF",
  "Empnum": "0815",
  "Empcc": "4711"
}

The form makes extensive use of @NameLookup and looks quite simple in DXL.
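Consuming that form from client-side JavaScript is then straightforward. A sketch of building the lookup URL and picking values from the response; the database path and form name here are placeholders (the post does not name them), only the &User parameter and the response shape follow the text above:

```javascript
// Build the URL for the lookup form. Opening a form via ?OpenForm is
// standard Domino URL syntax; base and formName are placeholders.
function buildLookupUrl(base, formName, userName) {
  let url = base + "/" + encodeURIComponent(formName) + "?OpenForm";
  if (userName) {
    url += "&User=" + encodeURIComponent(userName);   // omit for "myself"
  }
  return url;
}

// Pick a human-readable summary out of the JSON shown above
function primaryAddress(userInfo) {
  return userInfo.eMail + " on " + userInfo.MailServer;
}

const url = buildLookupUrl("/names.nsf", "userLookup", "John Doe");
console.log(url); // /names.nsf/userLookup?OpenForm&User=John%20Doe
```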

Read more

Posted by on 20 November 2014 | Comments (1) | categories: IBM - Lotus IBM Notes XPages

Building a shared approval frontend in XPages

The saying goes: "God would not have been able to create the world in 7 days if there had been an installed base to take care of". As much as we wish for divine powers, we need to make do with less and look after an installed base. Case in point: you have a set of approval applications in classic Notes client code, like Travel, Leave, Expenses, Gifts, Training, BoM changes etc. You are challenged to provide a web and/or mobile interface for them.
Considering your way forward, you have to decide between these options:
  1. Dump the applications and rebuild them on a new platform. Hope that it will take long enough, so nobody will ask you to migrate any of the existing data
  2. Enable them one-by-one
  3. Hire Redpill to do an asymmetric modernization
  4. Build a lateral approval screen across your applications and leave the full web enablement for later
While #1 would allow you to silently dump all your technical debt, it will take too long, and a potential data migration monster lurks in the dark. #2 is what non-techies would expect, but again you spend too much time and you will carry forward new technical debt. After all, the applications are more similar than different. Hiring Redpill would be a sensible option, but that requires board approval, or it is against your "we-do-it-inhouse" policy, or you simply want to expand your skill horizon.
This leaves you with #4. To do this successfully, you need to define your scope clearly. In the rest of the article I will work with the following assumptions; if they don't fit your situation, adjust your plan:
  • The goal is to provide the decision making screen only, not a complete workflow engine (it could evolve to that)
  • Request submission happens in the existing application, providing a new interface for that is outside the current scope
  • The flow logic (how many approvers per level etc.) is determined on submission by the original application
  • Someone from a different department indicated interest in using the approval screen from a completely different system (that is the "scope creep" that happens in any project)
When you look at workflow systems while planning your application, you will notice that there are two types of data: the flow state/sequence and the specific request data. The flow state is stuff like: what is the current status, what are the potential decisions (usually: Yes, No, More info please), and who are the past, current and future approvers. The specific request data is stuff like: duration and type of leave, price of items to purchase, customer involved etc.
In your existing applications the two are stored together (and will stay like that), but you need to extract the flow data to your new approvalCentral application. On the back of a napkin you draw this sequence:
Your approval workflow
(Image created with JS Sequence Diagrams and postprocessed with InkScape)
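The flow-state split described above can be made concrete with a small factory for the flow document. All field names here are assumptions for illustration; the point is that only the flow state plus a link back to the originating request travels to the hypothetical approvalCentral application, while the request specifics stay where they are:

```javascript
// Sketch of a flow document: flow state only, request data stays in
// the originating application and is reachable via sourceUnid.
function createFlowDocument(sourceApp, sourceUnid, approvers, summary) {
  return {
    status: "pending",
    decisions: ["Yes", "No", "More info please"], // the usual options
    sourceApplication: sourceApp,   // where the request data lives
    sourceUnid: sourceUnid,         // link back to the request document
    pastApprovers: [],
    currentApprover: approvers[0],
    futureApprovers: approvers.slice(1),
    summary: summary                // just enough text to decide on
  };
}

const flow = createFlowDocument(
  "leave.nsf",                                 // illustrative app name
  "0123456789ABCDEF0123456789ABCDEF",          // fake UNID
  ["Jane Boss", "Joe Director"],
  "3 days of leave in December"
);
console.log(flow.currentApprover); // Jane Boss
```

The flow logic itself (how many approvers, in what order) is still determined on submission by the original application, as stated in the assumptions.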

The green part of the application exists, the first question is: how to design the submission. There are a number of options you could choose from:
  • Have the Approval Central application poll participating applications on a schedule. That option is the least efficient, but might be your only choice if you can't touch the existing apps at all (this happens more often than you think when admins are paranoid and policies are set very cautiously)
  • Have each application create a flow document directly in the workflow central application. You could use an inherited LotusScript library for that. While that might have been the preferred choice a decade ago, it has two problems: first, you limit your participating applications to Notes only (and there are other applications with decision requirements), and second, you would expose the inner workings of your application prematurely
  • Use a web service interface to implement contract-first development. While REST is the current IT fashion, you pick SOAP. The simple reason: LotusScript can talk SOAP quite well via the web service consumer design element, while REST would require your notification code to be written in Java. So a SOAP service it is

Read more

Posted by on 17 November 2014 | Comments (1) | categories: XPages

{{Mustache}} Helper for Domino and XPages

Previously we had a look at how to use Mustache with the CKEditor to provide an editing environment for templates. I glossed over where to store these templates and how to use them. Let me continue there.
I'll store the templates in Notes documents and use an application-scoped managed bean to transform a document (and later other things) using these templates.
I have 2 use cases in mind: one is to allow the configuration of HTML notification messages in applications and the other to configure the display of workflow details (more on Workflow in a later post). The approach has two parts:
  1. loading (or reloading) the existing templates
  2. using the templates
The first one should only need to run once in the application lifecycle. Since managed beans can't have constructors with parameters, I created a separate load function, so inside the bean there is nothing that depends on the XPages runtime environment. You can therefore use that bean in an agent, an application or a plug-in.
I also opted to load all templates into memory at once, but only compile them when actually needed. If you plan to have lots of them, you want to look for a different solution.
The class that does the work is the MustacheHelper.java. To make it easy to use, I wrap it into a managed bean and a small SSJS Helper function:

<?xml version="1.0" encoding="UTF-8"?>
<faces-config>
  <managed-bean>
    <managed-bean-name>Mustache</managed-bean-name>
    <!-- Adjust the package name to match your project -->
    <managed-bean-class>com.example.MustacheHelper</managed-bean-class>
    <!-- In a production system that should be application -->
    <managed-bean-scope>session</managed-bean-scope>
  </managed-bean>
  <!--AUTOGEN-START-BUILDER: Automatically generated by IBM Domino Designer. Do not modify.-->
  <!--AUTOGEN-END-BUILDER: End of automatically generated section-->
</faces-config>

function applyTemplate(doc, templateName) {
 try {
  // Load the templates once per application lifecycle
  if (!Mustache.isInitialized()) {
   var templateView:NotesView = database.getView("templates");
   Mustache.initFromView(session, templateView);
  }
  return Mustache.renderDocumentToString(templateName, doc);
 } catch (e) {
  return e.message;
 }
}
With this done, transforming a document becomes a one-liner in SSJS: return applyTemplate(document1.getDocument(),"Color");. To demo how it works, I created a sample database.

Read more

Posted by on 16 November 2014 | Comments (0) | categories: XPages

Karma and Wealth

Karma has gotten some attention lately, while I have been musing on it for some time. The common short explanation is "What goes around comes around", or in other words: "Every action (or inaction) you take has consequences you will ultimately be confronted with". We are reminded by various spiritual traditions that nothing good can come from a bad deed and that good deeds will yield (in mysterious ways, somewhen) good results.
This is where the trouble starts. Our perception of what is "good" and what is "bad" differs greatly from the mystics'. Pop literature links wealth, affluence and influence to "good Karma". Someone poor or suffering easily gets dismissed as "(s)he has bad Karma".
I think that misses the point completely.
Taking a step back: Buddhists (and others in similar ways) believe we are trapped in a cycle of Samsara that leads us through many lifetimes. Ending Samsara and suffering is the goal of enlightenment (I'm simplifying here). The main force holding us in Samsara is Maya: the illusion of existence. Now adding nice things to our life binds us deeper to Maya, making liberation more remote, so I doubt that this is a good thing per se. And happiness somehow works differently anyway.
If luxury were the answer, the road to enlightenment would lead through the god realm, which Buddhists believe is a detour. Some scholars argue that our western civilisation is god-realm-like (I'll add a link when I rediscover it), and there is leisure for displaying compassion and joyful effort.
Looking at it from a different angle might explain it better: the currency of Karma is compassion. Compassion for all living beings. That includes yourself, so there is no point moving under the bridge since "it isn't real". Looking after yourself is a requirement, so you can sustainably look after others. IMHO good Karma is what makes it easier to be compassionate. Your good deeds will make it easier for you to do good deeds in the future. Any ulterior motive might disqualify your actions as good deeds. So if you think improvements in your financial situation are the result of good Karma, you mix up cause and effect (which only exist interdependently anyway). An improvement in your personal situation isn't the reward, but the enablement for greater compassion - and it makes you happy for a while; happy people are contagious.
This also reconciles Karma and free will: contrary to the common perception "It was Karma that this happened", you end up with "Life offered a situation, I made a decision, now I'm presented with the consequences". Of course, all consequences turn into offerings of new situations. I think it is a folly to conclude that hardship is automatically an indication of bad Karma (though it might be).
The best analogy: there is a weight of 100kg you are supposed to lift (quite a hardship for most of us)! So what's the conclusion? Bad Karma? Nope: if you have trained hard, that might be the final test and the reward for mastering your training, and you will lift it. Same with life: a difficult situation could be anything - the result of a bad deed, or an invitation to show your skilfulness and maintain compassion no matter what.
In the words of a beloved teacher: "Life dealt you cards; you make your Karma by how you play them"

Posted by on 16 November 2014 | Comments (0) | categories: After hours

A peek in my JavaScript Toolbox

Every craftsman has a toolbox, except developers: we have many. For every type of challenge we use a different box. Here's a peek into my web front-end programming collection. It works with any of your favorite backends. In no specific order:
  • AngularJS

    one of the popular data-binding frameworks, created by Google engineers. With a focus on extensibility, testability and clear separation of concerns, it allows you to build clean MVC-style applications
  • Data Driven Documents

    short: D3JS. If anything needs to be visualized, d3js can deliver. Go and check out the samples. There is a set of abstractions on top of it that make things simpler. I consider d3js the gold standard of what is possible in JS visualizations
  • Mustache

    Logicless templating for any language. I use it where I can't/won't use AngularJS' templating
  • PivotTable.js

    We love to slice and dice our data. Instead of downloading data and processing it in a spreadsheet, I use this JavaScript library.
  • Angular-Gantt

    Timeline-bound data loves Gantt charts. This component makes it easy to visualize them
  • TemaSYS

    A wrapper around WebRTC. It lets you add voice and video to your application in an instant, no heavy backend required
  • prediction.io

    PredictionIO is an open source machine learning server for software developers to create predictive features, such as personalization, recommendation and content discovery. Competes with IBM's Watson
  • Workflow

    I'm not a big fan of graphical workflow editors. You end up spending lots of time drawing stuff. I'd rather let the system do the drawings
    • Sequence Diagrams
      Visualize the flow between actors in a system. Great to show who did what to whom in Game of Thrones
    • JS Flowchart
      Visualize a flow with conditional branches. I contributed the ability to color code the diagram, so you can show: current path, branches not taken, current step and undecided route. (there are others)
  • Reporting

    Reports should be deeply integrated into the UI, not standalone.
  • Card UI

    While not exactly JavaScript, designing with cards is fashionable. I like how Google's Material Design explains cards
    • Bootcards
      Twitter Bootstrap meets cardUI. Lots of quality details to generate a close to native experience
    • Swing
      Swipe left/right for Yes/No answers
  • Tools

    I haven't settled for an editor yet. Contestants are Geany, Eclipse (with plug-ins), Webstorm, Sublime or others. Other tools are clearer:
    • JSHint
      Check your JavaScript for good style
    • Bower
      JavaScript (and other front-end matters) dependency management. It is like mvn for front-ends
    • Grunt
      A JavaScript task runner. It runs a preview server, unit tests, and packaging and deployment tasks. Watching its competitor Gulp closely
    • Yeoman
      Scaffolding tool to combine Grunt, Bower and typical libraries
    • Cloudant
      NoSQL JSON friendly database. Works with Hood.ie for offline first and a JavaScript browser database
    • GenyMotion
      Fast Android emulator
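The Mustache entry in the list above deserves a closer look, because "logicless templating" boils down to a very small core idea: replace {{name}} tokens with values from a data object. A minimal sketch of that idea (the real Mustache library does far more: sections, HTML escaping, partials, inverted sections):

```javascript
// Minimal logicless-template renderer: every {{key}} token in the
// template is replaced by data[key], missing keys render as empty.
function render(template, data) {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, function (match, key) {
    const value = data[key];
    return value === undefined ? "" : String(value);
  });
}

const out = render("Hello {{name}}, you have {{count}} tasks.",
                   { name: "Alice", count: 3 });
console.log(out); // Hello Alice, you have 3 tasks.
```

The absence of conditionals and loops in the template language is the point: the logic lives in the code that prepares the data object, which keeps templates safe to hand to non-developers.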

Posted by on 14 November 2014 | Comments (2) | categories: Software

Custom experience for IBM Connections Cloud - Project Muse

The old saying goes: "You can't have your cake and eat it too". When organisations move computing from servers they control into a SaaS (the artist formerly known as ASP) environment, they swap customisability for configurability and standardisation. The idea is that a vendor-controlled cloud environment benefits from both the economy of scale and the frequent updates the acclaimed DevOps model brings.
But you can have both. One of IBM's best-kept secrets is the assets that IBM Software Services for Collaboration (ISSC) has been and is building. One of my all-time favourites has been Atlas for Connections, which predated Watson Analytics by half a decade.
Now I have a new darling: IBM ISSC Project Muse. This is the internal code name, no official name has been set or any decision been made to make this an official offering. However you can ask ISSC nicely, and they will use Muse technology in your project (that you awarded to them/us).
What does it do?
IBM Connections, both on-premises and in the cloud, is built around a set of APIs. These HTTPS APIs give and take XML and/or JSON. On top of them sits the regular UI. In the cloud, that UI is only customizable or extendable to a small extent. The Muse engine therefore talks directly to the API and renders an alternate user experience. This alternate experience can include custom application data or (what I liked a lot) a blend of your activity stream with your messaging. This is how it works:
Muse in a Public Cloud setting
Of course, the devil sits in the details: script libraries, UI components, authentication and the application engine need to be tuned to work together, with proper caching and in a manner that scales in both device and user base.
Your average IBM seller will not know about the offering; you need to find the right Distinguished Engineer and his wingman.

Posted by on 12 November 2014 | Comments (0) | categories: IBM IBM - Lotus

Enterprise architecture - from Silos to Layers

In a recent discussion with a client, the approaches and merits of Enterprise Architecture took center stage. IBM has for a very long time proposed SOA (service-oriented architecture), which today mostly gets implemented in a cloud stack. While it looks easy from high enough above, the devil is in the details - mostly the details of how to get there. The client had an application landscape that was segmented along full-stack development platforms with little or no interaction between the segments or silos:
Silo based Enterprise Architecture
The challenges they are facing are:
  • No consistency in user experience
  • Difficult to negotiate interfaces point-to-point
  • No development synergies
  • Growing backlog of applications
In the discussion I suggested first having a look at adopting all the principles needed to successfully pass the Spolsky test. Secondly, to transform their internal infrastructure to be cloud based, so when the need arises workloads could easily be shifted to public cloud providers. The biggest change would be to flip the silos and provide a series of layers that are used across the technologies. A very important aspect of the layer architecture is the use of design-by-contract principles. The inner workings of a layer are, as much as sensible, hidden behind an API contract. So when you e.g. retrieve customer information, it could come from SAP, Notes, an RDBMS or NoSQL - you wouldn't know and you wouldn't care.
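The design-by-contract idea in the paragraph above can be sketched in a few lines: the caller sees one customer-lookup contract, and which backend (SAP, Notes, RDBMS, NoSQL) answers is hidden behind it. All names here are illustrative, not from any real system:

```javascript
// A contract-first facade: callers depend on getCustomer(id), never on
// the backend behind it. Any backend exposing fetchById can be swapped in.
function createCustomerService(backend) {
  return {
    // The contract: getCustomer(id) -> { id, name } or null
    getCustomer: function (id) {
      const record = backend.fetchById(id);
      return record ? { id: record.id, name: record.name } : null;
    }
  };
}

// A stand-in backend; an SAP or Notes adapter would expose the same shape
const inMemoryBackend = {
  rows: { 42: { id: 42, name: "Acme Pte Ltd" } },
  fetchById: function (id) { return this.rows[id] || null; }
};

const customers = createCustomerService(inMemoryBackend);
console.log(customers.getCustomer(42).name); // Acme Pte Ltd
```

Swapping the storage layer then means writing one new adapter, not touching every caller - which is exactly what makes the layered architecture cheaper to evolve than the silos.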

Read more

Posted by on 05 November 2014 | Comments (1) | categories: Software