About Me

I am the "IBM Collaboration & Productivity Advisor" for IBM Asia Pacific. I'm based in Singapore.

21/07/2014

Warriors of Light

Inspired by Paulo Coelho's manual for the Warrior of the Light:

Warriors of Light
We were born from the stars
Descended from the heavens
Armed with compassion
Determined to end the suffering
Subjected to the human condition
Battling ignorance with wisdom
Laying our lives for the liberation from illusion

When you look in the mirror - remember!
You are one of us.

17/07/2014

Adventures with vert.x, 64Bit and the IBM Notes client

The rising star among web servers currently is node.js, not least due to the Cambrian explosion of available packages, a clever package management system and the fact that "any application that can be written in JavaScript will eventually be written in JavaScript" (according to Jeff Atwood).
When talking to IBM Domino or IBM Connections, node.js allows for very elegant solutions using the REST APIs. However, when talking to an IBM Notes client it can't do much, since an external program needs to be Java or COM, the latter on Windows only.
I really like node.js's event-driven programming model, so I looked around and found vert.x, which does for the JVM what node.js does for Google's V8 JavaScript runtime. Wikipedia describes vert.x as "a polyglot event-driven application framework that runs on the Java Virtual Machine". Vert.x is now an Eclipse project.
While node.js is tied to JavaScript, vert.x is polyglot and supports Java, JavaScript, CoffeeScript, Ruby, Python and Groovy with Scala and others under consideration.
The first task I tried to complete: run a verticle that simply displays the current local Notes user name. Of course, exploring new stuff comes with its own set of surprises. At the time of writing the stable version of vert.x is 2.1.1, with version 3.0 under heavy development.
Following the discussion, version 3.0 will introduce quite a few changes in the API, so I decided to be brave and explore using the 3.0 development branch.
The fun part: there is not much documentation for 3.x yet, while version 2.x is well covered in various books and the online documentation.
vert.x 3.x is on the cutting edge and uses lambda expressions, so just using Notes' Java 6 runtime was not an option; the Java 8 JRE had to be installed. Luckily that is rather easy.
The class is rather simple; even after including Notes.jar, getting it to run (more below) not so much:
package com.notessensei.vertx.notes;

import io.vertx.core.Handler;
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpServerOptions;
import io.vertx.core.http.HttpServerRequest;
import io.vertx.core.http.HttpServerResponse;

import java.io.IOException;

import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;

public class Demo {
	public static void main(String[] args) throws IOException {
		new Demo();
		int quit = 0;
		while (quit != 113) { // wait until 'q' (ASCII 113) is read
			System.out.println("Press q<Enter> to stop the verticle");
			quit = System.in.read();
		}
		System.out.println("Veticle terminated");
		System.exit(0);
	}

	private static final int listenport = 8111;

	public Demo() {
		Vertx vertx = Vertx.factory.createVertx();
		HttpServerOptions options = new HttpServerOptions();
		options.setPort(listenport);
		vertx.createHttpServer(options)
				.requestHandler(new Handler<HttpServerRequest>() {
					public void handle(HttpServerRequest req) {
						HttpServerResponse resp = req.response();
						resp.headers().set("Content-Type",
								"text/plain; charset=UTF-8");
						StringBuilder b = new StringBuilder();
						try {
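							// the Notes Java API requires a NotesThread initialized on the calling thread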
							NotesThread.sinitThread();
							Session s = NotesFactory.createSession();
							b.append(s.getUserName());
							NotesThread.stermThread();
						} catch (Exception e) {
							e.printStackTrace();
							b.append(e.getMessage());
						}
						resp.writeStringAndEnd(b.toString());
					}
				}).listen();
	}
}

Starting the verticle looked promising, but once I pointed my browser to http://localhost:8111/ the fun began.


14/07/2014

Cycle where?

I like to cycle, I do it often, and from time to time I have fun with other traffic participants. One of the interesting challenges is multi-lane crossings where the outer lane allows more than one direction (note to my overseas readers: Singapore follows the British system of driving on the left, so cyclists are supposed to ride on the left edge of the road - which makes me edgy in some situations; for right-driving countries, just flip the pictures). Like these:

Empty road
Road rules do require the bike to stay on the left (I didn't find the cycle symbol in SmartDraw, so I used a motorbike, just picture me on a bike there).
Staying right
What then happens quite often is a vehicle pulling up next to the cyclist. After all, the reason we stay on the left is not to obstruct other traffic. But that is roadkill waiting to happen. I was once knocked off a motorbike (and got my shoulder dislocated) because a car felt it was OK to turn right while I was to the right of it. So I'm sensitive to this problem:
Roadkill waiting
As a result, when there is more than one lane in the direction I need to go and there is ambiguity about where traffic in the same lane might be heading, I make sure it can't happen by occupying the same space as "the big boys". I do, however, pick my trajectory so that I end up at the left edge once I have cleared the crossing:
Learning new Hokkien
Unsurprisingly some of the motorists aren't happy; after all they lose 2-3 seconds on their journey. So I wonder: is there a better way? Is that behaviour compliant with traffic rules? And what do all those rude-sounding Hokkien terms I hear actually mean?

12/07/2014

From XML to JSON and back

In the beginning there was CSV, and the world of application-neutral, (almost) human-readable data formats was good. Then unrest grew, and with it the demand for more structure and contextual information. This gave birth to SGML (1986), adopted only by a few initiates.
Only more than a decade later (1998) did SGML's offspring XML take centre stage. With broad support for schemas, transformation and tooling, the de facto standard for application-neutral, (almost) human-readable data formats was established - and the world was good.
But bracket phobia and the heavy processing toll, especially on browsers, led to the rise of JSON (starting ca. 2002), which rapidly became the standard for browser-server communication. It is native to JavaScript, very light compared to XML and fast to process. However, it lacks (at the time of writing) support for an approved schema and transformation language (like XSLT).
This leads to the common scenario that you need both: XML for server-to-server communication and JSON for in-process and browser-to-server communication. While they look very similar, XML and JSON are different enough to make the transition difficult. XML knows elements and attributes, while JSON only knows key/value pairs. A JSON snippet like this:
{ "name" : "Peter Pan",
  "passport" : "none",
  "girlfriend" : "Tinkerbell",
  "followers" : [{"name" : "Frank"},{"name" : "Paul"}]
}

can be expressed in XML in various ways:
<?xml version="1.0" encoding="UTF-8"?>
<person name="Peter Pan" passport="none">
    <girlfriend name="Tinkerbell" />
    <followers>
        <person name="Frank" />
        <person name="Paul" />
    </followers>
</person>

or
<?xml version="1.0" encoding="UTF-8"?>
<person>
    <name>Peter Pan</name>
    <passport>none</passport>
    <girlfriend>
        <person>
            <name>Tinkerbell</name>
        </person>
    </girlfriend>
    <followers>
        <person>
            <name>Frank</name>
        </person>
        <person>
            <name>Paul</name>
        </person>
    </followers>
</person>

or
<?xml version="1.0" encoding="UTF-8"?>
<person name="Peter Pan" passport="none">
    <person name="Tinkerbell" role="girfriend" />
    <person name="Frank" role="follower" />
    <person name="Paul" role="follower" />
</person>

(and many others). The JSON object doesn't need a "root name"; the XML does. The other way around is easier: each attribute simply becomes a key/value pair. Some XML purists see attributes as evil; I think they have their place, making relations clearer (is-a vs. has-a) and XML less verbose. So transforming back and forth between XML and JSON needs a "neutral" format. In my XML session at Entwicklercamp 2014 I demoed how to use a Java class as this neutral format. With the help of the right libraries, that's flexible and efficient.
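A minimal sketch of that idea (not the session code; it assumes JAXB, which ships with the JDK, for the XML side and Gson for the JSON side, and the field names are illustrative) could look like this:

import java.io.StringReader;
import java.io.StringWriter;
import java.util.ArrayList;
import java.util.List;

import javax.xml.bind.JAXB;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlElementWrapper;
import javax.xml.bind.annotation.XmlRootElement;

import com.google.gson.Gson;

// One Java class acts as the "neutral" in-memory format;
// the annotations decide what becomes an XML attribute vs. an element
@XmlRootElement(name = "person")
public class Person {

    @XmlAttribute
    public String name;

    @XmlAttribute
    public String passport;

    @XmlElement
    public String girlfriend;

    @XmlElementWrapper(name = "followers")
    @XmlElement(name = "person")
    public List<Person> followers = new ArrayList<>();

    public String toXml() {
        StringWriter out = new StringWriter();
        JAXB.marshal(this, out);            // object -> XML
        return out.toString();
    }

    public String toJson() {
        return new Gson().toJson(this);     // object -> JSON
    }

    public static Person fromXml(String xml) {
        return JAXB.unmarshal(new StringReader(xml), Person.class);
    }

    public static Person fromJson(String json) {
        return new Gson().fromJson(json, Person.class);
    }
}

Person.fromJson(json).toXml() then covers one direction and Person.fromXml(xml).toJson() the other; swapping Gson for Jackson, or JAXB for another binder, changes nothing about the principle.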

09/07/2014

The folly of root cause analysis

IT support's dealings with management are a funny business. Whenever something goes wrong, support teams engage in "defensive blaming" and the elusive quest for a root cause.
I've seen this quest (and the blaming of god and country along the way if it doesn't appear) take priority over problem resolution and prevention. The twisted thought is: "If I'm not sure about the (single) root cause, I can neither fix it nor prevent it from happening again".

Why is that a folly?
  • It paralyses: If a bleeding person enters an ER, the first order of business is to stop the bleeding. Asking whether all the suppliers of the manufacturer of the blade that caused the wound hold ISO 9000 certifications, or where the ore for the blade was mined, can wait. Root cause analysis is like that - but IT support is an ER room
  • It is not real: There is never a single reason. Example: two guys walk along the street in opposite directions. One is distracted because he has freshly fallen in love; the other is distracted because he has just been dumped. They run into each other and bump their heads. What is the root cause?
    Remove any single factor - the route chosen, falling in love, being dumped, the time of leaving the house, the walking speed, the lack of discipline to put attention ahead of emotion etc. - and the incident would never have happened.
    Apply an IT-style root cause analysis and it will turn out that the second guy's grandfather is the root cause: he was such a fierce person that his son never developed trust, which later led to the breakup of the marriage while guy two was young, traumatising every breakup for him - hence the distraction
  • There is more than one: As the example shows, removing one factor could prevent the incident from happening, but might leave an unstable situation. Once a root cause has been announced, the fiction spreads that "everything else is fine". Example: a database crashes. The root cause gets determined: only 500 users can be handled, so a limit is introduced. However, the real reason is a faulty hardware controller that malfunctions when heating up, which happens under prolonged high I/O (or when the data centre aircon gets maintained).
    The business changes, the application gets used more heavily by the 500 users, the temperature threshold gets crossed and the database crashes again.
  • Cause and effect are not linear: At the latest since Heisenberg it has been clear that the world is interdependent; even the ancient Chinese knew that. So it is not, as Newton described, a simple action/reaction pair (which rests on the assumption of "ceteris paribus", invalid in dynamic systems), but a system of feedback loops subject to ever-changing constraints.
Time is better spent understanding a problem using systems thinking: assess and classify risk factors and problem contributors; map them by impact, probability, ability to improve and effort to action; then go and move the factors until the problem vanishes. Don't stop there - build in a safety margin.
As usual YMMV

04/07/2014

Documents vs eMails

With a public sector customer I had an interesting discussion on non-repudiation, messaging and regulatory control. We were discussing how to ensure awareness of information that has behavioural or legal consequences. While "I didn't know" is hardly a viable defence, relying on the other party to keep themselves updated is just asking for trouble. In a collaborative environment, where a regulator sees itself primarily as the facilitator of orderly conduct and treats policing that conduct only as a secondary mission, this is inefficient.
An efficient way is a closed-loop system of information dissemination and acknowledgement. The closed-loop requirement isn't just for regulators, but for anybody who shares information that should result in specific behaviour. Just look at the communication pattern between a pilot and air traffic control (paraphrased): Tower: "Flight ABC23 turn to runway 270, descend to 12 thousand feet" - Pilot: "Roger that, turning to 270, descending to 12 thousand".

When we look at eMail, the standard mechanism that seems to come closest to this pattern is the return receipt:
Standard eMail flow
Using RFC 3798 - Message Disposition Notification (MDN), commonly referred to as a return receipt - to capture the "state of acknowledgement" is a folly.
  1. The RFC is completely optional and a messaging system can have it switched off (or the user can delete it from the outbox), so it isn't suitable as a guarantee
  2. An MDN only indicates that a message has been opened. It does not indicate whether it was read, whether it was understood, whether the required actions were understood, or whether the content was accepted (the last one might not be relevant in a regulatory situation). It also doesn't - and this is the biggest flaw - indicate what content was opened. If the transmission was incomplete, damaged or intercepted, a return receipt wouldn't care.
So some better mechanism is needed!
Using documents that carry better context, a closed-loop system can be designed. When I say "document" I don't mean a proprietary binary or text format file sitting in a file system, but an entity (most likely in a database) that has content and meta information. The interesting part is the meta information (a sketch in code follows the list):
  • Document hierarchy: where does it fit in. For a car manufacturer recalling cars that could be: model, make, year. For a legislator the act and provisions it belongs in
  • Validity: when does it come into effect (so one can browse by enactment date), when does (or did) it expire
  • History: which document(s) did it supersede, which document(s) superseded it
  • Audience: who needs to acknowledge it and how fast. Level of acknowledgement needed (simple confirmation or some questionnaire)
  • Pointer to discussion, FAQ and comments
  • Tags
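As an illustration only (the field names are mine, not taken from any particular product), such an entity could be sketched like this:

import java.time.LocalDate;
import java.util.List;

// Illustrative sketch of a document entity carrying the meta information listed above
public class RegulatoryDocument {
    public String id;
    public String content;               // the text that has to be acknowledged

    public List<String> hierarchy;       // e.g. model/make/year or act/provision

    public LocalDate effectiveFrom;      // validity window
    public LocalDate expires;

    public List<String> supersedes;      // history
    public List<String> supersededBy;

    public List<String> audience;        // who needs to acknowledge it
    public int daysToAcknowledge;        // and how fast
    public String acknowledgementLevel;  // simple confirmation or questionnaire

    public String discussionUrl;         // pointer to discussion, FAQ and comments
    public List<String> tags;
}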
An email has no structured way to carry such information forward. So a document repository solution is required. On a high level it can look like this:
Document Flow to acknowledge
Messaging is only used to notify the intended audience. Acknowledgement is not automatic, but a conscious act of clicking a link and confirming the content. The confirmation would take a copy of the original text and sign it, so it becomes clear who acknowledged what. An ideal candidate would be XML Signature, but there isn't a model for signing that from a browser; there is an emerging W3C standard for browser-based crypto, with varying levels of adoption. Once you have dedicated records of who has acknowledged a document, you can start chasing the missing participants, reliably and automatically, and, if you are a regulator, take punitive action when chasing fails. It also opens up the possibility of running statistics on how fast which types of documents get adopted.
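Until browser-based signing is universally available, the server side can still keep a verifiable record of what was acknowledged. A minimal sketch using plain java.security (not XML Signature; key management is omitted and all names are illustrative):

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.util.Base64;

// Sign the exact text the user confirmed, so "who acknowledged what" becomes provable
public class AcknowledgementSigner {

    public static String sign(String documentText, KeyPair userKey) throws Exception {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(userKey.getPrivate());
        sig.update(documentText.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(sig.sign());
    }

    public static boolean verify(String documentText, String signatureB64, KeyPair userKey) throws Exception {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initVerify(userKey.getPublic());
        sig.update(documentText.getBytes(StandardCharsets.UTF_8));
        return sig.verify(Base64.getDecoder().decode(signatureB64));
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA");
        keyGen.initialize(2048);
        KeyPair userKey = keyGen.generateKeyPair();
        String text = "Recall notice 2014-07: models A and B, year 2013";
        String receipt = sign(text, userKey);
        System.out.println("Acknowledged: " + verify(text, receipt, userKey));
    }
}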
The big BUT usually is: "I don't want to deploy two additional servers for document storage and web access." The solution for that is, you might have guessed it, Domino. One server can provide all three roles easily:
Document Flow on a Domino server
As usual YMMV

17/06/2014

The taxi loyalty program isn't working and how to fix it

Singapore is a little like New York: trains and taxis are a mainstay of the daily commute. So the taxi market is highly regulated and fiercely competitive. Unsurprisingly, taxi companies try to bind customers before their loyalty switches to alternative bookings or the disruptors.
So Comfort & CityCab started CabRewards. After all, loyalty cards work well for their inventor.
In a smart move, instead of creating a new piece of plastic, Comfort teamed up with ezLink, Singapore's leading provider of cash cards. Everyone in Singapore has an ezLink card, since they are used for train access and road tolls.
From there it all went downhill.
In usability studies one of the key activities is to watch users and refine based on their feedback. So I loaded my ezLink card to see how taxi drivers would handle it. In an attempt (a futile one) to make things "consistent" - so they told me after I inquired with Comfort - the designer decided to add a "CabRewards Yes/No" prompt to the touchscreen application before the driver can process payment. So something the driver has no benefit from stands between him and his livelihood. To no surprise, 95% of the drivers don't bother to ask "Are you a CabRewards member?", even when I announce "I will pay by ezLink". They just click NO and proceed to payment. If they answered yes, they would have to tap once for the points and another time for the actual payment. They have no benefit, so they skip it (and rightly so - their job is driving, not the administration of loyalty programs).
I asked Comfort and they explained that I could ask the taxi driver to switch back and add the points - but I'm not their program admin either. So how to fix this?
Plan, Reality and Fix
The graphic above shows the intended workflow, the actual workflow and a possible fix. There are several touchpoints where an automated IT system can determine whether I'm a known passenger and a CabRewards member: I call a cab, I use a mobile app to get a cab, or I pay with a registered method of payment (ezLink, NETS, credit card). In all of these cases the taxi driver doesn't need to be bothered with a question. Most likely that would also cover most reward members. The ones paying cash might not have registered for the program anyway - or a simple change of terms (points only with electronic payments) would eliminate the question completely. So far I haven't seen a change. I wonder if the person in charge of the process is trying to cover up the problem?
I would say: it is OK to have an idea, and if problems arise, just fix them. Despite opinions to the contrary, there is no such thing as an "honest mistake". There is trial, error and correction (and starting over).
Simplicity after all isn't simple
Comfort, you can do better!

30/05/2014

Let's ditch IBM Notes and Domino

Finally you have decided it is time to move on; to you, legacy no longer means "tried and tested" but "we need to move on". After all, you never really liked Cher.
Notes data is available via LotusScript, .NET (through the COM bridge), Java, CORBA, C++, XML, REST and MIME, so how hard can it be? Actually not very hard, you just need to:
  1. Find suitable replacement application platform(s)
  2. Rewrite the applications (don't dream: there is no such thing as "migrate an app")
  3. Migrate your users
  4. Migrate your data
... and then live happily ever after. Probably the easiest part is migrating the users, after all directories are more or less the same when you look at the LDAP side of them.
Migrating the users' encryption keys and secured documents could be considered "an application function", so you dodge a potential problem here.
Getting the data out, short of that pesky RichText content (yeah, the one with sections and embedded OLE controls), is easy - never mind reader and author access protection.
The transformation into the target format happens in a black box (the same kind the local magician uses for his tricks) that you buy from a service provider, so that's covered; that leaves platform and applications. (Bring your own <irony /> tags.)
Let's have a look at the moving parts (listing open source components only; you can research commercial ones on your own):
What Domino does for you
  • Directory : OpenLDAP comes with most Linux distributions and is well understood. It doesn't take you down the proprietary extension hell (and you can remove obsolete schema parts) - it requires an RDBMS to run (not a blue one)
  • PKI : PrimeKey
  • Sync Server : For the databases you could get away with the database's own replication, but you need one for mobile: Z-Push
  • Document Server : Pick any - or go with: Dropbox, Box, Copy etc.
  • Database Server : Notes is the mother of NoSQL, so you want to pick one of those: CouchDB, OrientDB - or you go with DB2 pureXML
  • Application Server : XPages is full-fledged JSF, so you might opt for TomEE, but you will need some extra libraries. Or you ditch Java and adopt the cool kid on the block
  • Mail Server : the venerable SendMail does, as the name implies, send mail. Goes nicely with Mozilla Thunderbird
  • Web Server : That's easy. There's NGinX or Apache
  • Enterprise Integration : Might be quite tricky. Domino's DECS and LEI shuffle data from all sorts of sources. OpenESB might do the trick
  • Task scheduling : You can use OS-level cron (just find a nice UI) or application-level Quartz (a minimal sketch follows right after this list)
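A minimal sketch of that last point (assuming Quartz 2.x; the job and schedule are illustrative stand-ins for what used to be a scheduled Domino agent):

import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

// Hypothetical replacement for a nightly scheduled Domino agent
public class NightlyCleanup implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // the former agent logic goes here, e.g. purging stale records
        System.out.println("Running nightly cleanup ...");
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        JobDetail job = JobBuilder.newJob(NightlyCleanup.class)
                .withIdentity("nightlyCleanup").build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("everyNightAtTwo")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 2 * * ?")) // 02:00 every day
                .build();
        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}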
Keep in mind, this replaces one server - Domino. Each of the tools needs to be:
  1. configured
  2. hardened
  3. populated with data
The ease of clustering you had with Domino gets replaced with different methods and capabilities for each of the tools. Today cloud is the new infrastructure, so you might get lucky and have someone else configure all of the above for you.

Once you have got all the moving parts in place, you need to redevelop your apps. Don't be fooled - run the analysis to see the full magnitude of the task ahead.
As usual YMMV

12/05/2014

Value, Features and Workflows

In sales school we are taught to sell value. Initially that approach was designed to defang the threat of endless haggling over price, but it took an extra twist in the software industry. Since software companies rely on users' desire to "buy the next version" to secure revenue from maintenance and upgrade sales, a feature war was the consequence.
As a result, buyers frequently request feature comparison tables, driving the proponents of "value & vision" up the wall. It also creates tension inside a sales organisation when a customer asks for a specific feature and the seller is reprimanded for "not selling value". How can this split in expectations be reconciled? As learned in negotiation basics, we need to step back and look beyond positions at the interests that drive them:
The seller doesn't want to do feature-to-feature comparisons, since they never match and are time-consuming and tedious. On the other hand, showing how trustworthy, visionary and future-proof the product is makes creating confidence much easier.
The buyer is very aware that software doesn't have any inherent value; only its application has. Using software requires invoking its features, so the feature comparison is a proxy for the quality of the workflows it can provide in the buyer's organisation. The challenge here is that any change in feature set will break somebody's workflow.
An example:
Outlook users quite often add themselves as BCC on an outgoing eMail, so the message appears in the inbox again. From there it is dragged into a folder in an archive, so it is kept in the local, not size-restricted PST file where it can be found. IBM Notes doesn't allow dragging from the inbox into a specific folder in an archive. The equivalent workflow: the user doesn't need to add herself to BCC, but simply uses Send & File on the original message. The scheduled archive task later moves the message automatically, without user action required. The search box will find messages regardless of their location in the main mail file or in one of the archives - same result, IMHO less work, but a different flow.
The solution to this is consultative selling, where a seller looks at the workflows (which are mapped to features of existing tools or practices) and proposes improved workflows based on the feature set of his products and services. A nice little challenge arises when the flow isn't clear or the proposed product has no advantage.
A little story from the trenches to highlight this: once upon a time, when files were still mainly paper, a sales guy tried to sell one of my customers a fax server, stating that having the fax on screen, thus eliminating the need to walk to the fax machine, would be really beneficial. He looked quite dumbfounded when the manager asked: "And how do I write on this?" The manager's workflow was to scribble instructions onto incoming faxes, or to document that they had been acted upon. The software couldn't do that.

In conclusion, there is a clear hierarchy in software: to have a goal and destination there needs to be a vision, and that vision needs to be supported by software that has value in implementing it. Value is generated by the software supporting and improving workflows. Workflows use one or more features of the application. Comparing features is aiming one level too low; the workflows are the real value generators. A change in software most likely requires a change in workflow.

Quite a challenge!

17/04/2014

You want to move to Domino? You need a plan!

Cloud services are en vogue, the hot kid on the block and irresistible. So you decided to move there, but you also decided your luggage has to come along. And suddenly you realize that flipping a switch won't do the trick. Now you need to listen to the expert.
The good folks at Amazon have compiled a table that gives you some idea how long it would take to transfer data:
Available Internet Connection    Theoretical Min. Number of Days to Transfer 1TB at 80% Network Utilization
T1 (1.544Mbps)                   82 days
10Mbps                           13 days
T3 (44.736Mbps)                  3 days
100Mbps                          1 to 2 days
1000Mbps                         Less than 1 day

Some stuff for your math (reproduced without asking).
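If you want to do that math for your own pipe, a back-of-the-envelope sketch (illustrative numbers, using the same 80% utilization assumption as the table) could look like this:

// Back-of-the-envelope transfer time, mirroring the 80% utilization assumption above.
// All numbers are illustrative - plug in your own volume and line speed.
public class TransferEstimate {
    public static void main(String[] args) {
        double terabytes = 50;      // data volume to move
        double mbps = 100;          // available line speed in megabit/s
        double utilization = 0.8;   // assume 80% of the pipe is usable

        double bitsToMove = terabytes * 8e12;                   // 1 TB is roughly 8e12 bits
        double seconds = bitsToMove / (mbps * 1e6 * utilization);
        System.out.printf("Roughly %.1f days%n", seconds / 86400.0);
    }
}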
Talking to customers gung-ho to move, I came across data volumes of 10-400 TB. Now go and check your pipe and do the math, as sketched above. A big-bang, just-flip-the-switch migration is out of the picture. You need a plan. Here is a cheat sheet to get you started:
  1. Create a database that contains all the information about your existing users and what they will look like once provisioned on Domino (if you are certified for IBM SmartCloud migrations, IBM has one for you)
  2. Gather intelligence on data size and connection speed. Design your daily batch size accordingly
  3. Send a message to your existing users, where you collect their credentials securely and offer them a time slot for their migration. A good measure is to bundle your daily slots into bigger units of 3-7 days, so you have some wiggle room. Using some intelligent lookup you only present slots that have not been taken up
  4. Send a nice confirmation message with the date and the steps to be taken. Let the user know that on cut-over day they can use the new mail system instantly, but that it might take a while (replace "while" with "up to x hours" based on your measurements and the mail size intelligence you have gathered) before existing messages show up
  5. When the mailbox is due, send another message to let the user kick off the process (or confirm her consent that it kicks off). In that message it is a good idea to point to learning resources like the "what's new" summary or training videos or classes
  6. Once the migration is completed, send another message with some nice-looking stats, thanking the user for their patience
  7. Communicate, Communicate, Communicate!
The checklist covers the user-facing part of your migration. You still have to plan DNS cut-over, routing while moving, HTTPS access, redirection of mail links etc. Of course that list also applies to your pilot group/test run.
As usual: YMMV

Disclaimer

This site is in no way affiliated, endorsed, sanctioned, supported, nor enlightened by Lotus Software nor IBM Corporation. I may be an employee, but the opinions, theories, facts, etc. presented here are my own and are in no way given in any official capacity. In short, these are my words and this is my site, not IBM's - and don't even begin to think otherwise. (Disclaimer shamelessly plugged from Rocky Oliver)
© 2003 - 2014 Stephan H. Wissel - some rights reserved as listed here: Creative Commons License
Unless otherwise labeled by its originating author, the content found on this site is made available under the terms of an Attribution/NonCommercial/ShareAlike Creative Commons License, with the exception that no rights are granted -- since they are not mine to grant -- in any logo, graphic design, trademarks or trade names of any type. Code samples and code downloads on this site are, unless otherwise labeled, made available under an Apache 2.0 license. Other license models are available on written request and written confirmation.