About Me

I am the "IBM Collaboration & Productivity Advisor" for IBM Asia Pacific. I'm based in Singapore.
02/09/2014

Rethinking the MimeDocument data source

Tim (we miss you) and Jesse had the idea to store beans in Mime documents, which became an OpenNTF project.
I love that idea and was musing how to make it more "Domino like". In its binary format, a serialized bean can't be used for showing view data, nor can one be sure that it can be transported or deserialized by anything other than the same class version that created it (this is why Serializable wants a serialVersionUID).
With a little extra work, that actually becomes quite easy: enter JAXB. Serializing a bean to XML (I hear howling from the JSON camp) allows for a number of interesting options:
  • The MIME data generated in the document becomes human readable
  • If the class changes a little, de-serialization will still work; if it changes a lot, it can still be de-serialized into an XML Document
  • Values can be extracted using XPath and written into the MIME header and/or regular Notes items - making them accessible for use in views
  • Since XML is text, full text search will capture the content
  • Using a stylesheet, a fully human-readable version can be stored with the original MIME (good for eMail)
I haven't sorted out the details, but let's look at some of the building blocks. Whoever has seen me demo XPages will recognize the Fruit class. The difference here: I added the XML annotations needed for a successful serialization:
package test;

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "Fruit", namespace = "http://www.notessensei.com/fruits")
@XmlAccessorType(XmlAccessType.NONE)
public class Fruit {
    @XmlAttribute(name = "name")
    private String  name;
    @XmlElement(name = "color")
    private String  color;
    @XmlElement(name = "taste")
    private String  taste;
    @XmlAttribute(name = "smell")
    private String  smell;
    
    public String getSmell() {
        return this.smell;
    }

    public void setSmell(String smell) {
        this.smell = smell;
    }

    public Fruit() {
        // Default constructor
    }

    public Fruit(final String name, final String color, final String taste, final String smell) {
        this.name = name;
        this.color = color;
        this.taste = taste;
        this.smell = smell;
    }

    public final String getColor() {
        return this.color;
    }

    public final String getName() {
        return this.name;
    }

    public final String getTaste() {
        return this.taste;
    }
    
    public final void setColor(String color) {
        this.color = color;
    }

    public final void setName(String name) {
        this.name = name;
    }

    public final void setTaste(String taste) {
        this.taste = taste;
    }
}

The function (probably in a manager class or instance) to turn that into a Document is quite short. JAXB could serialize directly into a Stream or String, but we need the intermediate XML Document step to be able to apply the XPath expressions.
public org.w3c.dom.Document getDocument(Fruit fruit) throws ParserConfigurationException, JAXBException {
    DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
    DocumentBuilder db = dbf.newDocumentBuilder();
    org.w3c.dom.Document doc = db.newDocument();
    JAXBContext context = JAXBContext.newInstance(fruit.getClass());
    Marshaller m = context.createMarshaller();
    m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
    m.marshal(fruit, doc);
    return doc;
}
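Once the bean is an XML Document, pulling individual values out for MIME headers or Notes items is one XPath expression per field. A minimal self-contained sketch; it parses a literal fruit XML string as a stand-in for getDocument's output so it runs without JAXB on the classpath, and the local-name() trick sidesteps namespace-context plumbing:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;

public class XPathExtractDemo {

    // Stand-in for the XML that getDocument(fruit) would produce
    static final String XML = "<Fruit xmlns=\"http://www.notessensei.com/fruits\""
            + " name=\"Apple\" smell=\"sweet\">"
            + "<color>red</color><taste>crisp</taste></Fruit>";

    public static String demo() throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(XML.getBytes(StandardCharsets.UTF_8)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        // local-name() avoids having to register a NamespaceContext
        String name = xpath.evaluate("/*[local-name()='Fruit']/@name", doc);
        String color = xpath.evaluate("//*[local-name()='color']/text()", doc);
        return name + " is " + color;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints: Apple is red
    }
}
```

The returned strings can then go straight into a MIME header or a regular Notes item.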

The little catch here: there is a Document class in the lotus.domino package as well as in org.w3c.dom, so take care not to confuse the two. Saving it into a Notes document, including the style sheet (to make it pretty), is short too. The function takes the stylesheet as a w3c Document and the list of fields to extract as a key (the field) / value (the XPath) map. Something like this:

Read More

02/09/2014

Bikepad SmartPhone mount review

This is my field impression of the Bikepad SmartPhone mount having used it for a few weeks on my Montague Paratrooper pro

TL;DR: The Bikepad is a highly functional accessory that keeps your phone on your bike, fully functional. It has quality craftsmanship and a sleek design. If I had an editor-refuses-to-give-it-back award to give (I actually paid for it), I would award it.

I cycle for longer durations and over some rough spots, so I like to keep a phone within reach. Not least to keep SWMBO updated. When I learned about Bikepad and their claim that "Basically the Bikepad creates a vacuum between its surface and the device. The vacuum is strong enough to hold the device", I had to give it a try. Here's the verdict:
  • The Good
    Works as designed. The surface indeed creates a gecko-feet-like suction that firmly holds the phone in place; you actually need some force to pull it out. The aluminium base is sleek and solidly built. I like the minimal design: everything is there for it to function and nothing more, form follows function at its best. The half-pipe shaped aluminium connector can easily be fixed on a stem (I didn't try the handlebar mount) and secured with a Velcro strap. It comes with a foam pipe segment to adjust to different pipe diameters.
    I tried to shake the phone off in off-road and city conditions, including falling off the bike and hitting the ground (that part unplanned), but it stayed nicely in place.
  • The Bad
    Works as designed. For the vacuum to build, close contact between phone and surface is needed. For your iShiny® (that was my main test unit) or a Nexus 4 (the other phone) that isn't a problem. Anything that doesn't have a flat back, or has buttons on the back, you need to test. The various phone cases also need to be checked carefully. I tested an Otterbox case, which has a ridge running around the back. The ridge prevents the case body from making contact with the pad; only the ridge touches, which doesn't provide enough suction.
  • and The Ugly
    When it rains it pours. If the pad gets wet, it gets slippery and loses its suction. When the phone already sticks to it and is rained on, it will gradually lose its grip. Luckily the pad comes with a little shower-cap rain cover. With the cover it looks a little funny, not as cool anymore - but it does the job. Anyway, you wouldn't want to expose your iShiny® to the bare elements. Another little challenge: my stem is quite thick, so the provided foam pipe is too thick to squeeze between stem and half pipe. I was left with a little cutting exercise or alternative means. I opted for Sugru, which holds everything in place.
To see more, check out some of their videos. In summary: a keeper.

14/08/2014

Long Term Storage and Retention

Not just since Einstein has time been relative. For a human brain anything above 3 seconds is long term. In IT this is a little more complex.

Once a work artefact is completed, it runs through a legal vetting and goes either to medium- or long-term storage. I'll explain the difference in a second. This logical flow manifests itself in multiple ways in concrete implementations: journaling (both eMail and databases), archival, backups, write-once copies. Quite often all artifacts go to medium-term storage anyway and only make it into long-term storage when the legal criteria are met. Criteria can be:
  • Corporate & Trade law (e.g. the typical period in Singapore is 5 years)
  • International law
  • Criminal law
  • Contractual obligations (e.g. in the airline industry all plane-related artefacts need to be kept at least until the last plane of that family has retired; the Boeing 747 family has been in service for more than 40 years)
For a successful retention strategy three challenges need to be overcome:
  1. Data Extraction

    When your production system doesn't provide retention capabilities, how do you get the data out? In Domino that's not an issue, since it provides robust storage for 25 years (you still need to BACK UP data). However, if you want a cross-application solution, have a look at IBM's Content Collector family of products (of course other vendors have solutions too, but I'm not on their payroll).
  2. Findability

    Now that an artifact is in the archive, how do you find it? Both navigation and search need to be provided. Here a clever use of meta data (who, what, when, where) makes the difference between a useful system and a bit graveyard. Meta data isn't an abstract concept but the ISO 16684-1:2012 standard. And - YES - it uses the Dublin Core, not to be confused with Dublin's ale.
  3. Consumability / Resilience

    Once you have found an artifact, can you open and inspect it? This very much boils down to: do you have software that can read and render this file format?
The last item (and to some extent the second) makes the difference between mid-term and long-term storage. In a mid-term storage system you presume that, short of potential version upgrades, your software landscape doesn't change and the original software is still actively available when a need for retrieval arises. Furthermore you expect your retention system to stay the same.
On the other hand, in a long-term storage scenario you can't rely on specific software for either search or artifact rendering, so you need to plan a little more carefully. Most binary formats fall short of that challenge. Furthermore, your artefacts must be able to "carry" their meta data, so a search application can rebuild an access index when needed. That is one of the reasons why airline maintenance manuals are stored in DITA rather than an office format (note: docx is not compliant to ISO/IEC 29500 Strict).
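To make the "carry their meta data" idea concrete, here is a hypothetical, minimal fragment using the Dublin Core element set mentioned below; names and values are made up, but a fragment like this, embedded in the artifact, is enough for a search application to rebuild its index:

```xml
<!-- Hypothetical example: the minimal who/what/when an archived
     artifact could carry inline (Dublin Core element set) -->
<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:creator>Jane Doe</dc:creator>
  <dc:title>Maintenance manual, landing gear</dc:title>
  <dc:date>2014-08-14</dc:date>
  <dc:format>application/pdf</dc:format>
</metadata>
```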
The problem domain is known as Digital Preservation and has a reference implementation and congressional attention.
In a nutshell: keep your data as XML, PDF/A or TIFF. MIME could work too; it is good with meta data after all, and it is native to eMail. The MIME trap to avoid: MIME parts that are proprietary binary (e.g. your attached office document). So proceed with caution.
Neither PST, OST nor NSF is suitable for long-term storage (you can still use the NSF as the search database).
To be fully sure, a long-term store would retain the original format (if required) as well as a vendor-independent format.

Read More

14/08/2014

Time stamped encrypted archives

Developers use Version Control, business users Document management and consultants ZIP files.
From time to time I feel the need to safeguard a snapshot in time outside the machine I'm working with. Since "storage out of my control" isn't trustworthy, I encrypt data. This is the script I use:
#!/bin/bash
############################################################################
# Saves the given directory (%1) in an SSL encrypted zip file (%2) within
# the personalFiles folder. The name of the ZIP file needs to be without zip
# extension but might already contain the date. Destination might be %3
############################################################################
# Adjust these three values to your needs. Don't use ~ otherwise it doesn't
# work when you use sudo
tmplocation=/home/user/temp/
keyfile=/home/user/.ssh/pubkey.pem
privatekey=/home/user/.ssh/privkey.pem
if [ -z "$3" ]
  then
    securelocation=./
else
    securelocation=$3
fi
fullzip=$tmplocation$2.zip
fulldestination=$securelocation$2.szip
securesource=$1

# If the final file exists we decrypt it first so it can be updated
if [ -f "${fulldestination}" ]
then
    echo "Decrypting ${fulldestination}..."
    openssl smime -decrypt -in "${fulldestination}" -binary -inform DER -inkey $privatekey -out "${fullzip}"
    # Update the existing archive from the directory
    echo "Updating from ${securesource}"
    zip -ru $fullzip $securesource
else
    echo "Creating from ${securesource}"
    zip -r $fullzip $securesource
fi

#Encrypt it
echo Encrypting $fulldestination
openssl smime -encrypt -aes256 -in $fullzip -binary -outform DER -out $fulldestination $keyfile
#Remove the temp file
shred -u $fullzip
notify-send -t 1000 -u low -i gtk-dialog-info "Secure backup completed: ${fulldestination}"

To make that work, you need encryption keys, which you can create yourself. A typical script to call the one above would look like this:
#!/bin/bash
############################################################################
# Save the Network connections from /etc/NetworkManager/system-connections
# in an SSL encrypted zip file
############################################################################
securesource=/etc/NetworkManager/system-connections
#Save one version per day
now=$(date +"%Y%m%d")
#Save one version per month
#now=$(date +"%Y%m")
zipfile=networkconnections_$now
securelocation=/home/user/allmyzips/
zipAndEncrypt $securesource $zipfile $securelocation

When you remove the decryption part (one-time creation only, no update), you only need access to the public key, which you could share so someone else can provide you with a ZIP file encrypted just for you.
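The script expects a key pair matching the paths at its top (pubkey.pem / privkey.pem). Since openssl smime encrypts against a certificate, one way to create a matching pair is a self-signed certificate; a sketch, adjust subject, key size and validity to taste:

```shell
# Create a private key (privkey.pem) and a self-signed certificate
# (pubkey.pem). smime -encrypt uses the certificate, -decrypt the key.
openssl req -x509 -newkey rsa:4096 -nodes \
  -keyout privkey.pem -out pubkey.pem \
  -subj "/CN=backup" -days 3650
```

Move the two files into ~/.ssh (or adjust the script's variables) and you are set.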
As usual: YMMV.

08/08/2014

Designing a REST API for eMail

Unencumbered by standards designed by committees, I'm musing about what a REST API for eMail might look like.
A REST API consists of 3 parts: the URI (~ URL for browser access), the verb and the payload. Since I'm looking at browser only access, the structured data payload format clearly will be JSON with the prose payload delivered in MIME format. I will worry about calendar and social data later on.
The verbs in REST are defined by the HTTP standard: GET, POST, PUT, and DELETE. My base URL would be http://localhost:8888/email and then continue with an additional part. Combined with the 4 horsemen verbs I envision the following action matrix:
Read More

06/08/2014

Running vert.x with the OpenNTF Domino API

In the first part I got vert.x 3.0 running with my local Notes client. The challenges mastered there were the 32-bit Java of the Notes client and the usual adjustments to the path variables. Adopting the OpenNTF Domino API required a few more steps:
  1. Set 2 environment variables:
    DYLD_LIBRARY_PATH=/opt/ibm/notes
    LD_LIBRARY_PATH=/opt/ibm/notes
  2. Add the following parameter to your Java command line:
    -Dnotes.binary=/opt/ibm/notes -Duser.dir=/home/stw/lotus/notes/data -Djava.library.path=/opt/ibm/notes
    Make sure it is one line only. (Of course you will adjust the paths to your environment, won't you?)
  3. Add 4 JAR files to the classpath of your project runtime:
    • /opt/ibm/notes/jvm/lib/ext/Notes.jar
    • /opt/ibm/notes/framework/rcp/eclipse/plugins/
      com.ibm.icu.base_3.8.1.v20080530.jar
    • org.openntf.domino.jar
    • org.openntf.formula.jar
    I used the latest build of the latter two JARs from Nathan's branch, so make sure you have the latest. The ICU plug-in is based on the International Components for Unicode project and might get compiled into a future version of the Domino API.
Now the real fun begins. The classic Java API is conceptually single threaded, with all Domino actions wrapped between NotesThread.sinitThread(); and NotesThread.stermThread(); to gain access to the Notes C API. For external applications (the ones running neither as XPages/OSGi nor as agents), the OpenNTF API provides the DominoExecutor.
Read More

24/07/2014

Workflow for beginners, Standards, Concepts and Confusion

The nature of collaboration is the flow of information, so naturally I get asked about workflows and their incarnation in IT systems a lot. Many of the questions point to a fundamental confusion about what workflow is and what it isn't. This entry attempts to clarify concepts and terminology.
Wikipedia sums it up nicely: "A workflow consists of an orchestrated and repeatable pattern of business activity enabled by the systematic organization of resources into processes that transform materials, provide services, or process information. It can be depicted as a sequence of operations, declared as work of a person or group, an organization of staff, or one or more simple or complex mechanisms".
Notably absent from the definition are: "IT system", "software", "flowchart" or "approval". These are all aspects of the implementation of a specific workflow system, not the whole of it. The Workflow Management Coalition (WfMC) has all the specifications, but they might appear as being written in a mix of Technobabble and Legalese, so I sum them up in my own words here:
  • A workflow has a business outcome as a goal. I personally find that a quite narrow definition, unless you agree: "Spring cleaning is serious business". So I would say: a collection of steps, that have been designed to be repeatable, to make it easier to achieve an outcome. So a workflow is an action pattern, the execution of a process. It helps to save time and resources, when it is well designed and can be a nightmare when mis-fitted
  • A workflow has an (abstract) definition and zero or more actual instances where the workflow is executed. Like: "Spring cleaning requires: vacuuming, wiping, washing" (abstract). "Spring cleaning my apartment on March 21, 2014" (actual). Here lies the first challenge: can - and how much - a workflow instance deviate from the definition? How are cases handled when the definition changes in the middle of flow execution? How to handle workflow instances that require more or fewer steps? When is a step mandatory for regulatory compliance?
  • A workflow has one or more actors. In a typical approval workflow the first actor is called requestor, followed by one or more approvers. But actors are not limited to humans and the act of approving or rejecting. A workflow actor can be a piece of software that adds information to a flow based on a set of criteria. A typical architecture for automated actors is SOA
  • Workflow systems have different magnitudes. The flagship products orchestrate flows across multiple independent systems and eventually across corporate boundaries, while I suspect that the actual bulk of (approval) flows runs in self contained applications, that might use coded flow rules, internal or external flow engines
  • On the other end of the scale sits eMail, where the flow and sequence are hidden in the heads of the participants or scribbled into freeform text
  • Workflows can be described in Use Cases, where the quality depends on the completeness of the description, especially the exception handling. A lot of Business Process Reengineering that is supposed to simplify workflows fails due to incomplete exception handling and people start to work "around the system" (eMail flood anyone?)
  • A workflow definition has a business case and describes the various steps. The number of steps can be bound by rules (e.g. "the more expensive, the more approvers are needed" or "if the good transported is HazMat approval by the environmental agency is needed") that get interpreted (yes/no) in the workflow instance
  • Determining the next actor(s) is a task that combines the workflow instance step with a role resolver. That's the least understood and most critical part of a flow definition. Let's look at a purchase approval flow definition: "The requestor submits the request for approval by the team lead, to be approved by the department head for final confirmation by the controller". There are four roles to resolve. This happens in the context of the request and the organisational hierarchy. The interesting question: if a resolver returns more than one person, what to do? Pick one at random, round robin, or something else?
  • A role resolver can run on submission of a flow or at each step. People change roles, delegate responsibilities or are absent, so results change. Even if a (human) workflow step already has a person assigned, a workflow resolver is needed. That person might have delegated a specific flow, for a period (leave) or permanently (work load distribution). So Jane Doe might have delegated all approvals below sum X to her assistant John Doe (not related), but that doesn't get reflected in the flow definition, only in the role resolution
  • Most workflow systems gloss over the importance of a role resolver. Often the role resolver is represented by a rule engine, that gets confused with the flow engine. Both parts need to work in concert. We also find role resolution coded as tree crawler along an organisational tree. Role resolving warrants a complete post of its own (stay tuned)
  • When Workflow is mentioned to NMBU (normal mortal business users), then two impressions pop up instantly: Approvals (perceived as the bulk of flows) and graphical editors. This is roughly as accurate as "It is only Chinese food when it is made with rice". Of course there are ample examples of graphical editors and visualizations. The challenge: the shiny diagrams distract from role definitions, invite overly complex designs and contribute less to a successful implementation than sound business cases and complete exception awareness
  • A surprisingly novel term inside a flow is SLA. There's a natural resistance to the idea that a superior (approver) might be bound by an action of a subordinate to act within a certain time frame. Quite often, making the introduction of SLAs part of a workflow initiative provides an incentive to look very carefully at making processes complete and efficient
  • Good process definitions are notoriously hard to write and document. A lot of implementations suffer from a lack of clear definitions. Even when the what is clear, the why gets lost. Social tools like a wiki can help a lot
  • A good workflow system has a meta flow: a process to define a process. That's the part where you usually get blank stares
  • Read one or another good book to learn more
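The role resolver idea above can be sketched as a tiny interface, separating the flow engine's question "which role acts next?" from the resolver's answer "who currently fills that role?". All names here are hypothetical, not from any product:

```java
import java.util.List;

// Hypothetical sketch of the separation discussed above: the resolver
// is consulted per instance and per step, so delegations and absences
// can change the answer over time.

public class RoleResolverDemo {
    public static void main(String[] args) {
        RoleResolver resolver = new StaticRoleResolver();
        System.out.println(resolver.resolve("Approver",
                new WorkflowInstance("jane.doe")));
        // prints: [team.lead@example.com]
    }
}

class WorkflowInstance {
    final String requestor;
    WorkflowInstance(String requestor) { this.requestor = requestor; }
}

interface RoleResolver {
    // May return more than one person - the flow definition then decides:
    // pick one at random, round robin, or notify all
    List<String> resolve(String role, WorkflowInstance instance);
}

// A deliberately trivial resolver, for illustration only
class StaticRoleResolver implements RoleResolver {
    public List<String> resolve(String role, WorkflowInstance instance) {
        if ("Requestor".equals(role)) {
            return List.of(instance.requestor);
        }
        // Everything else resolves to a fixed team lead
        return List.of("team.lead@example.com");
    }
}
```

A real resolver would consult the directory and a delegation table instead of returning constants; the point is that this lookup lives outside the flow definition.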
There is more to say about workflow, so stay tuned!

21/07/2014

Warriors of Light

Inspired by Paulo Coelho's manual for the Warrior of the Light:

Warriors of Light
We were born from the stars
Descended from the heavens
Armed with compassion
Determined to end the suffering
Subjected to the human condition
Battling ignorance with wisdom
Laying our lives for the liberation from illusion

When you look in the mirror - remember!
You are one of us.

17/07/2014

Adventures with vert.x, 64Bit and the IBM Notes client

The current rising star among web servers is node.js, not least due to the Cambrian explosion of available packages, a clever package management system, and the fact that "any application that can be written in JavaScript, will eventually be written in JavaScript" (according to Jeff Atwood).
When talking to IBM Domino or IBM Connections, node.js allows for very elegant solutions using the REST APIs. However, when talking to an IBM Notes client it can't do much, since an external program needs to be Java or COM, the latter on Windows only.
I really like node.js' event-driven programming model, so I looked around. As a result I found vert.x, which does for the JVM what node.js does for Google's V8 JS runtime. Wikipedia describes vert.x as "a polyglot event-driven application framework that runs on the Java Virtual Machine". Vert.x is now an Eclipse project.
While node.js is tied to JavaScript, vert.x is polyglot and supports Java, JavaScript, CoffeeScript, Ruby, Python and Groovy with Scala and others under consideration.
Task one I tried to complete: run a verticle that simply displays the current local Notes user name. Of course exploring new stuff comes with its own set of surprises. At the time of writing, the stable version of vert.x is 2.1.1, with version 3.0 under heavy development.
Following the discussion, version 3.0 would introduce quite some changes in the API, so I decided to be brave and use the 3.0 development branch to explore.
The fun part: there is not much documentation for 3.x yet, while version 2.x is well covered in various books and the online documentation.
vert.x 3.x is on the cutting edge and uses lambda expressions, so just using Notes' Java 6 runtime was not an option; the Java 8 JRE had to be installed. Luckily that is rather easy.
The class is rather simple, even after including Notes.jar, getting it to run (more below) not so much:
package com.notessensei.vertx.notes;

import io.vertx.core.Handler;
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpServerOptions;
import io.vertx.core.http.HttpServerRequest;
import io.vertx.core.http.HttpServerResponse;

import java.io.IOException;

import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;

public class Demo {
	public static void main(String[] args) throws IOException {
		new Demo();
		int quit = 0;
		while (quit != 113) { // Wait for a keypress
			System.out.println("Press q<Enter> to stop the verticle");
			quit = System.in.read();
		}
		System.out.println("Verticle terminated");
		System.exit(0);
	}

	private static final int listenport = 8111;

	public Demo() {
		Vertx vertx = Vertx.factory.createVertx();
		HttpServerOptions options = new HttpServerOptions();
		options.setPort(listenport);
		vertx.createHttpServer(options)
				.requestHandler(new Handler<HttpServerRequest>() {
					public void handle(HttpServerRequest req) {
						HttpServerResponse resp = req.response();
						resp.headers().set("Content-Type",
								"text/plain; charset=UTF-8");
						StringBuilder b = new StringBuilder();
						try {
							NotesThread.sinitThread();
							Session s = NotesFactory.createSession();
							b.append(s.getUserName());
							NotesThread.stermThread();
						} catch (Exception e) {
							e.printStackTrace();
							b.append(e.getMessage());
						}
						resp.writeStringAndEnd(b.toString());
					}
				}).listen();
	}
}

Starting the verticle looked promising, but once I pointed my browser to http://localhost:8111/ the fun began.

Read More

14/07/2014

Cycle where?

I like to cycle, I do it often, and from time to time I have fun with other traffic participants. One of the interesting challenges is multi-lane crossings (note to my overseas readers: Singapore follows the British system of driving on the left, so cyclists are supposed to cycle on the left edge of the road - which makes me edgy in some situations. So for right-driving countries, just flip the pictures) where the outer lane allows more than one direction. Like these:

Empty road
Road rules do require the bike to stay on the left (I didn't find the cycle symbol in SmartDraw, so I used a motorbike, just picture me on a bike there).
Staying right
What happens then, and I've seen it quite often, is a vehicle closing up next to the cyclist. After all, the reason we stay on the left is not to obstruct other traffic. But that's roadkill waiting to happen. I've been kicked off a motorbike once (and got my shoulder dislocated) because a car felt it was OK to turn right while I was to the right of it. So I'm sensitive to this problem:
Roadkill waiting
As a result, when there's more than one lane in the direction I need to go and there is ambiguity about where traffic in the same lane might be going, I make sure it won't happen by occupying the same space as "the big boys". However, I pick my trajectory so that I end up at the left edge once I have cleared the crossing:
Learning new Hokkien
Unsurprisingly some of the motorists aren't happy; after all, they lose 2-3 seconds of their journey. So I wonder: is there a better way? Is that behaviour compliant with traffic rules? And what do all those rude-sounding Hokkien terms I hear actually mean?

Disclaimer

This site is in no way affiliated, endorsed, sanctioned, supported, nor enlightened by Lotus Software nor IBM Corporation. I may be an employee, but the opinions, theories, facts, etc. presented here are my own and are in no way given in any official capacity. In short, these are my words and this is my site, not IBM's - and don't even begin to think otherwise. (Disclaimer shamelessly plugged from Rocky Oliver)
© 2003 - 2014 Stephan H. Wissel - some rights reserved as listed here: Creative Commons License
Unless otherwise labeled by its originating author, the content found on this site is made available under the terms of an Attribution/NonCommercial/ShareAlike Creative Commons License, with the exception that no rights are granted -- since they are not mine to grant -- in any logo, graphic design, trademarks or trade names of any type. Code samples and code downloads on this site are, unless otherwise labeled, made available under an Apache 2.0 license. Other license models are available on written request and written confirmation.