
About Me

I am the "IBM Collaboration & Productivity Advisor" for IBM Asia Pacific. I'm based in Singapore.


Adding Notes data to WebSphere Content Manager via RSS

WebSphere Content Manager (WCM) can aggregate information from various sources in many formats. A quick and dirty way to add Domino data is to use RSS. The simplest way is to add a page (not an XPage, a classic Page), define its content type as application/rss+xml and add a few lines to it:
<rss version="2.0">
	<channel>
		<title><Computed Value></title>
		<link><Computed Value>feed.xml</link>
		<description>Extraction of data from the Audience Governance database</description>
		<lastBuildDate><Computed Value></lastBuildDate>
		[Embedded view here]
	</channel>
</rss>

Thereafter create a view with passthrough HTML that renders all the values for an item element. Of course that is super boring, therefore you can use the following code to speed this up.
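For orientation, here is a sketch of what the passthrough HTML for a single item element could look like (the field values are placeholders, not the actual view design):

```xml
<item>
	<title>Subject of the document</title>
	<link>http://server/database.nsf/feedview/UNID-of-the-document</link>
	<description>Summary of the document</description>
	<pubDate>Creation date in RFC 822 format</pubDate>
	<guid isPermaLink="false">UNID-of-the-document</guid>
</item>
```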
Read More


Midget Imaginaire - Scheinzwerg

The imaginary midget

This is an attempt to transpose a concept deeply rooted in the German cultural context into another language. Bear with me.
Over at Omnisophie Professor Dueck has a column titled "Scheinzwerge" (loosely translated: imaginary dwarfs|midgets|gnomes), dealing, among other things, with the Greek crisis.
It draws heavily on a very German childhood classic, "Jim Button and Luke the Engine Driver". In this famous children's book and marionette play we meet Mr. Tur Tur, who is an imaginary giant (Scheinriese). From a distance Mr. Tur Tur looks like a huge giant, but the closer you come, the more normal he appears, until you are close enough to see that he is a person of normal size.
Now the "Scheinzwerg" in Dueck's article is just the opposite: the further away you are the smaller it appears. Once you are close, you see the real dimension, which tends to be way bigger than estimated, imagined or even feared.
In real life that doesn't refer to people but rather to tasks, problems or missions.
We are all familiar with "Scheinriesen": those impossibly huge-looking tasks (learn to swim, to cycle, to play an instrument, or ask for permission) that shrank once we got close.
The other type is just as common, but hidden in plain sight. So I shall name it "Midget Imaginaire", short MI - which is the accepted abbreviation for what it turns into when you get close enough: "Mission Impossible". Now if you happen to be Ethan Hunt, all is good. For the rest of us, some samples:
  • We will grow double digits, faster than the market
  • Just change the application architecture the week before go-live
  • The [insert-crisis] can be easily solved by [insert 140 characters or less]
  • There are just 5 little changes, the deadline must not be moved
  • Become world champion, we know how: run 100m in 8 sec
  • Hire 9 women to give birth to one child in a month
The MIs are the single biggest source of eternal tension between management (corporate and political) and executing experts (anyone: "I don't want to hear problems, I want solutions, you have 10 minutes").
Since management is (necessarily?) at a distance from operations (the big picture needs a vantage point to be seen), a lot of MIs appear really tiny (let's just hire the right talent, never mind that pay, reputation and the market don't make them available to us) and stuttering in execution is interpreted as incompetence or defiance.
In return the "Gods from Olympus" are seen as living in heavenly spheres (also known as management reality distortion field).

The solution is simple (I hope you can see the irony in this statement): We need to add "watching out for Midgets Imaginaire" to our professional portfolio of conduct.

Read More


Validating JSON object

One of the nice tools for rapid application development in Bluemix is Node-RED, which escaped from IBM research. One passes a msg JSON object between nodes that process (mostly) the msg.payload property. A feature I like a lot is the ability to use an http input node that can listen to a POST on a URL and automatically translates the posted form into a JSON object.
The conversion is indiscriminate, so any field that is added to the form will end up in the JSON object.
In a real-world application that's not a good idea: an object shouldn't have unexpected properties. I had asked before, so it wasn't too hard to derive a function I could use in Node-RED:
Cleaning up an incoming object - properties
this.deepclean = function(template, candidate, hasBeenCleaned) {
			var cleandit = false;
			for (var prop in candidate) {
				if (template.hasOwnProperty(prop)) {
					// We need to check strict clean and recursion
					var tProp = template[prop];
					var cProp = candidate[prop];
					// Case 1: strict checking and types are different
					if (this.strictclean && ((typeof tProp) !== (typeof cProp))) {
						delete candidate[prop];
						cleandit = true;
					// Case 2: both are objects - recursion needed
					} else if (((typeof tProp) === "object") && ((typeof cProp) === "object")) {
						cleandit = this.deepclean(tProp, cProp, (hasBeenCleaned || cleandit));
						candidate[prop] = cProp;
					}
				// Case 3: the property is not in the template
				} else {
					delete candidate[prop];
					cleandit = true;
				}
			}
			return (hasBeenCleaned || cleandit);
		};
The function is called with the template object, the incoming object and the initial parameter false. While the function could easily be used inside a function node, the better option is to wrap it into a node of its own, so it is easy to use anywhere. The details of how to do that can be found on the Node-RED website. The easiest way to try the function: add your Node-RED project to version control, download the object cleaner node and unzip it into the nodes directory. Works in Bluemix and in a local Node-RED installation.
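To see the cleaning behaviour outside Node-RED, here is a self-contained sketch; the template, the sample payload and the `node` stand-in object are invented for illustration:

```javascript
// "node" stands in for the Node-RED node context the function normally lives in
var node = { strictclean: true };

node.deepclean = function (template, candidate, hasBeenCleaned) {
  var cleanedIt = false;
  for (var prop in candidate) {
    if (template.hasOwnProperty(prop)) {
      var tProp = template[prop];
      var cProp = candidate[prop];
      if (this.strictclean && (typeof tProp !== typeof cProp)) {
        // strict mode: same property name but different type gets dropped
        delete candidate[prop];
        cleanedIt = true;
      } else if (typeof tProp === "object" && typeof cProp === "object") {
        // nested objects are cleaned recursively
        cleanedIt = node.deepclean(tProp, cProp, hasBeenCleaned || cleanedIt);
        candidate[prop] = cProp;
      }
    } else {
      // property is not in the template at all - remove it
      delete candidate[prop];
      cleanedIt = true;
    }
  }
  return hasBeenCleaned || cleanedIt;
};

var template = { name: "", age: 0, address: { city: "" } };
var incoming = {
  name: "Alice",
  age: "42", // wrong type: the template expects a number
  address: { city: "Singapore", geo: "1.29,103.85" }, // geo is not in the template
  evil: "unexpected payload" // not in the template either
};

var wasCleaned = node.deepclean(template, incoming, false);
// incoming is now { name: "Alice", address: { city: "Singapore" } }
console.log(wasCleaned, JSON.stringify(incoming));
```

The return value tells you whether anything had to be removed, which is a handy signal for logging or rejecting suspicious requests.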

Read More


Random insights in Bluemix development (a.k.a. Die Leiden des jungen W - "The Sorrows of Young W")

Each platform comes with its own little challenges, things that work differently than you expect. Those little things can easily steal a few hours. This post collects some of my random insights:
  • Development cycle

    I'm a big fan of offline development. My preferred way is to use a local git repository and push my code to the Bluemix DevOps service to handle compilation and deployment. It comes with a few caveats:
    • When you do anything beyond basic Java, you want to use Apache Maven. The dependency management is worth the learning curve. If you started with the Java boilerplate, you end up with an ANT project. Take some time to not only mavenize it, but also adjust the directories to follow the Maven standards. This involves shuffling a few files around (/src vs. /src/main/java and /bin vs. /target/main/java for starters) and editing the pom.xml to remove the custom paths
    • Make sure you clear out the path in the build job on DevOps; Maven already deploys to target. If you have specified target in DevOps, you end up with the code in target/target and the deploy task won't find anything
    • Learn about the Liberty profile and its available features, so you can properly specify <scope>provided</scope> in the pom.xml
    • In node.js, when you manually install a module into node_modules that isn't pulled from a repository through an entry in package.json, that module will not be visible to standard build and deploy, since (surprise, surprise) node_modules are excluded from version control and build checkout.
      Now there are a bunch of workarounds described, but I'll sum it up: don't bother. Either you move your module into a repository DevOps can reach or you build the application locally and use cf push
    • manifest.yml is your friend. Learn about it. Especially the path entry. When deploying a Maven build your path will be /target/[name-of-app]-[maven-version].war
    • You can specify a buildpack and environment parameters in a manifest. Works like a charm. However removing them from the manifest has no effect. You have to manually unset the values using the cf tool. Also the buildpack needs to be reset manually, so be careful there!
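To illustrate the last two points, here is a minimal manifest.yml sketch for a Maven-built war; the application name, memory and version values are invented:

```yaml
applications:
- name: demo-app
  memory: 512M
  path: target/demo-app-1.0.0.war
  buildpack: liberty-for-java
  env:
    APP_MODE: production
```

Remember: deleting the buildpack or env entries from this file later does not unset them on Bluemix; use the cf tool (e.g. cf unset-env) to clear them.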
  • Services

    The automagical configuration of services is one of the things to love in Bluemix. This especially holds true for Java:
    • The samples suggest using the VCAP_SERVICES environment variable to get credentials and URLs for your services. In short: don't. The Java Liberty buildpack does a nice job making the values available through JNDI or Spring. So simply use those. To make sure that java:comp/env can see them properly, don't forget to reference them in web.xml
    • As an aside: I found the MQ Light Java classes less stressful than configuring JMS via JNDI. The developers did a good job making that library, too, work automagically on Bluemix.
    • For some services (e.g. the JAX-RS 2.0 client or Bluemix SSO) you do have to touch the server.xml.
      The two methods are a packaged server or a server directory. The former requires a locally installed Liberty profile, so I prefer the latter. It is actually easier than it sounds. In your (Maven) project, you create the new directories defaultServer and defaultServer/apps (case sensitive!). You create/edit the server.xml in the defaultServer directory. Then check for your Maven plugin in pom.xml and change the output directory:

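The original snippet did not survive here; a sketch of what such a maven-war-plugin configuration with the adjusted output directory could look like (the exact plugin setup in the post may have differed):

```xml
<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-war-plugin</artifactId>
	<configuration>
		<outputDirectory>${basedir}/defaultServer/apps</outputDirectory>
	</configuration>
</plugin>
```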
      Then you can deploy your application using mvn install and cf push [appname] -p defaultServer. These two commands work in DevOps too!
    • The SSO service is "Single Sign On"; there is no real "Single Sign Out". That's not an issue specific to Bluemix, but something all SSO solutions struggle with, just to be clear what to expect. The login dialog is ugly, but fully customizable. The nature of SSO (corporate and/or a public provider) makes it a minimal provider: identity only, no roles, attributes or groups. In the spirit of micro services: build a REST-based service for that
  • Node-RED

    While it is advertised as an IoT tool, there is much more to this little gem
    • Node-RED runs on Bluemix, your local PC or even a Raspberry Pi. For the latter, head over to The Thingbox to get a ready-made OS image
    • Node-RED can be easily expanded; there are tons of ready-made modules at Node-RED flows. Not all are suitable for Bluemix (e.g. the ones talking Bluetooth), but a local Node-RED can easily talk to a Bluemix Node-RED, making it easy for applications to run distributed
    • My little favourite: connect an HTTP post input directly to a Cloudant output. Node-RED converts the encoded form into a JSON object you can drop into the database as is. You might want to add a small filter (a function node) to avoid data contamination
As usual YMMV


Investigating JNDI

When developing in Java, locally or for Bluemix, a best practice is to use JNDI to access the resources and services you use. In Cloud Foundry all services are listed in the VCAP_SERVICES environment variable and could be parsed as a JSON string. However, this would make the application platform-dependent, which is something you want to avoid.
Typically a JNDI service requires editing the server.xml to point to the right service. However, editing the server.xml in Bluemix is something you want to avoid as much as possible. Luckily the WebSphere Java Liberty buildpack, which is the one Bluemix uses for Java by default, handles that for you automagically, and all Bluemix services turn into discoverable JNDI objects. So far the theory. I found myself in the tricky situation of needing to check what services are actually there. So I wrote some code that turns the available JNDI objects into a JSON string.
    public Response getJndi() {
        StringBuilder b = new StringBuilder();
        b.append("{ \"java:comp\" : [");
        this.renderJndi("java:comp", b);
        b.append("]}");
        return Response.status(Status.OK).entity(b.toString()).build();
    }

    private void renderJndi(String prefix, StringBuilder b) {
        boolean isFirst = true;

        try {
            InitialContext ic = new InitialContext();
            NamingEnumeration<NameClassPair> list = ic.list(prefix);
            while (list.hasMore()) {
                if (!isFirst) {
                    b.append(", \n");
                }
                NameClassPair ncp = list.next();
                String theName = ncp.getName();
                String className = ncp.getClassName();

                b.append("{\"name\" : \"");
                b.append(theName);
                b.append("\", ");
                b.append("\"javaClass\" : \"");
                b.append(className);
                b.append("\"");

                if ("javax.naming.Context".equals(className)) {
                    // A subcontext - recurse into it
                    b.append(", \"children\" : [");
                    this.renderJndi(prefix + (prefix.endsWith(":") ? "" : "/") + theName, b);
                    b.append("]");
                }
                b.append("}");
                isFirst = false;
            }
        } catch (Exception e) {
            // Branches we are not allowed to list end up here - ignore them
        }
    }


Enjoy - as usual, YMMV


Adventures with Node-RED

Node-RED is a project that successfully escaped "ET" - not the alien, but IBM's Emerging Technology group. Built on top of node.js, Node-RED runs in many places, including the Raspberry Pi and IBM Bluemix.
In Node-RED the flow between nodes is graphically represented by lines you drag between them, requiring just a little scripting to get them going.
The interesting part is the set of nodes that are available (unless you fancy writing your own): a large array of ready-made flows with nodes and sample applications makes Node-RED extremely flexible (I wonder if it would make sense to build a workflow engine with it). In case you don't find a node you fancy, you can build your own. Not all nodes are created equal, so you need to check what works. When you run Node-RED on Bluemix, you won't get access to hardware like a serial port or Bluetooth, but you gain a DNS-addressable IP endpoint (and you are not limited to http(s)). Furthermore, IBM provides direct access to the IBM IoT cloud, which takes the headache out of device configuration by providing an extensive set of device libraries.
So how to get additional nodes, own or others, onto Bluemix? Here are the steps:
  1. create a new application with the IoT Boilerplate
  2. link that application to version control
  3. clone the repository locally (git clone ...)
  4. edit package.json and add the module you would like to include
  5. commit and push the changes back to jazzhub and let "build and deploy" sort it out
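Step 4 above, sketched as a package.json fragment pulling in an extra node module (the suncalc node and the version ranges are just examples):

```json
{
  "name": "my-nodered-app",
  "version": "0.0.1",
  "dependencies": {
    "node-red": "0.x",
    "node-red-node-suncalc": "0.x"
  }
}
```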

Read More


Your API needs a plan (a.k.a. API Management)

You drank the API Economy Kool-Aid and created some neat https-addressable calls using Restify or JAX-RS. Digging deeper into the concept of micro services, you realize: an https-callable endpoint doesn't make an API. There are a few more steps involved.
O'Reilly provides a nice summary in the book Building Microservices, so you might want to add that to your reading list. In a nutshell:
  • You need to document your APIs. The most popular tools here seem to be Swagger and WSDL 2.0 (I also like Apiary)
  • You need to manage who is calling your API. The established mechanism is to use API keys. Those need to be issued, managed and monitored
  • You need to manage when your API is called. Depending on the ability of your infrastructure (or your ability to pay for scale out) you need to limit the rate your API is called by second, hour or billing period
  • You need to manage how your API is called. In which sequence, is the call clean, where does it come from
  • You need to manage versions of your API, so innovations and improvements don't break existing code
  • You need to manage the grouping of your endpoints into "packages" like: free API, freemium API, partner API, pro API etc. Since the calls will overlap, building code for the bundles would lead to duplicates
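For the documentation point above, a minimal Swagger 2.0 sketch of a single endpoint; the title, path and parameter names are invented:

```yaml
swagger: "2.0"
info:
  title: Demo API
  version: "1.0.0"
basePath: /api/v1
paths:
  /things/{id}:
    get:
      summary: Fetch a single thing
      parameters:
        - name: id
          in: path
          required: true
          type: string
      responses:
        "200":
          description: The requested thing
```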
And of course, all of this needs statistics and monitoring. Adding that to your code will create quite some overhead, so I would suggest: use a service for that.
In IBM Bluemix there is the API Management service. This service isn't a new invention, but the existing IBM Cloud API management made available in a consumption-based pricing model.
Your first 5,000 calls are free, as is your first developer account. After that it is less than 6 USD (pricing as of May 2015) for 100,000 calls. This provides a low-investment way to evaluate the power of IBM API Management.
API Management IBM Style
The diagram shows the general structure. Your APIs only need to talk to the IBM cloud, removing the headache of security, packet monitoring etc.
Once you build your API, you then expose it back to Bluemix as a custom service. It will appear like any other service in your catalogue. The purpose of this is to make it simple to use those APIs from Bluemix - you just read your VCAP_SERVICES.
But you are not limited to using these APIs from Bluemix. You can call the IBM API Management directly (your API partners/customers will like that) from whatever has access to the Intertubes.
There are excellent resources published to get you started. Now that you know why, check out the how. If you're not sure about that whole micro services thing, check out Chris' example code.
As usual YMMV


The Rise of JavaScript and Docker

I loosely used JavaScript in this headline to refer to a set of technologies: node.js, Meteor, Angular.js (or React.js). They share a commonality with Docker that explains their (pun intended) meteoric rise.
Lets take a step back:
JavaScript on the server isn't exactly new. The first server-side JavaScript was implemented in 1998, and the union mount that made Docker possible is from 1990. Client-side JavaScript frameworks are plenty too. So what made the mentioned ones so successful?
I make the claim that it is machine readable community. This is where these tools differ. node.js is inseparable from its package manager npm. Docker is unimaginable without its registry, and Angular/React (as well as jQuery) live on cushions of myriads of plug-ins and extensions. While the registries/repositories are native to Docker and node.js, the front-ends take advantage of tools like Bower and Yeoman that make all the packages feel native.
These registries aren't read-only, which is a huge point. By providing the means of direct contribution and/or branching on GitHub, the process of contribution and consumption became two-way. The mere possibility to "give back" created a stronger sense of belonging (even if that sense might not be fully conscious).
Machine readable community is a natural evolution born out of the open source spirit. For decades developers have collaborated using chat (IRC, anyone?), discussion boards, Q&A sites and code sharing places. With the emergence of Git and GitHub as the de facto standard for code sharing, the community was ready.
The direct access from scripts and configurations to the source repository replaced the flow of "human vetting, human download, human unpack and copy to the right location" with "specify what you need and the machine will know where to get it". Even this idea wasn't new. In the Java world, Maven has provided that functionality since 2002.
The big difference now: Maven wasn't native to Java, as it required a change of habit. Things are done differently with it than without. npm, on the other hand, is "how you do things in node.js". Configuring a Docker container is done using the registry (and you have to put in extra effort if you want to avoid that).
So all the new tooling uses repositories as "this is how it works" and complements human readable community with machine readable community. Of course, there is technical merit too - but that has been discussed elsewhere at great length.


Cloud with a chance of TAR balls (or: what is your exit strategy)

Cloud computing is here to stay, since it does have many benefits. However, even unions made "until death do us part" come with prenuptial agreements these days. So it is prudent for your cloud strategy to contemplate an exit strategy.
Such a strategy depends on the flavour of cloud you have chosen (IaaS, PaaS, SaaS, BaaS) and might require adjusting the way you on-board in the first place. Let me shed some light on the options:

Infrastructure as a Service (IaaS)
When renting virtual machines from a book seller, a complete box from a classic hosting provider, or a mix of bare metal and virtual boxes from IBM, the machine part is easy: can you copy the VM image over the network (SSH, HTTPS, SFTP) to a new location? When you have a bare metal box that won't work (there isn't a VM after all), so you need a classic "move everything inside" strategy.
If you drank the Docker Kool-Aid, the task might just be broken down into manageable chunks, thanks to the containers. Be aware: Docker welds you to a choice of host operating systems (and Windows isn't currently on the host list).
There are secondary considerations: how easy is it to switch the value-added services like DNS, CDN, management console etc. on/off or to another vendor?

Platform as a Service (PaaS)
Here you need to look separately at the runtime and the services you use. Runtimes like Java, JavaScript, Python or PHP tend to be offered by almost all vendors; dotNet and C# not so much. When your cloud platform vendor has embraced an open standard, it is most likely that you can deploy your application code elsewhere too, including back into your own data center or onto a bunch of rented IaaS devices.
It gets a little more complicated when you look at the services.
First look at persistence: is your data stored in a vendor-proprietary database? If yes, you probably can export it, but you need to switch to a different database when switching cloud vendors. This means you need to alter your code and retest (but you do that with CI anyway, right?). So before you jump onto DocumentDB or DynamoDB (which run in a single vendor's PaaS only), you might want to check out MongoDB, CouchDB (and its commercial siblings Cloudant and Couchbase), Redis or OrientDB, which run in multiple vendor environments.
The same applies to SQL databases and blob stores. This is not a recommendation for a specific technology (SQL vs. NoSQL or Vendor A vs. Vendor B), but an aspect you must consider in your cloud strategy.
The next check point are the services you use. Here you have to distinguish between common services, that are offered by multiple cloud vendors: DNS, auto scaling, messaging (MQ and eMail) etc. and services specific to one vendor (like IBM's Watson).
Taking a stand of "If a service isn't offered by multiple vendors, we won't use it" can help you avoid lock-in, but it will also ensure that you stifle your innovation. After all, you use a service not for the sake of the service, but to solve a business problem and to innovate.
The more sensible approach would be to check if you can limit your exposure to a vendor to those special services only, should you decide to move on. This gives you the breathing space to then look for alternatives. Adding a market watch to see how alternatives evolve improves your hedging.
Services are the "Damned if you do, damned if you don't" area of PaaS. All vendors scramble to provide top performance and availability for the common platform and distinction in the services on top of that.
After all, one big plus of the PaaS environment are the services that enable "composable businesses" - and save you the headache of coding them yourself. IMHO the best risk mitigation, and incidentally state of the art, is sound API management a.k.a. micro services.
Once you are there, you will learn, that a classic Monolithic Architecture isn't cloud native (Those architectures survive inside of Virtual Machines) - but that's a story for another time.

Software as a Service (SaaS)
Here you deal with applications like IBM Connections Cloud S1, Google Apps for Work, Microsoft Office 365, Salesforce, SAP SaaS, but also Slack, Basecamp, GitHub and gazillions more.
Some of them (e.g. eMail or documents) have open-standard or industry-dominating formats. Here you need to make sure you get the data out in that format. I like the way Google is approaching this task: they offer Google Takeout, which tries to stick to standard formats and offers all data, any time, for export.
Others have at least machine-readable formats like CSV, JSON or XML. The nice challenge: getting data out is only half the task. Is your new destination capable of taking it back in?

Business Process as a Service (BaaS)
In a business process as a service (BaaS) scenario the same considerations as in the SaaS environment come into play: can I export data in a machine-readable, preferably industry-standard format? E.g. you used a payroll service and want to bring it back in-house or move to a different service provider. You need to make sure your master data can be exported and that you have the reports for historical records. When covered in reports, you might get away without transactional data. Typical formats are: CSV, JSON, XML

As you can see, not rocket science, but a lot to consider. For all options the same questions apply: do you have what it takes to move? Is there enough bandwidth (physical and mental) to pull it off? So don't get carried away with the wedding preparations and check your prenuptials.


email Dashboard for the rest of us - Part 2

In Part 1 I introduced a potential set of Java interfaces for the dashboard. In this installment I'll have a look at how to extract this data from a mail database. There are several considerations to be taken into account:
  • The source needs to supply data only from a defined range of dates - I will use 14 days as an example
  • The type of entries needed are:
    • eMails
    • replies
    • Calendar entries
    • Followups I'm waiting for
    • Followups I need to action
  • Data needs to be available in detail and in summary (counts)
  • The people involved come as Notes addresses, groups and internet addresses; they need to be dealt with
Since I have more than a hammer, I can split the data retrieval into different tooling. Dealing with names vs. groups is something best done with LDAP code or lookups into an address book. So I leave that to Java later on. Also running a counter when reading individual entries works quite well in Java.
Everything else, short of the icons for the people, can be supplied by a classic Notes view (your knowledge of formula language finally pays off).
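A sketch of what the selection formula for such a view could look like; the form names and the 14-day window are illustrative, not the actual mail template design:

```
REM {Mail, replies and calendar entries from the last 14 days};
SELECT (Form = "Memo":"Reply":"Appointment") & (@Created >= @Adjust(@Now; 0; 0; -14; 0; 0; 0))
```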
Read More


This site is in no way affiliated, endorsed, sanctioned, supported, nor enlightened by Lotus Software nor IBM Corporation. I may be an employee, but the opinions, theories, facts, etc. presented here are my own and are in no way given in any official capacity. In short, these are my words and this is my site, not IBM's - and don't even begin to think otherwise. (Disclaimer shamelessly plugged from Rocky Oliver)
© 2003 - 2015 Stephan H. Wissel - some rights reserved as listed here: Creative Commons License
Unless otherwise labeled by its originating author, the content found on this site is made available under the terms of an Attribution/NonCommercial/ShareAlike Creative Commons License, with the exception that no rights are granted -- since they are not mine to grant -- in any logo, graphic design, trademarks or trade names of any type. Code samples and code downloads on this site are, unless otherwise labeled, made available under an Apache 2.0 license. Other license models are available on written request and written confirmation.