Usability - Productivity - Business - The web - Singapore & Twins

By Date: January 2013

Exceptional Customer Experience - of the bad kind (yes, Virgin Atlantic, I am talking about you)

An action-filled week at Connect 2013 drew to a close and I'm ready to go home (with a little stopover in MUC). Presuming my trip is well taken care of by American Express Travel and the majority-Singapore-Airlines-owned Virgin Atlantic, I arrive at Orlando Airport to check in. At the display I see that the flight is delayed, which would give me an incredibly short 20 minutes to change planes in Manchester.
Given that there's a silly system in place that requires me to take a shuttle from Terminal 2 to Terminal 1 and back, that seems impossible. So I innocently ask what to do. The check-in attendant doesn't know and calls in the supervisor. He looks at the ticket and tells me: my flight is with Virgin to Manchester and everything else is not his business. He stresses, pointing at the ticket: "This is a legal document, and for us your destination is Manchester." If anything, I need to check with Amex. So I call Amex, go through phone hell and get a ticket on LH from Manchester to Munich. Having that sorted, I ask to get my baggage checked through. I get the same @#$%* answer: for Virgin my journey ends in Manchester and I have to go through immigration, get my luggage and check in again.
So I ask: if my ticket were an SIA ticket (the flight is a code share), would he then need to take care of me? The answer: yes, that would be completely different, and it would be Virgin's problem to get me sorted. The fact that SIA (still) owns a majority stake in Virgin doesn't matter for my situation.
Amex, who calls their phone operators "Travel counsellors", failed to highlight that with two separate tickets any delay would become my problem. To add insult to injury: I had an all-SIA ticket first, but Amex claimed they couldn't update one flight leg (which the SIA help line claimed they actually could), so they cancelled that ticket and split it in two.
In hundreds of travel days for IBM I have never experienced such exceptionally BAD service.
Update: Arriving in MAN, the ground staff was waiting for me and I would have made the connection. But when they saw that the luggage wasn't checked through to Munich, they couldn't rush me to the plane, but had to ask me to pass through immigration, fetch my luggage and check it in again. Needless to say, that made it impossible to get to the (only) SIA flight that day. So I got on the LH flight. In Munich I called SIA to clarify that I would continue on the second leg of the trip, which turned out to come with a higher ticket price (since the trip was shorter now) and a service fee. This will make for an interesting debriefing.

Posted by on 31 January 2013 | Comments (2) | categories: Travel

Connect 2013 in one picture

Where Knowledge Goes To Die
Update: The tombstone sans the text is from a political blog by Andy Barefoot, who provides a tombstone generator. The page states "create your own poster", which may or may not amount to a copyright statement for a generated poster. The sentence "eMail is where knowledge goes to die" is usually attributed to Bill French, but might have many parents. "Stop sending, start sharing" is my sentence, but someone may well have said it before (I just didn't come across it).
Now it is anybody's guess whether a) the combination of all these constitutes a new asset in its own right, b) the (re)use of the stated items is covered by fair use or a license, or c) the necessity to think about any of this is plain mad.
If your conclusion is: It constitutes an original art work by me, then the Creative Commons License as stated below would be in effect.

Posted by on 31 January 2013 | Comments (6) | categories: IBM

Running a CouchDB with the authenticated Apache HTTP user

Apache CouchDB shares the same Stallgeruch (the smell of a common stable, as Germans would say) with Domino, thanks to the two having shared a warden at some point. So during the festive season I gave it a spin.
There is ample literature around to get you started with CouchDB including Apache's own wiki.
So I was looking for something more sporty.
Since Domino 9.0 ships with IBM's version of the Apache HTTP Server, I was wondering if I could set up CouchDB behind an Apache reverse proxy and make CouchDB recognize the authenticated user accessing it - a kind of poor man's single sign-on.
I used Apache's basic authentication (only via HTTPS, please), but in theory this would work with any authentication scheme that yields a username as its outcome.
The whole solution required a chicken-wire-and-duct-tape combination of Apache modules, but works surprisingly well.
The participants:
  • proxy_authentification_handler (note the unconventional spelling): a CouchDB module that accepts authentication information in the request header. You have to add it to the httpd section for the key authentication_handlers. My entry looks like this: {couch_httpd_oauth, oauth_authentication_handler}, {couch_httpd_auth, proxy_authentification_handler}, {couch_httpd_auth, cookie_authentication_handler}, {couch_httpd_auth, default_authentication_handler}
  • mod_headers: create, remove and alter headers. Anything coming in gets stripped of potentially fake identity headers before the CouchDB headers are reapplied.
  • mod_proxy: The core proxy capability
  • mod_rewrite: the dark magic of Apache. Used here mainly to look up roles
  • mod_auth_basic: used for authentication here; any other mechanism should work too
The whole magic lies in the Apache configuration (typically found in /etc/apache2/sites-enabled). Here is what worked for me:
  • In lines 5-7 I remove any header that might be in the original request, to prevent identity spoofing
  • Line 10 allows slashes to be transmitted encoded. I found it wouldn't work without that
  • Lines 13-19 are standard Apache static file serving
  • Lines 22-25 establish the regular reverse proxy pattern with forward proxying switched off, nothing special there
  • Line 28 defines a simple lookup map, which in a production system would probably be an LDAP or database query
  • Lines 31-36 establish the authentication mechanism. For a production system you would use something more sophisticated
  • Line 39 is essential: it simply states: only authenticated users here, please
  • The dark magic happens in lines 42-48
  • Lines 43 and 45 extract the identified user for use in a RewriteRule. It seems you can use the extracted variable only once, hence the duplicate lines (I might also simply not be skilled enough)
  • Line 44 assigns the current user to the variable CUSER
  • Line 46 looks up the roles the user has into CROLE. Make sure your admin user has the role _admin. Multiple entries are separated by comma, with no spaces. If a user has no roles, (s)he is assigned the guest role
  • Lines 47/48 finally add them to the header
  • I didn't use the Token in this example
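The full listing is longer than what fits here, so as an illustration, here is a minimal sketch of the core pattern (my own reconstruction: the directives and the X-Auth-CouchDB-* header names are real, but the paths, port and map file are made up, and the line numbers will not match the walkthrough above):

```apache
# /etc/apache2/sites-enabled/couch (sketch - names and paths are invented)
AllowEncodedSlashes On

# Role lookup map; in production this would be an LDAP or database query
RewriteMap couchroles txt:/etc/apache2/couchroles.txt

<Location /couchdb>
    # Basic auth only as a placeholder - use something stronger in production
    AuthType Basic
    AuthName "CouchDB"
    AuthUserFile /etc/apache2/couchusers
    Require valid-user

    # Strip whatever identity headers the client might have faked
    RequestHeader unset X-Auth-CouchDB-UserName
    RequestHeader unset X-Auth-CouchDB-Roles
    RequestHeader unset X-Auth-CouchDB-Token

    # Extract the authenticated user (LA-U: look-ahead, since REMOTE_USER
    # is not set yet at rewrite time) and look up roles, guest as default
    RewriteEngine On
    RewriteCond %{LA-U:REMOTE_USER} (.+)
    RewriteRule .* - [E=CUSER:%1]
    RewriteCond %{LA-U:REMOTE_USER} (.+)
    RewriteRule .* - [E=CROLE:${couchroles:%1|guest}]

    # Reapply them as the headers CouchDB's proxy handler expects
    RequestHeader set X-Auth-CouchDB-UserName "%{CUSER}e"
    RequestHeader set X-Auth-CouchDB-Roles "%{CROLE}e"

    # Hand over to CouchDB
    ProxyPass http://127.0.0.1:5984/ nocanon
    ProxyPassReverse http://127.0.0.1:5984/
</Location>
```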
As usual YMMV - enjoy

Read more

Posted by on 23 January 2013 | Comments (2) | categories: CouchDB

10 Commandments for public facing web applications

A customer recently asked how a public-facing web application on Domino would differ from an intranet application. In general there shouldn't be a difference. However, in a cost/benefit analysis intranets are usually considered "friendly territory", so less effort is spent on hardening them against attacks and poking around (much to my delight when I actually poke around). With this in mind, here you go (in no specific order):
  1. Protect your server: Typically you would have a firewall and reverse proxy that provides access to your application.
    It should be configured to check URLs carefully to ensure no unexpected calls are made by somebody probing database URLs. It is quite some work to get that right (for any platform), but you surely don't want to become "data leak" front-page news.
    There's not much to do on the Domino side; it is mostly the firewall guys' work. Typical attack attempts include things like ?ReadViewEntries, $Defaultxx or $First. Of course, when you use Ajax calls into views, you need to cater for that.
    I would block *all* ?ReadViewEntries and have URL masks for the Ajax calls you plan to use. Be careful with categorized views. Avoid them if possible and always select "hide empty categories". Have an empty $$ViewTemplateDefault that redirects to the application
  2. Mask your URLs: Users shouldn't go to "/newApp/Loans2013/loanapplication.nsf?Open" but to "/loans". Use Internet site documents to configure that (possibly the firewall/reverse proxy can do that too). In Notes 9.0 IBM provides mod_domino, so you can use the IBM HTTP Server (a.k.a. the Apache HTTP Server) to front Domino. On the XPages wiki there is more information on securing URLs with redirects. Go and read it
  3. Harden your agents: Do not allow any ?OpenAgent URL (challenge: an agent also opens on ?Open, so if all agents follow a certain naming convention you can use a URL pattern to block them). In an agent, make sure your code handles errors properly. Check where the call to an agent came from; if it was called directly, discard it.
  4. Treat data with suspicion: Do not rely on client-side validation. Providing it is nice for the user as a comfortable input aid. However, you don't control the devices and browsers anymore, and an attacker can use Firebug or curl to bypass any of your validations. You have to validate everything on the server (again). You also have to check content for unexpected input like passthru HTML or JavaScript. XPages does that for you
  5. Know your user: Split your application into more than one database: one for the publicly accessible content (anonymous access) and one that requires authentication. Do not try to dodge authenticated users and re-invent security mechanisms. You *will* overlook something, and then your organisation makes headline news in the "latest data breach" section. There are ample examples of how to generate LTPA tokens outside of Domino, so you don't need to manage usernames/passwords etc. if you don't want to. Connect them to your existing customer authentication scheme (e.g. eBanking if you are a bank) for starters. Do not rely on some cookie you try to interpret to show or not show content. The security tools at your disposal are the ACL and reader fields
  6. Test, Test, Test: You can usability test, load test, functional test, penetration test, validity test, speed test and unit test. If you don't test, the general public and interested 3rd parties will do that for you. The former leads to bad press, the latter to data breaches
  7. Use a responsive layout: Use the IBM OneUI (v3.0 as of this blog date) or Bootstrap (get a nice theme). XPages provides great mobile controls. Using an XPages single-page application, you can limit the range of allowed URLs to further protect your assets
  8. Code for the most modern browser: Use HTML5 and degrade gracefully. So it is not "must look the same in all browsers" but "users must be able to complete tasks in all browsers" - the experience might differ. Take advantage of the local cache (use an ETag and all the other tips!)
  9. Use HTTPS the very moment a user is known. If in doubt, try Firesheep
  10. Of course the Spolsky Test applies here too!
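To make point 4 concrete, here is a minimal, hypothetical server-side check (my own sketch, not the XPages built-in mechanism): whatever the browser claims to have validated, the server rejects markup-ish input again before accepting it.

```java
import java.util.regex.Pattern;

public class InputGuard {

    // Reject anything that smells like markup or script - on the server,
    // regardless of what client-side validation already claimed
    private static final Pattern SUSPECT = Pattern.compile(
            "(?i)(<\\s*script|<\\s*iframe|javascript:|on\\w+\\s*=|[<>])");

    public static boolean isAcceptable(String userInput) {
        if (userInput == null || userInput.trim().isEmpty()) {
            return false;
        }
        return !SUSPECT.matcher(userInput).find();
    }

    public static void main(String[] args) {
        System.out.println(isAcceptable("A plain loan request"));        // true
        System.out.println(isAcceptable("<script>alert('x')</script>")); // false
        System.out.println(isAcceptable("click javascript:evil()"));     // false
    }
}
```

In a real application you would whitelist what is allowed per field rather than blacklist what looks evil, but the principle stands: the server validates, always.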
As usual YMMV

Posted by on 23 January 2013 | Comments (3) | categories: Show-N-Tell Thursday Software

What happened when - the Notes 1-9 time line

The history of Lotus IBM Notes makes an interesting read (there's a Wikipedia version too). Since 1989 (that's 24 years) Notes has delivered releases that are fiercely backwards compatible™. I loaded the nifty fifty into a current Notes client and the R2 databases worked just fine (after a compact). I'd like to put things into perspective:
Notes from version 1.0 to 9.0
There are a few factoids that are quite interesting:
  • Linux is almost as old as Notes
  • The first public release of MS Exchange was 4.0, when Notes released 4.5. Ein Schelm, wer einen Zufall vermutet (only a rogue would suspect a coincidence)
  • Symbian predated Blackberry by 5 years
  • SharePoint was introduced in the same year as IE6 - could the shared pain level be more than a coincidence?
  • Android was released in the same year as the iPhone, but it took a year before phones were available
  • There seems to be a huge period of inactivity after the 5.0 release.
    However, there were no less than 12 point releases of the 5.0 code stream (5.0.1 - 5.0.13), with 5.0.8 the most popular (as I recall).
  • The biggest expansion of Notes came with the following versions, 6.0/6.5. So if history, as the saying goes, repeats itself, we are in for interesting days with Notes 9.0
  • 2004/2005 was a big year for Internet technology: the term Ajax was coined, Ubuntu and Firefox had their 1.0 release
  • There seems to be a huge gap of 5 years between 8.5 and 9.0. However, in each of these years IBM delivered a point release with new functionality (just some highlights listed; check the full release notes to see all):
    • 2009 8.5.1: XPiNC: XPages in the Notes client
    • 2010 8.5.2: Managed replicas, private iCalendar feeds
    • 2011 8.5.3: Performance, Security and Integration improvements (with fixpacks in 2012)
    Of course that version policy has a dark side: business users typically don't care for the numbers behind the dot and perceived Notes as "in maintenance mode" while it is alive and kicking
There are exciting times ahead for IBM Notes. See you in Orlando. Say hi.

Posted by on 21 January 2013 | Comments (3) | categories: IBM Notes

Generating Test data

You have seen it over and over: Test1, qwertyui, asdfgh entered as data in development to test an application. Short of borrowing a copy of production data, having useful test data is a pain in the neck. For my currently limited exposure to development (after all, I work as a pre-sales engineer) I use a Java class that helps me generate random results from a set of selection values. To make this work I used the following libraries:
  • JodaTime: takes the headache out of date calculation
  • gson: save and load JSON, in this case the random data
  • Lorem Ipsum: generate blocks of text that look good
Here you go:
package com.notessensei.randomdata;

import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

import org.joda.time.DateTime;

import com.google.gson.Gson;

import de.svenjacobs.loremipsum.LoremIpsum;

/**
 * Source of random data to generate test data for anything. Before use you need
 * to either load lists of your data using addRandomStringSource or load a JSON
 * file from a previous run using loadDataFromJson
 *
 * @author NotesSensei
 */
public class RandomLoader {

    /**
     * How often the random generator should try for getRandomString with
     * exclusion before it gives up
     */
    public static final int MAX_RANDOM_TRIES = 100;

    private Map<String, List<String>> randomStrings;
    private Random randomGenerator;

    /**
     * Initialize all things random
     */
    public RandomLoader() {
        this.randomStrings = new HashMap<String, List<String>>();
        this.randomGenerator = new Random(new Date().getTime());
    }

    /**
     * Adds or amends a collection of values to draw from
     */
    public void addRandomStringSource(String sourceName, List<String> sourceMembers) {
        if (!this.randomStrings.containsKey(sourceName)) {
            this.randomStrings.put(sourceName, sourceMembers);
        } else {
            // We have a list of this name, so we add the values
            List<String> existingList = this.randomStrings.get(sourceName);
            for (String newMember : sourceMembers) {
                existingList.add(newMember);
            }
        }
    }

    /**
     * Get rid of a list we don't need anymore
     *
     * @param sourceName
     */
    public void dropRandomStringSource(String sourceName) {
        if (this.randomStrings.containsKey(sourceName)) {
            this.randomStrings.remove(sourceName);
        }
    }

    /**
     * Gets a random value from a predefined list
     */
    public String getRandomString(String sourceName) {
        if (this.randomStrings.containsKey(sourceName)) {
            List<String> sourceCollection = this.randomStrings.get(sourceName);
            int whichValue = this.randomGenerator.nextInt(sourceCollection.size());
            return sourceCollection.get(whichValue);
        }
        // If we don't have that list we return the requested list name
        return sourceName;
    }

    /**
     * Get a random String, but not the value specified. Good for populating
     * travel-to (exclude the from) or from/to message pairs etc.
     *
     * @param sourceName from which list
     * @param excludedResult what not to return
     */
    public String getRandomStringButNot(String sourceName, String excludedResult) {
        String result = null;
        for (int i = 0; i < MAX_RANDOM_TRIES; i++) {
            result = this.getRandomString(sourceName);
            if (!result.equals(excludedResult)) {
                break;
            }
        }
        return result;
    }

    /**
     * For populating whole paragraphs of random text, LoremIpsum style
     */
    public String getRandomParagraph(int numberOfWords) {
        LoremIpsum li = new LoremIpsum();
        return li.getWords(numberOfWords);
    }

    /**
     * Get a date in the future
     */
    public Date getFutureDate(Date startDate, int maxDaysDistance) {
        int actualDayDistance = this.randomGenerator.nextInt(maxDaysDistance + 1);
        DateTime jdt = new DateTime(startDate);
        return jdt.plusDays(actualDayDistance).toDate();
    }

    /**
     * Get a date in the past, good for approval simulation
     */
    public Date getPastDate(Date startDate, int maxDaysDistance) {
        int actualDayDistance = this.randomGenerator.nextInt(maxDaysDistance + 1);
        DateTime jdt = new DateTime(startDate);
        return jdt.minusDays(actualDayDistance).toDate();
    }

    /**
     * Lots of applications are about $$ approvals, so we need a generator
     */
    public float getRandomAmount(float minimum, float maximum) {
        // between 0.0 and 1.0F
        float seedValue = this.randomGenerator.nextFloat();
        return minimum + ((maximum - minimum) * seedValue);
    }

    /**
     * Save the random strings to a JSON file for reuse
     */
    public void saveDatatoJson(OutputStream out) {
        Gson gson = new Gson();
        PrintWriter writer = new PrintWriter(out);
        gson.toJson(this, writer);
        // without a flush the stream may stay empty
        writer.flush();
    }

    /**
     * Load a saved JSON file to populate the random strings
     */
    public static RandomLoader loadDataFromJson(InputStream in) {
        InputStreamReader reader = new InputStreamReader(in);
        Gson gson = new Gson();
        return gson.fromJson(reader, RandomLoader.class);
    }
}
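A quick, self-contained illustration of the getRandomStringButNot idea (plain JDK only; the city list and class name are made up for the demo - the real class above additionally needs Joda-Time, Gson and LoremIpsum on the classpath):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class TravelDataDemo {

    // The "ButNot" idea: redraw until we get something other than the excluded value
    static String pickOther(List<String> source, String excluded, Random rnd) {
        String result = excluded;
        for (int i = 0; i < 100 && result.equals(excluded); i++) {
            result = source.get(rnd.nextInt(source.size()));
        }
        return result;
    }

    public static void main(String[] args) {
        Random rnd = new Random(); // RandomLoader seeds from the current time
        List<String> cities = Arrays.asList("Singapore", "Munich", "Orlando", "Manchester");
        String from = cities.get(rnd.nextInt(cities.size()));
        String to = pickOther(cities, from, rnd);
        System.out.println(from + " -> " + to);
    }
}
```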
As usual YMMV

Posted by on 18 January 2013 | Comments (0) | categories: Software

Mobile Application Interaction Models

At the latest since Eric Schmidt announced the mobile first doctrine in Barcelona, every developer knows that it is coming.
Of course, with the fragmentation of the runtimes (think Android, iOS, BlackBerry, Bada, Windows Phone 8 etc.) and the development platforms (Objective-C, C++, Java, C#), the discussion rages on: is a web application (think HTML, CSS, JavaScript) sufficient, or do I really need to write native code for each platform? I covered my view on the options before.
On closer inspection, the difference is not so much about how an application is developed, but about the interaction model used. Of course each development environment leans towards a specific interaction model. Web applications tend to interact online, while native applications can do anything, but tend to work offline (think Angry Birds).
One obvious problem with online applications is network coverage (think: everywhere, just not in this conference room or plane); another, perceived, one is bandwidth. The situation is improving, and that obscures the real issue: latency. An online application works roughly like this:
Mobile Application with network traffic
Once the main page is loaded, the data packages are actually quite small, but frequent.
How to explain latency? I usually use a restaurant as an example. Bandwidth is the size of the tray the waiter can carry. In a small bistro the waiter needs to go back and forth a number of times if you and your football team (soccer for our American friends) order your beers all at once - you have a bandwidth problem, one that doesn't exist at the Oktoberfest. Latency, however, is the time from calling the waitress until she appears, plus the time she needs to place the order. Now imagine that instead of ordering your 11 beers in one go, you order them sequentially, one by one: besides p*****g the waitress off, you have a latency problem. The waitress spends more time running back and forth than actually serving beer.
Network Latency is the issue here
As long as you sit in a well connected environment, you won't experience much:
  • My server in the home network has a latency of about 0.2 ms
  • In the IBM office the local servers have about 6-8ms (Switches & Firewalls take their toll)
  • When I reach out to an overseas server that is not cached by a CDN, I get latencies of 200-300ms
  • On mobile 2G or 3G in a crowded place, that latency easily goes up to 1.5sec
Now imagine you have an application that makes 100 small Ajax calls: populating a dropdown, doing a typeahead, loading contact details etc.
  • In my local network that amounts to a total delay of 20ms, not noticeable
  • In the office it is still below a second
  • Overseas it is already 30sec
  • On the patchy mobile network it is 2.5 minutes, rendering such an application useless.
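The arithmetic behind that list fits into a few lines (a sketch using the rough latency figures from above, not measurements):

```java
public class LatencyMath {

    // Total user-visible delay when 'calls' sequential round trips
    // each pay 'latencyMs' of network latency
    static double totalDelayMs(double latencyMs, int calls) {
        return latencyMs * calls;
    }

    public static void main(String[] args) {
        int calls = 100; // small Ajax requests: typeahead, dropdowns, details...
        System.out.println(totalDelayMs(0.2, calls));  // home LAN: ~20 ms
        System.out.println(totalDelayMs(7, calls));    // office: ~700 ms
        System.out.println(totalDelayMs(300, calls));  // overseas: ~30 s
        System.out.println(totalDelayMs(1500, calls)); // patchy mobile: ~2.5 min
    }
}
```

Bandwidth doesn't appear in the formula at all - that is the whole point.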
The solution here, available to native and web applications is to stick to rule 1 of application development:

Keep the network out of the user experience!

This is easier said than done:
Mobile Application with local interaction
The UI only interacts with local content, and all network communication happens, as far as possible, in the background. HTML5 offers the storage APIs and is quite capable offline. Background operations are supported using web workers or (via Titanium) MQTT (my personal favourite).
Of course that mode is much harder to master - and that applies to native applications too: suddenly you have data on a central server and on a mobile device, and you need to keep them in sync.
This is manageable for single-user applications like eMail, but quite a headache for concurrent-access applications like collaboration, ERP or CRM. Short of reinventing Notes replication (someone else did that), you could treat the local data as "cache only" and queue updates. Of course proper sync needs extra metadata, quite a headache for an RDBMS.

Posted by on 15 January 2013 | Comments (2) | categories: Mobile XPages

How we successfully killed eMail (almost)

A recent conversation (in 140 characters or less) with Alan and a thought exchange with Luis got me thinking (again) about the death of eMail - namely, the death wish the #SocBiz movement has for it.
When looking at the general discussion three items constantly get mixed up:
  • "eMail the transport" (SMTP for that matter)
  • "eMail the software" to deal with what arrives
  • "eMail the habit" -- of swamping people with irrelevant information and hiding relevant information from others
The transport is a wild success and, together with HTTP, holds the intertubes together. Innovation in the inbox (which I'm quite fond of) has been rather glacial (unless Embedded Experiences - "Store form in document" reborn, with their painful network dependency - are the next big thing). There are a few pockets of hope.

Kicking the habit is what #SocBiz is all about (yes - this is a gross simplification)

Of course all the software vendors, including my employer, the market leader in social software, will help you there with apparel, training videos and supplements, when what you might need is just some coaching.
Reflecting on past projects back home, I recalled one where we actually almost completely killed eMail off - more than a decade ago. The customer was a project-driven organization (projects like: build new quarters in this town, build the biggest, the tallest...). The secret recipe was this:
Everything is a workflow
Messages would come in from the outside via SMTP, SMS, pager, paper or fax. They ended up in a central processing database, where they were enriched with metadata (manually in the beginning, with more and more automation / decision support later on) and assigned to a person to act on. Once a message was fed into the queue, it would show up with the supplier, the project, the status and the document type.
Instead of sending gazillions of useless CCs to "keep people in the loop", users could simply watch any of the metadata of their choosing (subject to access rights, of course). It also made it possible to see if someone was swamped with actionables and to help them out.
What was very interesting in the implementation project: we probably spent an order of magnitude more time on defining the metadata structure and coaching adoption than on coding the tool.
We also had an army of naysayers: "it doesn't fit, it's against the law, our organisation is different, people won't change" etc. This is why we spent all that time on implementation coaching. It took a year to finish, but work satisfaction and productivity went up greatly.
And I couldn't agree more with Alan: Less talking, more doing. A case for GGTD!

Posted by on 10 January 2013 | Comments (1) | categories: Business

Explaining web enablement challenges to business users

With XPages, Notes and Domino applications can be the new sexy and run beautifully on all sorts of devices, big and small. So a whole cottage industry (no insult intended) of offerings around Domino application modernization has appeared.
Modernization always also means: browser and mobile enablement.
Expectations ran high that a magic button would transform (pun intended) a code base organically grown over two decades into beautiful, working, responsive web 2.0 applications. But GIGO stands firm, and not all applications are created equal. Domino's (and LotusScript's) greatest strength turned into a curse: being an incredibly forgiving environment. Any clobbered-together code would somehow still run, and lots of applications truly deserve the label "contains Frankencode".
There is a lot of technical debt that needs to be paid.
The biggest obstacle I've come across is the wild mix of front-end (a.k.a. Notes client) and back-end (core database operations) code in forms, views and libraries. This problem never arises in the popular web environments, since different languages are at work at the front and back (e.g. JavaScript/PHP, JavaScript/Ruby, JavaScript/Java) - only in very modern environments is it all JavaScript (the single-language idea Notes sported 20 years ago).
The first thing I taught every developer in LotusScript is to keep front-end and back-end separate, and to keep the business logic in script libraries that only contain back-end classes. Developers who followed these guidelines have a comparably easy time web-enabling their applications.
But how to explain this problem to a business user (who probably saw some advertisement about automatic conversion to web, be it on IBM technology or a competitor)?
Tell them a story (and if they are not interested in listening to any of that, there's a solution too)!
Here we go:
You are a supply specialist for a natural resources exploration company, and your current assignment is to get your geo engineers set up in a remote jungle location. So you have to source vehicles, build roads and establish a supply chain. Probably you get a bunch of Unimogs (a living legend since 1948), stock spare parts and ensure that you have diesel stations along the way.
Everything is fine - the road might be a little patchy here and there, but that's not a problem; you get your guys delivered and working. You even look good (sometimes).
These are your Notes client applications: delivering business value, robust, efficient and able to deal with a lot of road deficiency (that would be code quality).
Your remote location becomes successful and suddenly the requirements change. People want to get there in style (browsers). Your gas stations will do, no problem here, but the roads already need to be a little less patchy, and your stock of spare parts and the mechanics trained on them are useless. That would be your front-end classes and the "mix-them-all-up" coding style that worked in the past.
If the "arrive-in-style" meme escalates further (mobile devices), you need to build flawless roads (unless your oil has been found in Dallas, where proper roads supposedly exist).
An experienced supply planner might anticipate what is coming and, while sending in the Unimogs, already prepare the gravel foundation, so paving the road for the fragile cars is just a small step. Or nothing has been done for a while, and the road health check comes back with a huge bill of materials.
You get the gist, now go and tell your own story.

Posted by on 04 January 2013 | Comments (3) | categories: Show-N-Tell Thursday XPages

What to do with Save & Replication conflicts

When customers start developing new Domino applications, the distributed nature of Domino can pose a stumbling block. Suddenly the unheard-of replication conflict crops up and wants to be dealt with. A customer recently asked:
" I need to check with you about the Conflict Handling in Lotus Notes application. Default I will set the Conflict Handling to Create Conflicts, but I found my application have create more and more replication or save conflict documents. What can I do for all these replication or save conflict documents, and I found some information in conflict documents is not in original document? How can I prevent the system to generate conflict document?"
Replication Conflict Handling
Before going into details, let's have a closer look at how Notes handles its data. There's quite some hierarchy involved:
  1. To replicate, two databases need to have the same replica ID. The replica ID is created when a database is created and can only be changed using the C API (or a wrapper around it). When an NSF is copied on the file system, you actually create a replica (but you wouldn't do that, would you?)
  2. Inside a database, two documents need to have the same document unique ID (UNID), which is created from a time stamp at document creation time. The UNID is actually read/write in LotusScript and Java, and a certain gentleman can teach you about creative (ab)use of this capability. In addition, a sequence number is stored in the document properties and gets incremented when a document is changed. Together with the last modification date, this forms the patented Notes replication.
  3. Inside the document the Notes items are stored. These are not just field values in a schema (like in an RDBMS) but little treasure troves of information. An item has a name, an array of values, a data type, an actual length and a sequence number. Notes can (and does) use this sequence number to see which items have been altered (note the difference: a form contains fields, a document contains items)
So how do the form options behave for conflicts (which are stored as the $ConflictAction item in the document)? First Notes determines a "winner" and a "loser" document. The winner is the most-edited document. Only if both have the same number of edits does the document saved last win (savour this: an older document can still be a winner). Once the winner is determined, the conflict resolution is executed:
  • Create conflicts (no $ConflictAction item)
    The "loser" document is converted into a response document of the winner and an item $Conflict is created. The conflicts are shown in views unless excluded by the view selection formula (& !@IsAvailable($Conflict)). Conflict resolution is manual (an agent you write is considered manual too)
  • Merge conflicts ($ConflictAction = "1")
    If a document has been edited concurrently but different fields have been altered, they are merged into one document and no conflict is created. If the same fields are altered, a conflict is still generated.
    Sounds good? In practice I often see this fail when true distributed edits by users are the conflict cause, since applications habitually contain a field "LastEditedBy" with @UserName as its formula - a lesson to be learned when designing distributed apps: update only what is really necessary
  • Merge/No Conflicts ($ConflictAction = "3")
    Same as above: if different fields have been altered, they are merged. If the same fields were altered, the loser document is silently discarded. One could argue: why not merge at least the differing fields? But that would rather create a data mess
  • No Conflicts ($ConflictAction = "2")
    The radical solution: the winner takes it all, the loser disappears and nobody will ever know. I haven't seen a good use case for that, but the world is big
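The winner determination described above can be sketched in a few lines (my own illustration - the class and field names are invented, not the actual NSF internals):

```java
import java.util.Date;

public class ConflictWinner {

    static class DocVersion {
        final int sequenceNumber; // incremented on every save
        final Date lastSaved;

        DocVersion(int sequenceNumber, Date lastSaved) {
            this.sequenceNumber = sequenceNumber;
            this.lastSaved = lastSaved;
        }
    }

    // Most edits win; only on a tie does the later save win
    static DocVersion winner(DocVersion a, DocVersion b) {
        if (a.sequenceNumber != b.sequenceNumber) {
            return a.sequenceNumber > b.sequenceNumber ? a : b;
        }
        return a.lastSaved.after(b.lastSaved) ? a : b;
    }

    public static void main(String[] args) {
        DocVersion older = new DocVersion(5, new Date(1000000L));
        DocVersion newer = new DocVersion(3, new Date(2000000L));
        // Savour this: the older but more-edited document wins
        System.out.println(winner(older, newer) == older); // true
    }
}
```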
So what to do about them? First you need clarity: they are called "Replication and Save" conflicts, so they can also happen on the same server. Some pointers on how to prevent them:
  • Using document locking prevents them when edits happen on the same server
  • Also make sure that your scheduled agents don't run on two servers concurrently
  • A nice rookie mistake is to use NotesDocument.save(...) in a QuerySave event (or PostSave without closing the form) - Domino will save (or has saved) the document; get out of its way
  • Recheck your application flow: can you limit the currently allowed editors using Author fields/items? Quite often, removing access in QuerySave combined with an agent altering access "on documents changed" makes a lot of sense
  • Check your forms: are there fields that are "computed" and changed by agents? You could/should set them to "computed when composed"
  • Avoid NotesDocument.computeWithForm(..) in agents unless you are sure it isn't the source of your conflict
  • If your business logic mandates multiple concurrent edits, consider implementing the inversion of logging pattern (with XPages you can make that real snappy)
  • Last but not least: replicate more often or consider clustering your servers
As usual YMMV

Posted by on 02 January 2013 | Comments (6) | categories: Show-N-Tell Thursday