Usability - Productivity - Business - The web - Singapore & Twins

By Date: September 2011

The 3 P of Performance: Passion, Professionalism and Persistence

Every corporation celebrates its heroes (and enlightened ones mourn their losses too). I've been made Hero of the Day in the inaugural Q2/2011 round of the quarterly GMU Breakaway Star recognition (GMU is IBM's TLA for "Growth Market Unit", which means: the world excluding the US, Europe and Japan). The citation read:
Dear Stephan
Congratulations on your recent selection as one of the GMU Breakaway Stars in recognition of your outstanding contribution to our Software business in 2Q!
The GMU Breakaway Stars program recognizes high performers who have achieved extraordinary results for the business and have demonstrated their understanding of the client's business, their ability to integrate IBM in front of the clients and their passion to drive progress for clients, for IBM, and for themselves.
  As a GMU Breakaway Star, you have exemplified the quality of IBMers at their best.  I am pleased to see your commitment to excellence and IBM values; and your dedication to create differentiation and higher value for our clients.  This is the quality which will differentiate IBM in the marketplace and position us to achieve our 2015 roadmap.
  As we continue to deliver growth for the business, I hope you will contribute the same level of focus and commitment that you displayed to help position us as the 'Best Partner of Choice' for our clients.
  Thank you once again for your exceptional achievements in 2Q.  Keep up the great work!
I got featured on the IBM Intranet (which I can't share) and interviewed. I would title that interview: "The 3 P of Performance: Passion, Professionalism and Persistence"
[Image: The 3 P of Performance]
The little flags on the right side, together with the words in caps, point to IBM's core values: Success, Innovation and Trust. It is always fun and rewarding to tie actions back to the stated core values; everywhere they are in danger of being lost from sight in the heat of battle.
Carrot accepted, now back to the stick of quarterly numbers.

Posted by on 29 September 2011 | Comments (7) | categories: IBM

Apples and Oranges <strike>Can I have 1785$ per user for messaging too please?</strike>

Update: I promise: no more mental math at 1 AM. Post revised.
IBM invented FUD, but Microsoft turned it into an art. On their website they have an entertaining set of claims. I leave it up to you to judge their credibility. The real fun part is the Godiva case study. According to Microsoft the 1400-person company saved $250,000 per year by moving to Microsoft. That works out to about $178 per user/year. Of course that calculation is riddled with question marks. Let's remove the Oranges and compare Bananas with Bananas. There are two moves: one is from "I-run-my-own-server" to "happy-in-the-cloud", the other is switching products. So let's have a look:
  • When comparing deployment diagrams Exchange looks rather complex. The license stack doesn't look better. That might be one of the driving reasons for Microsoft to offer cloud. But what I wonder is: how could they spend so much on running Domino? Back in 2002 Ferris conducted a study about Domino TCO. They found R5 would typically cost $22 user/month, while R6 would bring that down to $6 user/month. For R7 Ferris calculated an additional 15% savings, with further improvements in Notes 8.0 and 8.5, especially around storage (DAOS), policies and monitoring. I checked with people who run efficient Domino installations and they run Domino, Sametime, Quickr and Protector at a running cost of $6-7 user/month. So the number of products has increased, and probably the mailbox sizes too, but not the cost. That leaves quite some money for licenses and hardware.
  • But we are going cloud. So we compare to LotusLive Notes. The site says it is $5 user/month list price (or $3 if web-only mail will do - like for the factory floor workers). But let us do more. Let's add file sharing (that also works with outside parties) and collaboration like IBM Activities or IBM Communities and Symphony live. This pushes the list price to $7 user/month. Finally we throw in web conferencing that allows you to invite external guests for free. We end up at $10 user/month. That makes $120 user/year.
So if I am supposed to save $178 user/year, someone needs to pay me $58. (I know it isn't accurate, since there are people and servers etc. involved. But moving to LotusLive Notes is pretty much one cross cert, a few policies and then replication. So no cost for engaging and licensing BinaryTree.) In German we call such claims a "Milchmädchenrechnung" (a naive calculation that conveniently ignores half the facts). And we haven't looked at the apps yet.
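The back-of-envelope math above fits into a few lines of shell (list prices as quoted in the post; shell integer arithmetic truncates the cents):

```shell
users=1400
claimed_saving=250000                      # Microsoft's claimed annual saving in $
per_user_year=$((claimed_saving / users))  # about 178 $/user/year (truncated)
lotuslive=$((10 * 12))                     # $10 user/month list price = $120/year
gap=$((per_user_year - lotuslive))         # the $58 someone still owes me
echo "claimed: $per_user_year, LotusLive: $lotuslive, gap: $gap"
```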
Readying the flame proof underwear.
Update: Thanks for all the comments. It's good to know that there is some readership.

Posted by on 28 September 2011 | Comments (9) | categories: IBM Notes Lotus Notes

Replace IE with XULRunner in Notes Client 8.5.3 on Windows

This is just in from the don't-try-this-at-home-since-it-is-unsupported, you-have-to-dig-really-deep-to-find-it department. The embedded browser in the Notes client uses an operating-system-dependent default engine: Internet Explorer on Windows and Firefox (or to be more precise: XULRunner) on Linux and Mac. On Windows that cuts you off from the progress made in Firefox. Luckily our favorite place to alter configuration settings for the Notes client, [notesprogramdir]/framework/rcp/plugin_customization.ini, got a set of new settings:

The first one switches the embedded browser to Mozilla, the second one does the same for the MIME rendering engine for e-mails. I haven't seen a setting that would affect the display of a widget (widget creation already uses XULRunner).
Update: I stand corrected. The first setting is documented on a public URL in the Expeditor documentation, and the second one can be deduced from the interface spec. So it is just hidden in plain sight. The documentation suggests it should work in all R8.5 versions; someone could give it a try.
Update 2: My colleague Thomas Hampel remarked that you can use Domino policies to push out these settings. That makes them very easy to handle; no fiddling with editors by end users required.

Posted by on 27 September 2011 | Comments (b) | categories: Show-N-Tell Thursday

Large scale workflow application performance using Push Replication

Imagine the following scenario (live from a very large customer of mine):
A workflow application on a central server has at any time 500-800k active documents. A normal user has access to about 50 of them, while an approver typically sees 1000-1500. With a Notes client, Domino will be mostly busy hiding records the user must not see (even if you follow my advice). Contrast that with a local replica of the same application: since documents a user can't see are not replicated, these local replicas are tiny in comparison and offer a beautiful user experience. The only catch: if you work on a local replica you will most likely break the notification timing, and an approver will be notified of a request before the document has arrived in her local replica. The sequence that needs to be followed looks like this:
8 steps of local workflow
  1. User creates a new request in a local workflow database and submits it
  2. Local replica replicates back to the server
  3. Approver gets notified that new data is waiting
  4. New data is replicated from the server to the client
  5. The approver makes a decision and submits it
  6. Data is replicated back to the server
  7. Requester is notified on updated data
  8. Data is replicated from server back to the requester's workstation
It is easy to see why workflow databases hardly ever exist as local replicas. Replication as a background process typically runs on a schedule and doesn't tell you when it is finished (other than in the replicator page). There is no trigger to tell a local database: now it is time to fetch. But what if it were different? What if the requester only did step 1 and steps 2-4 happened automagically? If the approver got the notification after the data has arrived in step 4? If the approver only did step 5 and steps 6-8 also happened automagically, with the notification sent after the local data has arrived?
This is exactly what Dragon Li from our Beijing Lab and I are working on. The prototype runs quite beautifully but currently requires both users to be online. We are using machine-to-machine notification, so the automatic steps can be completed in the background without disturbing the users before they get notified. The hooks for notification persistence are ready and just need to be implemented. The beauty of this implementation: we use the time-tested replication, we just trigger it differently. No new protocol or emerging unratified standard is used. The application works through an innovative combination of what has been in the Notes client for quite a while already. Pending our internal process this will hit OpenNTF soon.

Posted by on 26 September 2011 | Comments (1) | categories: Show-N-Tell Thursday

Practise your speaking skills

Public speaking is as much an art as it is a craft. Every craftsman can tell you that besides skills, the right tools and practise lead to mastery. To practise speaking, do it often. When your job is short of opportunities, join ToastMasters and practise there. When it comes to the tools and the knowledge about them, there is much confusion in our guild. Certainly presentation software is less a tool than a hazardous substance (the dose makes it toxic, even if it is really fancy). The real tool is the proper presentation of (the right dose of) conclusive arguments in a concise manner. One of the best guides you can find is the eBook The Contrary Public Speaker: A Break-the-Rules Approach to Breakthrough Presentations. Good speeches and presentations owe their success much more to proper preparation than to the talent of the speaker. It is a lot of work (but also fun) to prepare. Check the resources section of Lee Aundra's site and practise the warm-up text found there:
  • One hen,
  • Two ducks,
  • Three squawking geese,
  • Four limerick oysters,
  • Five corpulent porpoises,
  • Six pairs of Don Alberso's tweezers,
  • Seven thousand Macedonians in full battle array,
  • Eight brass monkeys from the ancient, sacred crypts of Egypt,
  • Nine apathetic, sympathetic, diabetic old men on roller skates with a marked propensity to procrastination and sloth,
  • Ten lyrical, spherical diabolical denizens of the deep who haul, crawl, around the corner of the quo of the queasy at the very same time.
This is "The Announcer's Test". Developed by Radio Central in the 1940s to test a new announcer's reading ability, it is a fantastic exercise to warm up your speech muscles before that key event!

Posted by on 20 September 2011 | Comments (0) | categories: Business

Less passwords, more security. ssh connections with certificates

Successful server administration depends on automation. Only when you can declare "Runs-in-AutoPilot-mode" will your servers run cost-efficiently. While DDM, Activity Trends or Domino Policies can do that for you on the Domino level (you might want to have a look at more tools and utilities), there are times when you need to automate OS-level tasks (if you don't promise to never ever use this to FTP an NSF, stop reading now and go away), like moving installer files or starting and stopping remote services. Once you start scripting them you will run into the issue of remote authentication. For SSH connections there is a very elegant way to have a secure connection using a public/private key pair. Let's presume our remote host is named everest at everest.company.com and your user id there is joeadmin. These are the steps:
  1. Create a directory to keep your keys:
    mkdir ~/.sshkeys
    chmod 700 ~/.sshkeys
    cd ~/.sshkeys

    (the chmod isn't strictly necessary, but we want to make sure that access to the key files is minimal)
  2. Generate a key pair:
    ssh-keygen -t dsa -b 1024 -f ~/.sshkeys/everest-access-key
    For automation without a password you need to press Enter twice (for an empty passphrase). Be aware that the security of this access is as strong or as weak as the access protection of your workstation. So you should use strong disk encryption.
  3. Protect the generated file: chmod 600 everest-access-key
  4. Copy the public file to the remote server: scp everest-access-key.pub joeadmin@everest.company.com:/home/joeadmin
  5. Login to the server: ssh joeadmin@everest.company.com (This will be the last time you need the password)
  6. Create the directory for your keys:
    mkdir ~/.ssh
    chmod 700 ~/.ssh
    cd ~/.ssh
  7. Create your key file to recognize you:
    touch authorized_keys
    cat ~/everest-access-key.pub >> authorized_keys
    rm ~/everest-access-key.pub
    chmod 600 authorized_keys
  8. Now log out; you are ready to use the key-driven access
  9. To login use: ssh -i ~/.sshkeys/everest-access-key joeadmin@everest.company.com  (which of course you use in a script)
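Once the key works, the scripted OS-level tasks mentioned at the beginning become one-liners. A minimal sketch, echoed as a dry run (the installer file and service name are made-up placeholders; host, user and key path are from the steps above):

```shell
KEY=~/.sshkeys/everest-access-key
HOST=joeadmin@everest.company.com

# Dry run: echo the commands instead of executing them against a real host.
echo scp -i "$KEY" installer.tar.gz "$HOST":/tmp/
echo ssh -i "$KEY" "$HOST" "/etc/init.d/sample-service restart"
```

Drop the echo and the same lines run unattended from cron or a deployment script.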
Once all admins who need access to that server have their keys in place it is time to lock ssh down. Edit the file /etc/ssh/sshd_config and make sure the following values are set:
  1. ListenAddress {your IP/IPv6} to limit SSH to one IP address (remember your servers most likely will have more than one IP)
  2. LoginGraceTime 10 since all logins will directly use a key pair, a 2 minute grace period is way too long
  3. PubkeyAuthentication yes so your keys will work
  4. PasswordAuthentication no so nobody can try to hack in using a password attack
  5. Installing the denyhosts package (sudo apt-get install denyhosts) reduces the attack surface further. Go read the full explanation.
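Put together, the relevant excerpt of /etc/ssh/sshd_config would look like this (the address is a placeholder for your server's own IP):

```
# /etc/ssh/sshd_config (excerpt) - the values discussed above.
# 192.0.2.10 is a placeholder: use your server's own address.
ListenAddress 192.0.2.10
LoginGraceTime 10
PubkeyAuthentication yes
PasswordAuthentication no
```

Remember to reload sshd after editing, and keep an existing session open until you have verified you can still get in.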
This also works great on zLinux or AIX. As usual: YMMV.

Posted by on 13 September 2011 | Comments (2) | categories: Linux

Designing data sources - square pegs into round holes

Stephen Mitchell translated the end of verse 21 of the Dao De Jing (吾何以知眾甫之狀哉？以此。 = wu he yi zhi zhong fu zhi zhuang zai? yi ci.) as "How do I know this is true? I look inside myself and see." Laozi could have been a software platform or framework architect. Getting the gist of a platform often requires a zen-like approach. Unfortunately, in the heat of delivery pressures things get lost and we end up with "Frankenworks" instead of "Frameworks", where functionality works but feels rather "bolted on".
Currently there is work underway to make XPages a first-class RDBMS front-end. The data source looks very promising; nevertheless it prompted me to reflect on the nature of data access, like I mused about structures before. There are some structural differences between a document-centric approach (Domino, XMLDB, XForms, ObjectDB, JsonDB etc.) and a relational database. Doing justice to both sides poses a formidable challenge:
  • The nature of a relational database is the flat set: a set of rows and columns that get created, read, updated and deleted. It is always about a set, which more often than not is drawn from more than one table. There is no such thing as a single record; there is just a set with one member. All OR-mappers struggle with this.
  • The nature of Domino is the document. Data is stored in documents. Collections (views/folders/search/all) are designed to give access to a document (set). The result of this nature is dual access to data: there is the collection, which is read-only (and can be flat or hierarchical), and there is the document, which is read/write and where data changes happen.
  • The document has a predefined set of meta data absent from a relational table: ID, access control, various dates, hierarchy (isResponse) etc. One could add those to an individual database schema, but they can't be taken for granted in RDBMS (a story for another time: designing a RDBMS schema to work well with XPages)
  • The document sports structured data. In Domino these are multi-value fields; in other NoSQL databases these structures can be more complex. In an RDBMS these structures are splattered across multiple tables and pulled back together with JOIN statements. This makes it easy to run reports or do mass updates, but makes transporting a logical entity from one database to another a pain.
  • The dominating clause in RDBMS is WHERE which is needed for all operations including updates, while Domino acts on the current document (doc.save)
  • The document is closely connected to the Notes/Domino event model. Both XPages and the classic Notes client (and to a lesser extent classic Domino) offer rich data events: queryNewDocument, queryOpenDocument, postOpenDocument, querySaveDocument, postSaveDocument etc.
  • SQL doesn't provide an event model, but the various RDBMS implementations provide triggers that serve a similar purpose by running stored procedures (which are mostly written in incompatible flavors of SQL - check SwissQL for translating them). INSERT, UPDATE and DELETE triggers are the obvious equivalents of the query/post save events. I haven't looked for a while, but last time I checked a SELECT wouldn't trigger a stored procedure, though you could call one directly.
  • The splattering of data across tables in an RDBMS led naturally to another capability of relational databases that comes in handy for large manipulations too: transactional integrity. If all you need is to save one document, there is no immediate need for a transaction mechanism; distributing data over multiple (parent, child) tables, however, mandates integrity protection.
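The set-versus-document contrast can be seen in any SQL engine. A sketch using the sqlite3 command-line shell as a stand-in (table and column names are invented for illustration):

```shell
# One UPDATE touches a whole set selected by WHERE - there is no
# "current document" the way there is with Domino's doc.save().
approved=$(sqlite3 :memory: "
  CREATE TABLE requests(id INTEGER PRIMARY KEY, status TEXT);
  INSERT INTO requests(status) VALUES ('open'),('open'),('closed');
  UPDATE requests SET status='approved' WHERE status='open';
  SELECT count(*) FROM requests WHERE status='approved';")
echo "$approved"   # both 'open' rows changed in one statement
```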
Interestingly a document-oriented data model is closer to the real world's (read: business users') perception of data: a contract is a document, as is a purchase order, as is a bank note. Tables usually serve as (table-of-)content listings for documents or as detail listings inside documents.
So what does that mean for the design of additional data sources in Domino? There are two possible approaches, which luckily can co-exist since they are not mutually exclusive: follow the nature of Domino or follow the nature of the source (not the nature of the force, that's for others). The current OpenNTF extlib approach is the latter: it is designed around the relational feature set.
Going forward I would like to see data sources that build on the duality of the Domino data access: the read-only collection and the read/write document.
Datasources and the API
  • Each data source will have 2 elements: a read-only collection and a read/write entry/record/document
  • These sources have the same method signatures as DominoDocument and DominoView. So in a design a developer could swap them out for each other. MyRDBMSSource.getItemValueString("Location") would work the same way as DominoDocument.getItemValueString("Location"). For an RDBMS developer that might look a little strange, but it only carries a one-time learning effort, greatly outweighed by the benefit of swappable sources. Of course the parameters would be rather different. In an RDBMS source there probably would be a parameter to define what getDocumentUniqueID returns.
  • All the document events would fire with every data source
  • Data sources can implement additional tags to offer access matching their nature
What data sources can I imagine? This is not a revelation of IBM's plans; I rather expect some of them to be provided by the community or as commercial business partner offerings:
  • enhanced JDBC source following the Domino pattern
  • Domino data source encapsulating the inversion of logging pattern
  • DB/2 PureXML data source. It would use the standard JDBC approach for the collections and PureXML to read/write document data. It would implement the spirit of NSFDB2 without the constraints of replicating all NSF features (data only)
  • Sharepoint. One could build Sharepoint front-ends that survive a Sharepoint upgrade without the need to rewrite them
  • IBM MQ
  • Web services (take a WSDL and make a form)
  • CouchDB
  • 3270 Terminal / IBM HATS
  • HTML5 storage
What's your imagination? (need help?)

Posted by on 13 September 2011 | Comments (1) | categories: XPages

Scaling XPages across servers

In China everything is bigger. The wall is longer than elsewhere, and companies catering to 1.3 billion people have a lot of employees. I'm currently working with our experts from the China development lab to figure out, based on discussions with customers and business partners, how to scale XPages when one server is not enough.
When it comes to architecture, opinions are a dime a dozen. We had "experts" chipping in who would happily split the XPages server and the NSF server into two, introducing network latency between them. Others are convinced that only an RDBMS is the real thing. Since Domino 8.5.3 happily connects to an RDBMS, the question became more interesting (not that we didn't have options before). As usual the devil is in the details, which in the case of Domino would be multi-value fields and Reader and Author protection.
We are planning to set up 2 servers and test them in various configurations to see what the performance ramifications are:
  1. XPages and NSF Server
  2. XPages Cluster
  3. XPages and RDBMS
It might take a while to get results, and I'm very curious which opinion withstands the bright light of evidence. Of course just 2 servers won't deliver evidence for a whole farm, but it is a start.

Posted by on 08 September 2011 | Comments (3) | categories: XPages