wissel.net


By Date: February 2019

Draining the happy soup - Part 2


We stormed ahead in Part 1 and downloaded all the metadata in SFDX format. Now it's time to stop for a moment and ask: what's the plan?

You need a plan

When embarking on the SFDX package journey, the start is Phase 0: an org that contains all your metadata and zero or more (managed) packages from AppExchange. That's the swamp you want to drain.

[Figure: Phase 0 - happy soup]

Before you move to Phase 1, you need to be clear about how you want to structure your packages. At a high level it could look like this (a sketch of a possible sfdx-project.json follows the list):

[Figure: Structure - happy soup]

  1. You have an unpackaged base that will shrink over time. The interesting challenge there is dealing with dependencies
  2. Some components will be used across all systems - most likely extensions to standard objects, triggers and utility classes. Core LWC components are good candidates for base packages too. There can be more than one base package
  3. Your business components. Slice them by business function, country specifics or business unit. They will most likely resemble parts of your organization structure
  4. A package from AppExchange or a legacy package will not depend on anything. In my current project we moved all Visualforce artifacts (pages and controllers) into such a package, since they won't be needed once the Lightning migration is concluded and can then be uninstalled easily.
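
To make the slicing concrete, a matching sfdx-project.json could look roughly like the sketch below. The package names (base-objects, sales-emea, visualforce-legacy), version numbers and API version are made up for illustration; only the layout mirrors the structure above, with the unpackaged base as the default directory and each business package declaring its dependency on the base explicitly.

    {
      "packageDirectories": [
        { "path": "unpackaged", "default": true },
        {
          "path": "base-objects",
          "package": "base-objects",
          "versionNumber": "0.1.0.NEXT"
        },
        {
          "path": "sales-emea",
          "package": "sales-emea",
          "versionNumber": "0.1.0.NEXT",
          "dependencies": [
            { "package": "base-objects", "versionNumber": "0.1.0.LATEST" }
          ]
        },
        {
          "path": "visualforce-legacy",
          "package": "visualforce-legacy",
          "versionNumber": "0.1.0.NEXT"
        }
      ],
      "namespace": "",
      "sourceApiVersion": "45.0"
    }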


Posted by on 18 February 2019 | Comments (0) | categories: Salesforce SFDX

The Efficiency Paradox


A common setup in many organizations is to outsource development and/or operations to a system integrator. For agile organizations that can pose a challenge. A key factor is skillfulness - how fast and how well can something be implemented?

Does your System Integrator invest in efficiency?

Competition is supposed to keep cost at bay. However, customer relationships and familiarity with the environment (in Dreamland everything is documented) pose a substantial barrier to entry, and a barrier to entry enables an incumbent vendor to charge more.

So an engagement manager might find him/herself confronted with an interesting dynamic.

[Figure: Feedback loop for efficiency]

A slow and a fast loop run concurrently. Depending on the planning horizon, the engagement manager might not see the outer loop, to the detriment of all participants. Let me walk you through it:

  1. Investment in better tools or skills leads to improved efficiency. Work is delivered faster, closer to actual requirements and with fewer defects
  2. In the short run this leads to a reduction in hours sold (bad for time and material contracts)
  3. A reduction in hours sold leads to reduced profitability since you have more resources sitting on the bench

    In conclusion: as long as the barrier to entry protects you, investing in efficiency is bad for the bottom line, so investment in efficiency should only be made to keep the barrier to entry high enough (add your own sarcasm tag here). However, there's a longer-running loop in motion:
  4. Improved efficiency leads to better quality and shorter delivery times. Work is done fast and well (which might justify higher charges per hour)

  5. Getting good quality soon leads to an increase in customer satisfaction. Who doesn't like swift and sure delivery?
  6. Happy customers, especially when delivery times are short, will find an endless stream (throttled only by budget) of additional requirements to implement
  7. A steady stream of new requirements keeps people off the bench and keeps utilization high. High utilization is the basis of service profitability
  8. Investment in efficiency is justified

This is a nice example of a Systems Thinking feedback loop. The conclusion varies with the time frame you observe.


Posted by on 18 February 2019 | Comments (0) | categories: Salesforce Singapore

Draining the happy soup - Part 1


Unleashing unlocked packages promises to reduce risk, improve agility and drive home the full benefits of SFDX

Some planning required

I'm following the approach "throw it at the wall and see what sticks". The rough idea: retrieve all the metadata, convert it into SFDX format, distribute it over a number of packages and put everything back together.

To make it more fun I picked a heavily abused, customized and well-used org with more than 20,000 metadata artifacts (and a few surprises). Follow along.

Learning

Trailhead has a module on unlocked packages on its trail Get Started with Salesforce DX.

While you are there, check out the (at the time of writing, 15) modules on Application Lifecycle Management.

Downloading

The limits for retrieving packages (10,000 elements, a 39 MB zip or about 400 MB raw) posed an issue for my XL org. So I used PackageBuilder, which I'm growing fond of, to download all sources. It automatically creates multiple package.xml files when you exceed the limits.
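
The plan then calls for converting those downloads into SFDX format. Assuming PackageBuilder leaves the sources in Metadata API format, the conversion could look roughly like this - the directory names are placeholders, not the actual layout of the project:

    # Convert each downloaded chunk (Metadata API format) into SFDX source format.
    # "mdapi/package-1" and "force-app" are example directory names.
    sfdx force:mdapi:convert --rootdir mdapi/package-1 --outputdir force-app
    sfdx force:mdapi:convert --rootdir mdapi/package-2 --outputdir force-app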



Posted by on 14 February 2019 | Comments (0) | categories: Salesforce SFDX

Reporting your validation formulas


Validation formulas are a convenient way to ensure your data integrity. With great power comes the risk of alienating users by preventing them from entering data.

Why look at them?

You can easily look at all formulas in the Object Manager, but it is tedious to go through every formula one by one. You might ask yourself:

  • Do all my formulas exclude the integration profile?
  • Are context-specific formulas (e.g. for a specific country) set correctly?
  • Do validation rules follow the naming conventions?
  • Are messages helpful or intimidating?

Extract and report

You already use PackageBuilder to extract objects (and other stuff) as XML, so it seems just a small step: slap all *.object files into one big file and run an XSLT report over it.

Not so fast! If you concatenate XML files using an OS-level copy you end up with three problems:

  • You don't have an XML root element. Like the Highlander: there can be only one. You could sandwich the files between opening and closing tags, but then you hit the next problem
  • XML files start with <?xml version="1.0" encoding="UTF-8"?>, and copying the files will sprinkle that declaration multiple times into your result. The XSLT processor will barf
  • The result will get very big and any report will take a long time or even run out of memory

A bit of tooling

I solved it, for my needs, using a small Java class and one XSLT stylesheet. Java because I'm familiar with it and NodeJS still sucks with XML. XSLT because I'm familiar with it (heard that before?) and the styling of the output is independent of the processing step. I presume you know how to initiate an XSLT 2.0 transformation.
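
As a rough illustration of the concatenation step, here is a minimal sketch (not the actual class used here): it walks a source directory, wraps every *.object file in a single root element, drops each file's own XML declaration and records the object name, taken from the file name, so a report can say where a rule lives. The element names objectDump and objectFile are arbitrary choices for this sketch.

    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    // Concatenates all *.object files below a source directory into one
    // well-formed XML file: a single XML declaration, a single root element,
    // and one wrapper element per file so a report knows which object a rule belongs to.
    public class ObjectMerger {

        public static void main(String[] args) throws IOException {
            Path sourceDir = Paths.get(args.length > 0 ? args[0] : "src/objects");
            Path target = Paths.get(args.length > 1 ? args[1] : "all-objects.xml");

            List<Path> objectFiles;
            try (Stream<Path> walk = Files.walk(sourceDir)) {
                objectFiles = walk.filter(p -> p.toString().endsWith(".object"))
                                  .sorted()
                                  .collect(Collectors.toList());
            }

            try (BufferedWriter out = Files.newBufferedWriter(target, StandardCharsets.UTF_8)) {
                // Exactly one declaration and one root element in the combined file
                out.write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
                out.write("<objectDump>\n");
                for (Path file : objectFiles) {
                    String objectName = file.getFileName().toString().replace(".object", "");
                    out.write("<objectFile name=\"" + objectName + "\">\n");
                    try (Stream<String> lines = Files.lines(file, StandardCharsets.UTF_8)) {
                        // Drop each file's own XML declaration, keep everything else as-is
                        lines.filter(line -> !line.trim().startsWith("<?xml"))
                             .forEach(line -> writeLine(out, line));
                    }
                    out.write("</objectFile>\n");
                }
                out.write("</objectDump>\n");
            }
        }

        private static void writeLine(BufferedWriter out, String line) {
            try {
                out.write(line);
                out.write("\n");
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }

Because the merge streams line by line, this step at least stays flat on memory; the size of the combined file is still something the XSLT step has to cope with.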



Posted by on 07 February 2019 | Comments (0) | categories: Salesforce