wissel.net

Usability - Productivity - Business - The web - Singapore & Twins

Finding Strings in recursively zipped files


I had an itch to scratch. After using Field Trip (which I like a lot) to determine unused fields, the team managing the external Informatica integration claimed they would need weeks to ensure none of the fields is used in any of their hundreds of pipelines.

ZIP inception

My first reaction (OK, the second; the first one isn't PC) was: let's go after the source code and just use an editor of choice to do a find-in-files. Turns out: not so fast. The source export offered by the team was a zip file with an elaborate directory structure containing, tada, zip files. So each of the pipelines would need multiple unzip operations.

Itch defined

I needed a tool that would start in a directory with a bunch of zip files and unpack them all, check for zip files in the unpacked result, unzip those and repeat. Once done, it should take a list of strings, search for occurrences of those and generate a report that shows the files containing these strings.

Itch scratched

I created findstring, a command line tool that takes a directory as starting point, unzips what can be unzipped (optional) and searches for occurrences of the strings provided in a text file.

Initially I contemplated rendering the output as XML, so the final report could be designed in whatever fashion using XSLT. However, following KISS, I ended up using Markdown. I might add the XML option later on.

Recursion

The key piece of the tool is recursion (until you stack overflow ;-) ): reading a directory and diving into any directories found. I could have avoided that using Guava and its fileTraverser, but I like some Inception-style coding. The core of it is this:

    private boolean expandSources(final File sourceDir) throws IOException {
        boolean result = false;
        final File[] allFiles = sourceDir.listFiles();

        // listFiles() returns null when sourceDir is not a readable directory
        if (allFiles == null) {
            return result;
        }

        for (final File f : allFiles) {
            if (f.isDirectory()) {
                // Recurse first, then combine - "result || ..." would short-circuit
                // and skip the subdirectory once result is already true
                result = this.expandSources(f) || result;

            } else if (f.getName().endsWith(".zip")) {
                final String newDirName = f.getAbsolutePath().replace(".zip", "");
                final File newTarget = new File(newDirName);

                // Need to scan the new directory too
                if (this.expandFile(f, newTarget)) {
                    result = this.expandSources(newTarget) || result;
                }
            }
        }
        return result;
    }

The function will return true as long as there was a zip file left to unzip. The string-finding operation (case insensitive) follows the same recursive approach.

Use cases

  • Find field usage in ZIP files. Works with a package downloaded from the Metadata API or what Informatica exports
  • Check a source directory (doesn't need to contain zips) for keywords like TODO, FIXME, XXX

The command line syntax is very simple:

java -jar findString.jar -d directory -s strings [-o output]

  • -d,--dir <arg> directory with all zip files
  • -s,--stringfile <arg> File name with strings to search, one per line
  • -o,--output <arg> Output file name for the report in Markdown format
  • -nz,--nz Rerun the find operation on an already unzipped structure - good for alternate finds

Limits

In its current form the utility will check for strings in any file except zip files. Zips get unpacked and the result checked. When your directory contains binary files (e.g. images) it will still look for the string occurrences inside them. File extension filters might be a future enhancement (share your opinion).

Files are read into memory. So if your directory contains huge files, you will blow your heap. Source code files hardly pose an issue, so the approach worked for me. Alternatively a scanner could be used, should the need arise.

Go give it a spin and keep in mind: YMMV


Posted on 16 March 2019 | Comments (1) | categories: Salesforce Singapore

Testing Aura and LWC in a single Test


You drank the CoolAid and noticed that the Aura framework has been archived. You are hell-bent on migrating your components.

Regression Test required

Aura components were testable using the Lightning Testing Service, while Lightning Web Components get tested using lwc-jest. These tests are not compatible.

UI-licious to the rescue. UI-licious is a testing framework for UI tests. It uses a simple JavaScript syntax and a rather clever way of addressing elements. Unlike Selenium, it doesn't rely on CSS selectors or XPath expressions (you still can use those).

To be very clear: A UI level testing library is not a replacement for proper unit testing. UI-licious has two use cases here: top of the pyramid UI testing and spotting UI level regressions. To learn more about the "testing pyramid", check out Martin Fowler's essay.

To give it a try I created two components with identical functionality: one in Aura, one as LWC. The components show a dialog where you can pick values for radio buttons. Shi Ling, the CEO, provided the test script (the login subroutine is omitted for brevity):

I.wait(30) // wait for salesforce to be ready
I.click("App Launcher")
I.click("Clown around")

I.see("Having 2 components of the same type")

test("The aura version")
test("The LWC version")

function test(btn){
  I.click(btn)
  I.see("Pick an Opportunity and Color")
  I.click("Product Opportunity")
  I.click("Red")
  I.click("Select")
  I.see("Nicely done")
}  

Watch the result for yourself:

What I really like: UI-licious builds collaboration features around testing, so stakeholders can see at any time what's going on. Give them a try!


Posted on 14 March 2019 | Comments (0) | categories: Lightning Salesforce WebComponents

Navigation in Lightning communities


In a recent project we had to design navigation in a Lightning community. This is what we learned along the way.

Beyond the menu

When you pick a Lightning template you get built-in menu navigation. This works well if all menu items are meant for all users (no audience assignment), but breaks down for more sophisticated or programmatic navigation.

At first glance the Lightning navigation service (available in Aura and LWC) seems like the answer. However, on inspection of lightning-navigation you find that the only supported experiences are Lightning Experience and the Salesforce mobile app; Communities are missing.

Digging a little deeper and checking the Page Reference Types, you will find "limited support for Communities". I tested it out; here are my findings:

  • The documentation is accurate. What is stated as working works, what is stated as not supported in communities does not work.
  • The painfully missing piece is standard__component, which would allow navigating to a custom Lightning component. It is the only page reference type that supports state (more on that later)
  • Navigating to standard__objectPage opens the list view/page layout based on the user's profile. When you specify actionName="new", the standard object detail page will open. It will not use a New button override you might have defined
  • Works as specified: standard__recordPage, standard__knowledgeArticlePage
  • Doesn't work: standard__webPage
  • None of the navigation types that work in communities support the state properties
  • The most interesting navigation in communities is standard__namedPage. Besides the predefined default pages "Home", "Account management", "Contact Support", "Error", "Top Articles" and "Topic Catalog", it supports "Custom Pages" - in other words: any of the pages you have created in your community. So the missing standard__component can be mitigated by embedding it into a custom page. Keep in mind: the pageName property is the URL, not the name.

Transferring state

As mentioned above, the state property gets ignored - dropped without an error - when used with any of the working navigation types. The remedy for that is to use the session store. An Aura code snippet would look like this:

function(component, event, helper) {
    event.preventDefault();
    var navService = component.find("navService");
    var pageReference = {
        type: "standard__namedPage",
        attributes: {
            pageName: "some-page-name"
        },
        state: {
            bingo: true,
            answer: 42,
            tango: "double"
        }
    };
    sessionStorage.setItem('localTransfer', JSON.stringify(pageReference.state));
    navService.navigate(pageReference);
}

I left the state in the pageReference JSON object to show that it does no harm. The navService component is defined as <lightning:navigation aura:id="navService"/> in Aura. On the receiving end you use:

var localStuff = sessionStorage.getItem('localTransfer');
if (localStuff) {
	var state = JSON.parse(localStuff);
	// Do the needed stuff here
}

As usual YMMV


Posted on 12 March 2019 | Comments (0) | categories: Lightning Salesforce

Using render() in LWC


Whatever template system you use, you will end up with show/hide logic based on your data's values. In Aura components you have an expression language (it reminded me of JSF); in LWC the logic lives outside the template, in your JavaScript class, as Boolean properties or functions.

Keep it tidy

A common interaction pattern, similar to the Salesforce default behavior when you have more than one record type available, is to show a pre-selection (which record type), a main selection (required data) and (possibly) a post-selection (what's next?).

In a Lightning web component you can handle that easily using if:true|false inside your HTML template.
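
A minimal sketch of that approach, with made-up property and class names - the template binds if:true to getters on the class:

import { LightningElement, track } from 'lwc';

export default class RecordTypeSelector extends LightningElement {
    @track step = 'pre';

    // bound in the template as <template if:true={showPreSelection}>
    get showPreSelection() {
        return this.step === 'pre';
    }

    // bound as <template if:true={showMainSelection}>
    get showMainSelection() {
        return this.step === 'main';
    }

    // bound as <template if:true={showPostSelection}>
    get showPostSelection() {
        return this.step === 'post';
    }

    handleRecordTypePicked() {
        this.step = 'main';
    }
}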

But what if the sections are quite lengthy? Maintaining the HTML template can get messy. Enter the render() method. In LWC this method doesn't do the actual rendering, but determines which template to use to render the component.

There are a few simple rules:

  • You need to import your template into your JavaScript file
  • The call to render() must return the imported variable (see example below)
  • You can make the computation dependent on anything inside the class
  • You can't assemble the template in memory as a string; that will throw an error
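
A minimal sketch of the pattern (component and template names are made up): the component bundle contains one HTML file per section, and render() picks the one to use:

import { LightningElement, track } from 'lwc';
// additional templates living next to the component's JavaScript file
import preSelection from './preSelection.html';
import mainSelection from './mainSelection.html';
import postSelection from './postSelection.html';

export default class GuidedSelection extends LightningElement {
    @track step = 'pre';

    // render() doesn't build markup, it only returns the imported template to use
    render() {
        if (this.step === 'main') {
            return mainSelection;
        }
        if (this.step === 'post') {
            return postSelection;
        }
        return preSelection;
    }

    handleNext() {
        this.step = this.step === 'pre' ? 'main' : 'post';
    }
}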

Read more

Posted on 04 March 2019 | Comments (0) | categories: Lightning Salesforce WebComponents

Global value providers in LWC


Drinking the new CoolAid, one has to come to terms with the old ways. We had a first glimpse before.

Same but different, reloaded

When developing Lightning components using the Aura framework you could use a series of global value providers that give you access to various data sets: $ContentAsset, $Label, $Locale, and $Resource.

While this is convenient, it pollutes the global namespace and is a very proprietary (albeit popular at its time) way to provide information. LWC fixes this in a standards-compliant way, made possible by the new capabilities of the JavaScript ES6 standard.

In LWC all information provided by Salesforce gets added using ES6 import statements from the @salesforce namespace. While that syntax is new to Salesforce developers, it is old news for the rest of JavaScript land. So here you go:

  • $ContentAsset -> import assetName from @salesforce/contentAssetUrl/[AssetName]
  • $Label -> import labelName from @salesforce/label/[LabelName]
  • $Locale -> import i18nproperty from @salesforce/i18n/[internationalizationProperty] (with various values)
  • $Resource -> import resourceName from @salesforce/resourceUrl/[resourceName]
  • current User Id -> import userId from @salesforce/user/Id

The @salesforce namespace provides access to additional data, like apex and schema.
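
For illustration, a minimal component pulling together a custom label, the user's locale, a static resource URL and the current user id (the label and resource names are placeholders):

import { LightningElement } from 'lwc';
// the names after the last slash are made up - use your own label,
// static resource and content asset names
import greeting from '@salesforce/label/c.Greeting';
import LOCALE from '@salesforce/i18n/locale';
import LOGO_URL from '@salesforce/resourceUrl/CompanyLogo';
import USER_ID from '@salesforce/user/Id';

export default class LocalizedHeader extends LightningElement {
    label = { greeting };
    locale = LOCALE;
    logoUrl = LOGO_URL;
    userId = USER_ID;
}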

As usual YMMV


Posted on 01 March 2019 | Comments (0) | categories: Lightning Salesforce WebComponents

From Excel to package.xml


Cleaning up an org that has gone through several generations of ownership and objectives is fun. Some tooling helps

Data frugality

A computing principle, very much anathema to Google and Facebook, is data frugality: storing only what you actually need. It is the data equivalent of coders' YAGNI principle. At the latest since GDPR, it has gotten center-stage attention.

Your cleanup plan

So your cleanup exercise has a few steps:

  • Find fields that don't have any data. You can use tools like Field Trip to achieve that
  • Verify that these fields are not "about to be used", but "really obsolete"
  • Add all the fields that still hold some leftover data but are unused now
  • Add fields that contain data legal told you to get rid of

The absolute standard approach of every consultant I have encountered is to fire up an Excel sheet and track all fields in a list, capture insights in a remarks column and have another column that indicates the "can be deleted" status - something like Yes, No, Investigating or "Call Paul to clarify". I would be surprised if there's a different approach in the wild (in theory there is).

Excel as source?

In a current project the consultant neatly created one sheet (that's the page, not the file) per object, labeled with the object name, containing rows for all custom fields. Then the team went off to investigate. As a result they identified more than one thousand fields to be deleted.

Now, to actually get rid of the fields, you could outsource some manual labor to either go into your org or use copy-paste to create a destructiveChanges.xml package file for use with the Salesforce ANT tool.

In any case: the probability that there will be errors in transferring is approximately 100%. The business owner will point out: I signed off that spreadsheet and not that XML file! Finger pointing commences.

There must be a better way!
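
To give a flavour of the target format - this is not the tooling described in the full post, and the file names and field list are made up - a minimal Node.js sketch could turn a plain list of Object.Field entries into a destructiveChanges.xml:

// Sketch only: reads one Object.Field__c entry per line and writes destructiveChanges.xml
// The input file name is an assumption; adjust to whatever your spreadsheet exports
const fs = require('fs');

const fields = fs.readFileSync('fields-to-delete.txt', 'utf8')
    .split(/\r?\n/)
    .map(line => line.trim())
    .filter(line => line.length > 0);

const members = fields
    .map(field => `        <members>${field}</members>`)
    .join('\n');

const destructiveChanges = `<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
${members}
        <name>CustomField</name>
    </types>
</Package>
`;

fs.writeFileSync('destructiveChanges.xml', destructiveChanges);
console.log(`Wrote ${fields.length} fields to destructiveChanges.xml`);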


Read more

Posted on 23 February 2019 | Comments (0) | categories: Salesforce XML

Draining the happy soup - Part 3


In Part 2 we had a look at the plan. Now it is time to put it into motion. Let's set up our project structure.

Put some order in your files

Our goal is to distribute happy soup artifacts into packages. In this installment we set up the directory structure for that. Sticking to a clear structure makes it easier to move toward package Nirvana step by step.

Proposed directory structure

Let me run through some of the considerations:

  • I'll keep all packages inside a single directory structure. Name the root after your org. What might pose a challenge is naming it sfdx - too close to the hidden .sfdx directory that exists in your home directory and might exist in the project directories
  • You could keep the whole tree in a single repository or subject each package directory to its own repository. I'd prefer the latter, since it allows a developer to pull only the relevant directories from source control (that's Option B)
  • The base directory, containing the artifacts that won't be packaged, shall be named HappySoup. While it is a rather colloquial term, it is well established
  • I'm a little old-fashioned when it comes to directory names: no spaces, no double-byte characters and no special characters
  • You need to pay attention to sfdx-project.json and .sfdx as well as .gitignore. More on that below
  • When you have a mixed-OS developer community using Windows, Mac or Linux, directory delimiters could become a headache. My tongue-in-cheek recommendation for Windows would be to use WSL

Key files and directories

Initially you want to divide, but not yet package, so your projects need to know about each other. Higher-level packages, which in future will depend on base packages, need to know about them, and each package needs to know about the HappySoup. To get there I adjusted my sfdx-project.json:

{
  "packageDirectories": [
    { "path": "force-app", "default": true },
    { "path": "../ObjectBase/force-app" },
    { "path": "../HappySoup/force-app" }
  ],
  "namespace": "",
  "sfdcLoginUrl": "https://login.salesforce.com",
  "sourceApiVersion": "45.0"
}

The key here is the relative path entries like ../HappySoup/force-app. When you use sfdx force:source:push, the content gets pushed to your scratch org, so it is complete. When you use sfdx force:source:pull, changes you made are copied down to the default path, so the adjacent projects remain as is.

When using pull and push from VS Code, it will use the default username configured for SFDX. To ensure that you don't push to or pull from the wrong place, you need to create one scratch org per project using sfdx force:org:create -f config/project-scratch-def.json -a [ScratchOrgAlias] and then execute sfdx force:config:set defaultusername=[ScratchOrgAlias].

The command will create a .sfdx directory with config files inside your project. Unless all developers checking out that repository use the same aliases (unlikely), you want to add .sfdx to your .gitignore file.

Now you are all set to move files from the happy soup to future package directories. With the relative paths in your sfdx-project.json no packaging is required yet and you still get a fully functioning scratch org.

One pro tip: instead of relying on individual scratch definition files, you might opt to use the one in the happy soup, so all your scratches have the same shape.

Next stop: building the solution before you package. As usual YMMV.


Posted on 22 February 2019 | Comments (0) | categories: Salesforce SFDX

Draining the happy soup - Part 2


We stormed ahead in Part 1 and downloaded all the metadata in SFDX format. Now it's time to stop for a moment and ask: what's the plan?

You need a plan

When embarking on the SFDX package journey, the start is Phase 0. You have an org that contains all your metadata and zero or more (managed) packages from AppExchange. That's the swamp you want to drain.

Phase 0 - happy soup

Before you move to phase 1, you need to be clear how you want to structure your packages. At a high level it could look like this:

Structure - happy soup

  1. You have an unpackaged base that will shrink over time. The interesting challenge there is dealing with dependencies
  2. Some of the components will be used across all systems - most likely extensions to standard objects, triggers and utility classes. Core LWC components are good candidates for base packages too. There can be more than one base package
  3. Your business components. Slice them by business function, country specifics or business unit. Most likely they will resemble some of your organization's structure
  4. A package from AppExchange or a legacy package will not depend on anything. In my current project we moved all Visualforce stuff (pages and controllers) there, since it won't be needed after the Lightning migration is concluded and can then be uninstalled easily.

Read more

Posted on 18 February 2019 | Comments (0) | categories: Salesforce SFDX

The Efficiency Paradox


A common setup in many organizations is to outsource development and/or operations to a system integrator. For agile organizations that can pose a challenge. A key question is skillfulness - how fast and how well can something be implemented?

Does your System Integrator invest in efficiency?

Competition is supposed to keep cost at bay; however, customer relationships and familiarity with the environment (in Dreamland everything is documented) pose a substantial barrier to entry. A barrier to entry enables an incumbent vendor to charge more.

So an engagement manager might see him/herself confronted with an interesting dynamic.

Feedback loop for efficiency

There are a slow and a fast loop running concurrently. Depending on the planning horizon, the engagement manager might not see the outer loop, to the detriment of all participants. Let me walk you through:

  1. Investment in better tools or skills leads to improved efficiency. Work is delivered faster, closer to actual requirements and with less defects
  2. In the short run this leads to a reduction in hours sold (bad for time and material contracts)
  3. A reduction in hours sold leads to reduced profitability since you have more resources sitting on the bench

    In conclusion: as long as the barrier to entry protects you, investing in efficiency is bad for the bottom line. So investment in efficiency should only be made to keep the barrier to entry high enough (add your own sarcasm tag here). However, there's a longer-running loop in motion:
  4. Improved efficiency leads to better quality and shorter delivery time. Work is done fast and good (which might justify higher charges per hour)

  5. Getting good quality soon leads to an increase in customer satisfaction. Who doesn't like swift and sure delivery?
  6. Happy customers, especially when delivery times are short, will find an endless stream (only throttled by budget) of additional requirements to implement
  7. Having more and more new requirements coming in keeps people off the bench and keeps utilization high. High utilization is the basis of service profitability
  8. Investment in efficiency is justified

This is a nice example of a Systems Thinking feedback loop. Conclusions vary with the observed time frame.


Posted on 18 February 2019 | Comments (0) | categories: Salesforce Singapore

Draining the happy soup - Part 1


Unleashing unlocked packages promises to reduce risk, improve agility and drive home the full benefits of SFDX

Some planning required

I'm following the approach "throw it at the wall and see what sticks". The rough idea: retrieve all metadata, convert it into SFDX format, distribute it over a number of packages and put it back together.

To make it more fun I picked a heavily abused, customized and used org with more than 20,000 metadata artifacts (and a few surprises). Follow along.

Learning

Trailhead has a module on unlocked packages on its trail Get Started with Salesforce DX.

While you are there, check out the (at the time of writing 15) modules on Application Lifecycle Management.

Downloading

The limits for retrieving packages (10,000 elements, 39 MB zipped or about 400 MB raw) posed an issue for my XL org. So I used PackageBuilder (I'm growing fond of it) to download all sources. It automatically creates multiple package.xml files when you exceed the limits.


Read more

Posted on 14 February 2019 | Comments (0) | categories: Salesforce SFDX