Adventures in TDD
There are two challenges getting into TDD:
- Why should I write a test upfront when I know it will fail (there's a massive aversion to failure in my part of the world)?
- Setting up the whole thing.
I made peace with the first challenge by using a very large monitor and a split screen, writing code and test in parallel, deviating from the 'pure teachings' for the comfort of my workflow.
The second part is trickier: there are so many moving parts. This post documents some of the insights.
The core idea of TDD is that you create your test first and only write code until that test passes. Then you write another failing test and start over writing code.
- In JavaScript, npm gives you a default `test` script in your package.json you can run any time. For the connoisseur there are tools like WallabyJS or the VSCode Mocha Sidebar that run your tests as you type and/or save. The tricky part is: which testing libraries (more on that below) to use?
- In Java, Maven runs tests in the `test` phase of its default lifecycle, and JUnit is the gold standard for tests. For automated continuous IDE testing there is Infinitest.
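On the npm side, the wiring can be as small as a package.json sketch like this (assuming Mocha as the test runner; the name and versions are illustrative):

```json
{
  "name": "my-app",
  "scripts": {
    "test": "mocha"
  },
  "devDependencies": {
    "mocha": "^10.0.0"
  }
}
```

With that in place, `npm test` runs your suite locally, and a CI pipeline can call the same script later.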
Automated testing after a commit to GitHub, GitLab or Bitbucket happens once you configure a pipeline as a hook into the repository and have tests specified that the pipeline can pick up. Luckily, your Maven and npm scripts will most likely work as a starting point.
The bigger challenge is the orchestration of various services like static testing, dependency management and reporting (and good luck if your infra guys claim they could set up and run everything in-house).
Some of the selections available:
- Repository: Github, GitLab or BitBucket
- Pipeline: Heroku Flow, Bitbucket Pipelines, Travis CI, Jenkins, CodeShip, CircleCI and more
- Testing and reporting services: CodeClimate, Sauce Labs, Codacy, Coveralls, Snyk (for vulnerabilities), Greenkeeper (for dependency management), and many more. Some run extra tests, some report on tests that ran in your pipeline.
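To give a sense of how little a pipeline needs to get started, here is a minimal Travis CI configuration sketch that just reuses the npm test script (the Node version is illustrative):

```yaml
# Minimal Travis CI sketch: dependencies are installed automatically,
# then the npm test script runs.
language: node_js
node_js:
  - "18"
script:
  - npm test
```

Most of the other pipeline services follow the same pattern: a small YAML file in the repository pointing at your existing build scripts.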
There are multiple dimensions you want to test, but not all of them at each moment in the development workflow. The rule of thumb: move from microtests (testing the code in the current editor focus) to 'all bases covered' in your release branch. The closer to a release, the higher the coverage needs to be.
- Tests that run without backend data
- Unit tests: Does your code run without errors
- Assertions: Does your code compute correctly
- Coverage: Is your code sufficiently covered with tests
- Quality/Style: Is your code understandable and can it be maintained
- Tests that need backend data
- Component tests: Do components correctly receive and return data
- API tests: Do the APIs react as expected
- UI tests: Do interactions work on all targeted devices
- Load tests: Does the environment scale
The latter is the hardest: your test environment is most likely set up differently than production, so a performance test not run in production doesn't tell you that much.
I could say 'all of the above', and I do have my favorites, some out of convenience, some out of conviction:
- Version Control
- Bitbucket for my private repositories
- GitHub for the public (not moving)
- CI: No clear favorite. I like the Bitbucket and Heroku pipelines, since they are integrated into the source or target environment. I have used the others too. Fond of Travis (my German bias ;-) )
- Maven/JUnit for Java
- Chai for tests in Postman
- Lightning Testing Service for Salesforce Lightning components
- Documentation / Paperwork
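On the Chai-in-Postman point above: Postman's Tests tab exposes a Chai-style `pm.expect` API, so assertions look familiar. A minimal sketch (this only runs inside Postman's sandbox, where the `pm` object is provided):

```javascript
// Runs in Postman's Tests tab; `pm` is supplied by the Postman sandbox.
pm.test("status is 200", function () {
  pm.response.to.have.status(200);
});

pm.test("body has an id property", function () {
  const body = pm.response.json();
  pm.expect(body).to.have.property("id");
});
```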
As usual: YMMV