Well, actually, Jan, everything you always wanted to know about automated testing in NAV and did ask me about. 😉
Thanks for starting this creative little experiment, allowing me to elaborate somewhat more on some aspects of automated testing. Needless to say, my answers reflect my own knowledge and experience and are in no way the one-and-only truth. And BTW: my Dutch Dynamics session had a very restricted scope, being the way we started using the NAV Test Automation Suite MS provides us with every release.
So let's go … I will state your questions and then answer them.
When I first started writing automated tests, I found myself testing things that were so obvious that they probably never posed any risk to the stability of the application in the first place. Can you give some rule of thumbs for what to focus our test effort on?
Wow, this already is a comprehensive question, as the answer partly depends on the context: do you want to write tests while you are developing a new feature, or for an already existing one?
Let's approach them separately (even though they still have a lot in common).
To be honest, I do not have a standard list for this. But my approach would be (without going into too many details):
- Tests that prove the feature is doing what it is designed for.
Note that in general these tests are highly focused on the happy path, so we need to make sure …
- To test whether the feature fails gracefully, i.e. whether the feature handles exceptions correctly.
This is often overlooked, yet maybe even more important to get right.
A good tester is someone for whom getting this done is second nature; who takes pride in trying to break the feature by especially challenging this side of it.
If time does not suffice to get this done for each new feature, focus on the business critical ones. Those that:
- are frequently used
- have high impact on the data, especially general ledger and alike
- have been very buggy before
Now having written this: this would actually also be my approach for … existing features. In the NAV world, we probably all share the same feeling of wondering where to start in the humongous sea of features; this heritage of code we have built up ever since we started with it. Start with the tests that will help the most to improve and guard the quality of the code.
You mentioned that testing NAV (ERP?) is different from testing most other systems, since practically everything goes through the database and there’s no easily available way to mock (simulate) this database interaction. Do the developers in your team have testability in mind when they are writing new features?
Well … I would like to say yes. But no, that's unfortunately not the case, and this all has to do with where we come from in the NAV world. Developing without:
- design patterns
- code reviews
- structured testing
- source code management
And yes, as in the NAV world in general, we are improving on this, and testability is part of that, albeit somewhat further up the road. Now that we have started to use the standard Test Automation Suite, testability is slowly becoming part of our vocabulary, as we do run into parts of our code that are hard to get tested easily.
You mentioned that tests should ideally create (and clean up) their own data, returning the database to its pristine state after all the tests have run. In our experience, being overly strict about that costs time twice – once during test development, and once during each test run. How do you feel about isolating some of the data creation in a demo data creation tool, and running your tests in a database that already has that generated data on board?
Maybe I should mention that the basis for this is that each test should be run from the same baseline, to make it reproducible. The baseline in our case is CRONUS, as provided by MS on the product DVD, and which MS also uses as their baseline. Adding your own data creates another baseline, so why not? Just make sure that in between each test codeunit the state is back to this baseline.
BTW: an additional reason why we are using CRONUS is that running the Test Automation Suite on a copy of our production database (approx. 600 GB) never finished. It seemed to get stuck on the large number of item ledger entries. However, I never investigated deeply enough to learn the exact reason, as the CRONUS baseline suffices very well.
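Restoring that baseline between test codeunits need not be done by hand, by the way: a test runner codeunit can roll back all database changes a test codeunit makes. A minimal sketch, assuming a recent NAV version that supports the TestIsolation property (the object number 50000 and its name are hypothetical):

```
OBJECT Codeunit 50000 My Test Runner
{
  PROPERTIES
  {
    Subtype=TestRunner;
    // Roll back all database changes after each test codeunit,
    // automatically returning the data to the baseline.
    TestIsolation=Codeunit;
  }
  CODE
  {
    BEGIN
    END.
  }
}
```

With TestIsolation set to Codeunit, even a test that forgets to clean up after itself cannot pollute the baseline for the next test codeunit.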
I guess that could significantly reduce the execution time, right? And that becomes even more relevant e.g. when you want to do some form of gated check-in, where tests must pass before a changeset is accepted into your code repository?
Also, running in parallel forces you to make your tests fully independent of each other – as they should be.
Nope. As we're still in the phase of getting all the standard tests working, this hasn't been our focus. But we surely will in due time, just like we will also improve the way we are creating our test data now.
In my experience, designing your tests in a code editor leads to the worst results. I think it’s best to formalise your (existing, manual) tests, i.e. listing the steps and verifications, in a text editor, in plain English before converting them to code. Would you agree?
Fully agree. Can't argue with that (or, as we say in Dutch: geen speld tussen te krijgen ;-).
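To illustrate, a formalized test for a hypothetical sales scenario could look like this before any code is written (the GIVEN-WHEN-THEN wording mirrors the comment pattern Microsoft uses in its own test codeunits):

```
[SCENARIO] Posting a sales order for a blocked customer fails
[GIVEN]    A customer that is blocked for all transactions
[GIVEN]    A sales order for that customer with one item line
[WHEN]     The sales order is posted
[THEN]     Posting is aborted with a "customer blocked" error
[THEN]     No ledger entries have been created
```

Once written down like this, each line maps almost one-to-one onto a data-setup call, an action, or a verification in the eventual test function.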
Only having access to fields that are visible from the GUI can be quite limiting – there is no straightforward way to get e.g. the Line No. from a Sales Line. Any advice on that (apart from using unit tests instead)?
A field like Line No., which, as a design pattern, should not be placed in the GUI, is actually not the only issue. Any field that has Visible=FALSE poses a similar problem. For the latter I have asked for the ability to change its visibility from code. I think something similar should be requested for fields like Line No.: to be allowed to access fields not available in the GUI, like you can with the About This Page feature.
You mentioned the other day some strange differences between running the test suite from the Windows client, and running it 'headlessly' from PowerShell.
Can you elaborate a little on that? Did you manage to solve that issue?
The standard Test Automation Suite contains a number of tests (like those running reports) that need a client session. Running those 'headlessly' from PowerShell therefore yields an error.
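For the tests that do run headlessly, a sketch of how a test runner codeunit can be invoked from PowerShell; the Invoke-NAVCodeunit cmdlet ships with the NAV administration tools, but the module path, server instance name, company name, and codeunit ID below are placeholders you would replace with your own:

```powershell
# Load the NAV administration cmdlets (path differs per NAV version/installation).
Import-Module 'C:\Program Files\Microsoft Dynamics NAV\90\Service\NavAdminTool.ps1'

# Run a test runner codeunit headlessly against a given server instance.
# 'DynamicsNAV90', 'CRONUS' and codeunit ID 50000 are placeholders.
Invoke-NAVCodeunit -ServerInstance DynamicsNAV90 -CompanyName 'CRONUS' -CodeunitId 50000
```

A script like this is also the natural building block for the gated check-in scenario mentioned above, as it can run unattended on a build server.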
Another one has been scheduled: Testability Framework Deep Dive. Date & time: July 6th, 10am CET & 4pm EST. (You can find the recording here.)
And during the next NAV TechDays I will lead two pre-conference workshops and co-lead a conference session on automated testing.
So get yourself informed …!