Looking at this beautiful Spanish scenery, just before a nice early-morning dive into the swimming pool, seems a perfect moment to reply to your follow-up post on automated testing, Jan.
Thanks for taking the time to answer my questions, Luc! I’ve definitely learned a thing or two, and I’m glad that we apparently agree about most of this stuff. Just a few last responses below…
Good to hear, and thanks too, as it allowed me to take a more conscious look at this topic.
About #1. “what to test”
Like I said, my first few automated tests were unit tests testing the results of validations. That seemed like a good idea at the time, because it allowed the tests to a. be very limited in scope, and b. be very isolated from each other.
Would you consider making separate test functions for each (relevant, i.e., sufficiently complex) field validation, like you probably would for each (relevant) function, or is that the wrong scale as far as you are concerned?
My first thought is that, at this very moment, it is too far off from what we (still) need to achieve in our organisation, and, IMHO, also in the NAV world in general: getting automated tests in place that give us a quick understanding of the current state of our code with respect to existing functionality (i.e. having regression testing in place). I keep thinking, though maybe I am too stubborn in that, that this can best and most easily be achieved by staying close to what we are used to in the NAV world: creating functional/integration tests. This is mainly what the MS Test Automation Suite entails, and as such it is a fairly easy start.
On second thought, I agree with your approach. And this is what, in the end, with respect to automated testing, developers should do: write unit tests for their app code. Having said that, I realize I will still push my team to get the integration tests in place first.
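To make Jan's question about scale concrete: one small test function per sufficiently complex field validation could look like the sketch below. It is written in Python for brevity (in NAV itself this would be a test codeunit), and both validation rules are hypothetical stand-ins, not actual NAV application code:

```python
# Hypothetical stand-ins for field validations (illustration only, not NAV code).
import datetime

def validate_posting_date(posting_date, allow_from, allow_to):
    """Mimics an OnValidate trigger: reject dates outside the allowed range."""
    if not (allow_from <= posting_date <= allow_to):
        raise ValueError("Posting Date is not within the allowed range.")

def validate_quantity(quantity):
    """Mimics an OnValidate trigger: quantity must be positive."""
    if quantity <= 0:
        raise ValueError("Quantity must be greater than zero.")

# One small, isolated test function per validation -- the scale in question.
def test_posting_date_inside_range():
    # Should not raise.
    validate_posting_date(datetime.date(2024, 6, 15),
                          datetime.date(2024, 6, 1),
                          datetime.date(2024, 6, 30))

def test_posting_date_outside_range():
    try:
        validate_posting_date(datetime.date(2024, 7, 1),
                              datetime.date(2024, 6, 1),
                              datetime.date(2024, 6, 30))
    except ValueError:
        pass
    else:
        raise AssertionError("expected a ValueError")

def test_quantity_must_be_positive():
    try:
        validate_quantity(0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected a ValueError")
```

Each test exercises exactly one validation, which keeps them limited in scope and isolated from each other, as described above.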
About #2. “developing for testability”
One of the things I’m trying to do in my development work is to use expressions instead of statements as much as possible. I want my functions to be as pure as they can practically be, i.e., fully deterministic and without observable side-effects, in order to optimise their testability.
Fully agree, even though this is not an easy thing to achieve in NAV due to the habits we have developed in the NAV world. But yes, this makes code easier to test.
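The difference between the two styles can be sketched in a few lines. This is a language-agnostic illustration (written in Python, with made-up names): the impure version reads and mutates shared state, the way much classic NAV code works with record fields and globals, while the pure version is fully determined by its parameters:

```python
# Impure style: the function mutates shared state, so a test must
# set up and inspect that state (illustrative example, not NAV code).
state = {"total": 0.0}

def add_line_amount_impure(amount, discount_pct):
    state["total"] += amount * (1 - discount_pct / 100)

# Pure style: all inputs are parameters, the result is the return value.
# Same inputs always give the same output, with no observable side-effects.
def line_amount(amount, discount_pct):
    return amount * (1 - discount_pct / 100)

# The pure function is trivially testable -- no setup, no teardown:
assert line_amount(200.0, 10) == 180.0
assert line_amount(50.0, 0) == 50.0
```

Testing the impure version would require resetting `state` before every test and asserting on it afterwards, which is exactly the coupling that pure functions avoid.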
About #3. “predefined test data”
I’m not sure I fully understand: you say your test data baseline should be 100% stable and known, but you are using CRONUS data? We have no real way of knowing what changes Microsoft makes to the demo data between releases, do we? Wouldn’t you be better off generating all of your own data in a new NAV company? Or is it just a trade-off between effort and security?
You're fully right in all respects. But practically, in our current situation, our daily test run has a 100% identical data baseline as long as we haven't moved to another version of CRONUS. And … even when we move to a new version … as long as our tests still prove to be successful, I will call it stable. If the contrary happens, I will start considering "generating … data in a new NAV company".
The rest of your replies make perfect sense to me.