During FOSDEM, a person called Spider/Spindel told us about their QA work in a corporate environment and shared some general principles. This page is a dump of that conversation, covering the general principles and key advice they gave us.

When it comes to functional testing, these are the steps that often get overlooked or done haphazardly by developers who know how their tools are supposed to work.
So, the basic idea is to set up a very simple test plan.

12:07
Take the "new features in this release" entries from the changelogs of the past 10 releases.
Don't bother with anything older, or with other features.
Write them down in the form of:
User is expected to <do this function>.
Then fill backwards with the prerequisites that need to happen before.

12:07
Hand that list of items to someone else, and watch them fail miserably to achieve it.
12:10
So, for example, a random feature I know exists:
     "user is expected to bulk rename CSV files to .csv"
     gives us:
           prerequisite 1, a folder containing .CSV files
           prerequisite 2, a folder containing a mix of .CSV and other files
           prerequisite 3, a folder containing a mix of .csv and .CSV files with duplicated names
12:11
From that you work backwards to the previous test,  "user is expected to discover how to bulk rename files". 
12:12
My recommendation is to keep the test plan as basic as possible. There are plenty of tools out there to make writing plans more organized and so on, but they all move the infrastructure away from somewhere that actually gets maintained.
Start off with a single text-file in the root of the repo that contains your test steps.
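
A minimal sketch of what that file could look like, reusing the examples from this conversation (the file name, the checkbox layout and the exact wording are assumptions, not something prescribed above):

    TESTPLAN.txt, kept in the root of the repo and updated with each release:

        Prerequisite 1: a folder containing .CSV files
        Prerequisite 2: a folder containing a mix of .CSV and other files
        Prerequisite 3: a folder containing a mix of .csv and .CSV files with duplicated names

        [ ] user is expected to discover how to bulk rename files
        [ ] user is expected to bulk rename CSV files to .csv
        [ ] user is expected to start a screen recording
        [ ] user is expected to find their recording in <some folder>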

12:12
After that, maybe migrate to a spreadsheet or a wiki, but the more steps you move away from the source, the more maintenance burden you add in the future.

12:15
Many of these tests can be automated at a later point (OpenQA etc. can cover that), but you still want a usable, readable test-plan that you can step through and tick off.
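
As a sketch of what that later automation could look like, here is a made-up pytest-style check for the bulk rename case above (bulk_rename_to_lowercase_csv is a hypothetical stand-in for whatever the application actually does):

    import tempfile
    from pathlib import Path

    def bulk_rename_to_lowercase_csv(folder: Path) -> None:
        # Hypothetical stand-in for the application's bulk rename feature.
        for f in folder.glob("*.CSV"):
            f.rename(f.with_suffix(".csv"))

    def test_bulk_rename_mixed_folder():
        # Prerequisite 2 from the example above: a mix of .CSV and other files.
        folder = Path(tempfile.mkdtemp())
        (folder / "report.CSV").touch()
        (folder / "notes.txt").touch()

        bulk_rename_to_lowercase_csv(folder)

        # The .CSV file is renamed, the unrelated file is left alone.
        assert (folder / "report.csv").exists()
        assert (folder / "notes.txt").exists()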

12:16
Linking it to issues / MRs / etc. is another interesting feature you can add, but once again, only do that once you have made a few releases with updates to the test plan. Linking data is worthless if you don't maintain and work with a test-plan; that's just extra work for no benefit at all.

12:17
A yearly review pass that goes through the test-plan and correlates it with your documentation is a good idea: removing all those steps that say that when a user clicks a folder, nautilus should open a new spatial window in a consistent place, etc.
12:17
Also, if your test-plan is maintained through issues/MRs, it can sometimes point out where your documentation is lacking.
12:18
Because the feature that you could write a two-sentence test-plan entry for might need a few pages of documentation. 

Don't even bother with covering the basic functionality that you expect it to have; that will follow sort of naturally.

12:20
And many of them can be integrated with one another:

user should be able to start a screen recording <by doing>
user should find their recording in <some folder>
user should be able to play recording by <something>
user should be able to upload recording by <some steps>


12:21
The test instructions then end up exercising a lot of steps, and may sometimes show you something you didn't expect.
12:23
The basic point of the test-plan is that you should be reasonably certain that the system under test does what a user would expect it to do.
12:23
And that you should be able to follow it without having too much experience on hand.

Thib
12:24
I’d argue the less experience you have, the better the test is


Spider ( Spindel )
12:25
It depends on what you're testing.

12:26
If you're doing this kind of testing for an API rather than an application?
user should be able to get a new window on the screen by importing the python module and writing <bar>.
12:27
"but that would be done by automation!?"   
Except the automation wouldn't catch that there's a lack of installation instructions, and that the instruction contain python2  or were for installing as root.

12:29
We maintained test plans for the API, system installation and upgrades,
as well as for end users of the applications.
API-wise we just did the basics of hello world, but it was often enough to catch things like "the config string for databases changed" or that the documentation for something wasn't good enough.
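
A minimal sketch of how small such an API "hello world" entry can be; sqlite3 below is only a stand-in for whatever API is actually under test, and half the value is that the tester first follows the published installation and setup instructions literally, which is where missing or outdated instructions show up:

    # The test plan starts with "install the package by following the published
    # instructions word for word"; that is where "the docs still say python2"
    # or "install as root" surfaces. The code below is only the final step.
    import sqlite3  # stand-in module for illustration

    # The smallest documented call: open a connection and run one query,
    # using the connection/config string exactly as the documentation gives it.
    conn = sqlite3.connect(":memory:")
    assert conn.execute("SELECT 'hello world'").fetchone() == ("hello world",)
    conn.close()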

12:32
Because the testers are not developers, they got stuck on getting their editor to work, or on figuring out that the config file and their code were not the same thing.

12:34
Some features of a test-plan that you may or may not want to use:

Deep links to wiki/documentation from test cases.
Scaffold data (sets of files to work on when renaming, etc.)

Deep links add another maintenance burden and can be painful, but will also allow you to link to the relevant fragment of documentation for the user.
Without them, your users might find it frustrating; with them, you might just document stuff "enough to pass the tests" rather than write useful documentation.

12:36
Scaffold data has the same problem, but without it users might not test what you think they are testing.
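
A minimal sketch of how scaffold data for the bulk rename example could be generated (the folder and file names are made up for illustration):

    from pathlib import Path

    # Build the three prerequisite folders from the bulk rename example above.
    scaffold = Path("scaffold") / "bulk-rename"

    only_upper = scaffold / "only-upper-csv"  # prerequisite 1: only .CSV files
    mixed = scaffold / "mixed"                # prerequisite 2: .CSV mixed with other files
    duplicates = scaffold / "duplicates"      # prerequisite 3: .csv and .CSV, duplicated names

    for folder in (only_upper, mixed, duplicates):
        folder.mkdir(parents=True, exist_ok=True)

    for name in ("report.CSV", "data.CSV"):
        (only_upper / name).touch()

    for name in ("report.CSV", "notes.txt", "image.png"):
        (mixed / name).touch()

    # Only meaningful on a case-sensitive filesystem, where these are two files.
    for name in ("report.CSV", "report.csv"):
        (duplicates / name).touch()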

12:36
The reason I recommend keeping test-plans in the source is that different versions of software have different test-plans.

12:37
If you expect nautilus in Gnome 40 to work according to how nautilus in gnome 2.0 was documented to work, your users will have a bad time.

12:38
But that also causes pain if you deep link to things, as your test-plan might link to updated documentation rather than the documentation for the version it's supposed to cover.

12:41
If anyone wants to know more, James Bach has some good writing; I'd probably recommend "Lessons Learned in Software Testing" as a starting point.
