CDTV based on hunches
7 June 2018

Often, people first need to see or feel a problem before they are willing to change. For an organization that has only vague ideas about test improvement, we present here a long list of problems that the organization has. At least, that is what we suspect, based on an analysis of a pile of assessment results from all kinds of organizations. One hopes no organization has all of these problems at once, but by putting the 'hunches' on the table and asking those involved, "Why do you think you do not have this problem?", you quickly get people talking. They usually recognize a number of bottlenecks right away, so a vague idea can quickly be upgraded to a concrete topic for improvement. Below is the list we recently compiled.

Basics of testing

  • It is not known which use cases are the most risky for the customer, and/or tests are not prioritized.
  • No test design techniques are used to get a specific test coverage.
  • Too much time is spent on writing test cases.
  • Testing is not done according to customer scenarios.
  • There are no acceptance criteria for the customer requirements / user stories.
  • Test design is difficult because the system design is not clear.
  • The team does not have enough domain knowledge to validate the software.
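
To make the points about test design techniques and targeted coverage concrete: a technique such as boundary value analysis derives a small, focused set of test cases directly from a requirement, instead of ad-hoc case writing. A minimal sketch in Python; the `is_valid_age` rule (18 to 65 inclusive) is a hypothetical example, not taken from any assessment:

```python
def boundary_values(lower, upper):
    """Boundary value analysis: for a valid range [lower, upper],
    test just below, on, and just above each boundary."""
    return [lower - 1, lower, lower + 1, upper - 1, upper, upper + 1]

def is_valid_age(age):
    """Hypothetical requirement: age must be 18..65 inclusive."""
    return 18 <= age <= 65

# The technique yields six targeted cases with known expected outcomes:
cases = boundary_values(18, 65)              # [17, 18, 19, 64, 65, 66]
expected = [False, True, True, True, True, False]
assert [is_valid_age(c) for c in cases] == expected
```

The point is not the code itself but the traceability: each test case exists because the technique demands it, which also answers "why are we writing this test case?" and curbs time spent writing redundant ones.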

Planning, estimation, realization

  • Testing tasks are not sufficiently estimated / planned.
  • User stories are too big, too small, or too technical.
  • Technical debt in the software often causes bugs in production.
  • Many bugs are found and fixed only in the 'hardening sprint' at the end of the project.
  • Testing is never finished at the end of the sprint or when the software is released.
  • There are always many things that the customer wants changed.
  • The customer finds serious bugs in the software.

Test management and strategy

  • There is no generic test approach.
  • The Definition of Done (DoD) does not include testing.
  • Acceptance testing is not part of the sprints/iterations.
  • Software is ‘thrown over the wall’ to the customer.
  • There is no (test) reporting about the product, the testing done, and the quality of that testing.
  • It is not clear what bugs are found, or which areas of the software are less stable or of lower quality than others.
  • Customers don’t know what has been tested.

Test profession and teamwork

  • Members of the team are not educated in testing.
  • Exploratory testing is done, but not in a structured way.
  • Members of the team are not eager to pick up the testing task.
  • Testing is only done by ‘the tester’.
  • The tester is not involved in refinement.
  • Testers have no experience in programming.
  • Developers have no experience in testing.

Test automation, unit tests, testing pyramid

  • There are no or few unit tests.
  • There are no or hardly any automated tests.
  • (Automated) unit tests have 100% statement coverage, but the customer still finds many problems.
  • TDD, BDD and ATDD are not done.
  • Automated tests often fail.
  • There is no CI/CD process.
  • Test automation costs too much time or effort.
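
The statement-coverage item above is easy to illustrate: 100% statement coverage does not mean all branches (let alone all customer scenarios) are exercised. A minimal sketch; `shipping_cost` and its free-shipping rule are a made-up example:

```python
def shipping_cost(order_total):
    """Hypothetical rule: orders of 50 or more ship free, others cost 5."""
    cost = 5.0
    if order_total >= 50:
        cost = 0.0
    return cost

# This single test executes every statement -> 100% statement coverage:
assert shipping_cost(60) == 0.0
# ...yet the path where the 'if' is false was never taken; a bug in the
# base cost would reach the customer. Branch coverage needs this case too:
assert shipping_cost(20) == 5.0
```

This is why a high coverage number alone says little about product quality: the metric measures which code ran, not which behaviors were checked.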

Test environment

  • Test environments are not sufficiently available.
  • Test environments are not sufficiently representative.
