Assessment research

 

“If you don’t know where you are, a map won’t help.” [Watts Humphrey]. Investigating the current situation is a crucial part of performing an assessment. Improvement suggestions are based on the discrepancy between the current situation and the business and improvement objectives. The investigation follows the chosen approach and is performed against the improvement architecture (Business objectives, Improvement objectives, Scope, Context and Constraints: BISCC).

Research

Time, budget and priority
Assessment activities require time from people in the organization, for example for interviews. Research activities therefore have an impact on business as usual.

Kick off with involved people in the organization
Management explains the importance of the assessment and works on getting buy-in from the people involved. The improvement architect (or the person who will perform or lead the assessment) informs the organization about the way the assessment is organized. All BISCC aspects are in scope for the kick-off.

Interviews
Interviews are prepared (determining the specific objectives and scope of each interview, and who to interview), held, and notes are taken. This includes interviews with groups of people.

Assessment strategy, interview matrix
An assessment strategy helps focus the assessment activities. An interview matrix shows how interview topics are mapped to the roles that people have in the organisation.
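As an illustration, an interview matrix can be kept as a simple mapping from interview topics to roles. This is only a sketch; the topic and role names below are hypothetical and not taken from any particular model:

```python
# Hypothetical interview matrix: which interview topics are relevant
# for which roles in the organisation. Topic and role names are
# illustrative assumptions, not part of any specific assessment model.
interview_matrix = {
    "test strategy":     ["test manager", "project manager"],
    "defect management": ["tester", "developer"],
    "unit testing":      ["developer"],
    "release decisions": ["project manager", "test manager"],
}

def topics_for_role(matrix, role):
    """Return the topics to cover when interviewing someone in `role`."""
    return [topic for topic, roles in matrix.items() if role in roles]

print(topics_for_role(interview_matrix, "developer"))
# → ['defect management', 'unit testing']
```

The same mapping supports the reverse question as well: given a topic, which roles still need to be interviewed to cover it.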

Questionnaires
Questionnaires are prepared (which questions, which answer options, who to distribute them to), distributed and the results are collected.

Documentation study
Documentation study is prepared (which documents to study, which criteria to use for assessing them), carried out, and assessment notes are taken.

Observations
Observations are prepared (which scheduled meetings to attend, which specific objectives apply to the observations, which other activities to observe), executed, and notes are taken.

Demonstrations
People demonstrate the results of their work while explaining why things are the way they are. Demonstrations give a good indication of people’s motivation and of whether they feel appreciated.

Walking around
Walking around is a way to informally observe people at work.

Working along
Working along is an intrusive form of observation: the observer takes part in the work and gains in-depth insight.

Idea raising sessions
Idea raising sessions are part of small scale assessment approaches (like Compact TI).

Analysis
The information collected during the assessment is analysed so that a clear picture of the current situation is formed. Oracles help in recognising problems and consequently point towards potential improvements.

Assessment research heuristics

Planning support. Assessment activities come on top of the regular project activities of the people concerned. Planning interviews and other assessment activities can be a burden. When this happens or is anticipated, planning support from within the organization is needed.

An assessment process is exploratory. Insight gained from an interview, observation or study is used to drive, focus or adjust subsequent assessment activities (extending an interview, changing the interview plan, adding an observation, asking specific questions in the next interview, etc.).

We don’t know what we don’t know. We need to be aware that important information can hide outside the initial scope of the research.

Plan for unforeseen research. During the analysis of research information, questions could be raised that require additional interviews, documentation study, observations or other research activities.

Follow the problems. When a problem is found during a session, exploring it further is more important than following any plan that was made for the session, such as going through checkpoints. This provides more value to the organisation.

Trial balloon. During the assessment, hypotheses are formed on the fly. These can be checked during interviews by sharing them with the interviewee.

Intermediate checks. Have regular meetings with important delegates or stakeholders during the assessment to check that the assessment is on the right track (could be a daily stand up).

Buy-in. Although assessment sessions focus on getting information and finding problems, do not underestimate the positive effect these sessions can have on building buy-in with the people involved for changes that they may become involved in.

Prepare each research activity. The time available for an interview or other information-gathering activity is limited. Example: a checklist with ‘must have’ questions or topics to address.

Plan assessment sessions of typically 45 minutes to 1.5 hours. Allow at least three quarters of an hour to get the bare minimum of information from an interview or observation. Interviewing is intensive; quality often drops after 1.5 hours. During a debrief, conclude whether the session provided sufficient information. When the session ended with questions still unanswered or topics uncovered, schedule an additional session at an appropriate moment.

Checkpoints. Checkpoints in models are like test cases: they tend to focus on checking and, if used like that, are less powerful in finding problems.

Collect opinions. Ask questions like “What do you think of the quality of – some product made or task performed by others –?”. Examples: ask developers about the quality of testing and of the defect records; ask testers about the quality of the unit testing.

Ask open questions. Ask open questions starting with why, what, how, etc. Let people tell a story. This gives more information in a session and a higher probability of finding problems.

Ask closed questions. This is done to check specific things, like “somebody said this or that …”. It is used, for instance, for checkpoint hunting and double checking.

Observe the work. People at work show the total picture: non-verbal communication can be observed, as well as how things are really done. People telling about the work they do only tells part of the story.

Verify, double check. It is good practice to verify or double check findings that

  • are inconsistent with other information
  • are too consistent (too good to be true)
  • are surprising
  • are crucial information (positive: strong area, as well as negative: there is an important problem)

Probe effects.

  • During observations, the presence of an observer can have impact on what is going on: people may behave differently when being watched.
  • When a representative of the organisation is part of the assessment / observation team, people may behave differently and may talk less openly.
  • When a representative of the organisation is part of the assessment / observation team, she may recognise false or less representative information, e.g. when people tell a too optimistic version of a story (this is a positive probe effect).

Speak with management first. Management can draw the big picture, which is convenient to have at hand when preparing for assessment sessions that go into more detail.

Speak with management first, but be careful. Management has a tendency to tell how things should work and often has the impression that things go more smoothly than they do according to their people. The information from management can later be checked against the information obtained from other people.

Speak with management later. An argument for speaking with management later is ‘trial ballooning’: hypotheses formed in earlier sessions can then be checked with them.

Representativeness – top of the class. Organisations that want to score tend to put ‘top of the class’ people or processes forward for assessment. This affects representativeness and puts integrity at stake.

Visualise, use whiteboards. Using pictures to explain questions or check answers improves the quality of the communication and thus the quality of the information gained. Whiteboards and blank paper are valuable assessment tools.

Checkpoint hunting. The main issues have been found, but some areas in a model are not yet covered so well. When the approach prescribes that a model is used and analysed, some assessment sessions focus on staying close to the checkpoints so that they can be checked (or not) based on concrete information. The improvement architecture may dictate this.

Dress code. Dressing properly is important. Auditing allows for formal dress to create distance and radiate authority. Matching the dress code to what is common in the organisation helps create an atmosphere of trust.

Jumping to conclusions. It regularly happens that the assessment team encounters information that clearly seems to identify problems they were looking for (or not looking for). The sensation ‘we have found it!’ is understandable, but beware of developing tunnel vision.

The Phase of the Great Confusion. Bang! It happens: a new piece of information is completely out of line with the model built on the information gathered so far.

  • this could be caused by tunnel vision; information from more sources is needed to better understand the inconsistency
  • if the inconsistency remains, it is a finding of the assessment
  • if everything is consistent and this phase does not occur, that is a good moment for the assessment team to check itself for tunnel vision; it does happen, though, that all people in an organisation are very well aligned

Checkpoint pass or fail dilemma. A checkpoint is set to YES or PASSED when it is satisfied completely. But what if it is not completely satisfied, while there seems to be no problem? Then this may help: is there an improvement suggestion that is relevant in this context? If not, there is no value in failing the checkpoint.

Maybe’s. When there is insufficient information to declare a checkpoint OK or NOK, consider marking it with a question mark.
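As a sketch, the pass/fail rule and the ‘maybe’ state above can be expressed in a few lines; the status names and the helper function are assumptions for illustration, not part of any assessment model:

```python
from enum import Enum

class Status(Enum):
    PASSED = "yes"
    FAILED = "no"
    MAYBE = "?"  # insufficient information to judge the checkpoint

def judge_checkpoint(fully_satisfied, enough_information=True,
                     relevant_improvement=None):
    """Hypothetical sketch of the rule above: fail a partially satisfied
    checkpoint only when a context-relevant improvement suggestion exists."""
    if not enough_information:
        return Status.MAYBE
    if fully_satisfied:
        return Status.PASSED
    return Status.FAILED if relevant_improvement else Status.PASSED
```

For example, a checkpoint that is not completely satisfied but has no relevant improvement suggestion still comes out as PASSED, while one with too little information behind it stays a ‘maybe’.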

Take notes during or after each research activity. Did we get the information we wanted? Did we miss anything? Any inconclusive answers, or contradictions with other information? Any unforeseen remarks?

Analyse as soon as possible. Analyse the information from the research activities as soon as possible, to prevent loss of data and confusion with later information. When a model is used, link the analysis data to the model (often to checkpoints).

Update notes after each interview. Fill in and update the notes and analysis after each interview. Archive them for future reference and analysis.

Use mindmaps. Mindmap software is a useful tool for taking notes during exploratory assessment activities.