A systematically conducted evaluation does not rely on the subjective impressions of individuals but provides reliable evidence about training quality. Its standardized procedure makes the results highly comparable.

At FIRE, we understand evaluation as the description and assessment of rescue worker training. For this purpose, data are collected, analyzed, and interpreted.

An evaluation shows what is already going well and which areas can be improved. If several evaluations are conducted, it is possible to compare different courses or to track development over time.

Step 1


A solid data basis ensures reliable results, so good measuring instruments (usually questionnaires) are essential. At FIRE, we have already developed several evaluation questionnaires specifically for the fire department context. These provide information on different dimensions of training quality, e.g. instructor behavior or the level of requirements. The first step is to select the appropriate questionnaire.

To ensure that everything runs smoothly, the data collection should be well prepared. Here you will find a checklist:

Checklist Preparation & Data Collection

Step 2

Data Collection

Data collection takes place at the end of a course. If an exam is scheduled, data collection takes place before the exam; an exam evaluation (FIRE-P) takes place after the exam but before the results are announced.

To motivate students to participate in the evaluation, the purpose of the evaluation should be explained. Participants should realize that their feedback is valued and has a positive effect (e.g. improvement of teaching quality, learning opportunities for the instructors, recognition for good teaching).

For more information, see the checklist under Step 1.

Step 3


There are fixed rules for analyzing the questionnaires, e.g. in which order the individual values are summed up or how to deal with missing values. This ensures that the results are always the same - regardless of who analyzes the questionnaires. This is called evaluation objectivity. For the FIRE questionnaires, there are analysis aids to support the statistical analysis and result sheets for a clear presentation of the results.
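The idea of fixed analysis rules can be sketched in code. The snippet below is only an illustration, assuming a 5-point rating scale and a hypothetical rule that a respondent's scale score counts only if at least half of the items were answered; the actual rules for the FIRE questionnaires are defined in the analysis aids.

```python
# Illustrative scoring routine for one questionnaire scale.
# Assumptions (not the official FIRE rules): 5-point ratings,
# missing answers recorded as None, and a scale score is only
# computed if at least half of the items were answered.

def scale_mean(answers, min_answered_ratio=0.5):
    """Mean of one respondent's answers for a scale,
    or None if too many items are missing."""
    valid = [a for a in answers if a is not None]
    if len(valid) < len(answers) * min_answered_ratio:
        return None  # too incomplete to score
    return sum(valid) / len(valid)

def course_mean(respondents):
    """Average the per-respondent scale means, skipping unscorable ones."""
    scores = [m for m in (scale_mean(r) for r in respondents) if m is not None]
    return sum(scores) / len(scores) if scores else None

# Example: three respondents rate a 4-item scale (1-5); one answer is missing.
respondents = [
    [4, 5, 4, 4],
    [3, None, 4, 3],  # one missing value - still scorable
    [5, 4, 5, 5],
]
print(round(course_mean(respondents), 2))  # -> 4.11
```

Because every step (which answers count, how missing values are handled, how values are averaged) is fixed in advance, anyone running the analysis arrives at the same result - which is exactly what evaluation objectivity means.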

Instructions for Data Analysis

Step 4


Only when all results are complete should the interpretation and classification begin.

Start with the calculated mean values. The scale descriptions help to interpret them. The following guiding questions help you classify the values:

  • Are the absolute values in the lower, middle or upper range of the scale?
  • How are the values relative to each other? Where are the strengths and weaknesses?
  • How are the values compared to previous or other courses?

This usually leads to the question: why did the feedback turn out the way it did? To answer it, the results should be discussed with the participants whenever possible. Open comments can also provide important clues here.

Step 5


The evaluation process does not end with the analysis and interpretation of the results. Now it is a matter of drawing practical benefits from the results.

  • If the evaluation results are very good, those responsible - usually the team of instructors - should receive the corresponding appreciation. If the results are less favorable, the focus should not be on finding culprits, but on how better results can be achieved in future courses and what support may be necessary to achieve them.
  • Based on the evaluation results, concrete goals for future courses should be formulated. Which measures can be used to build on strengths and reduce weaknesses? Which suggestions of the participants can be implemented?
  • The evaluation process should be repeated in future courses. This way, it is possible to check whether the measures taken bring the desired success.