abstract
In 2018, we conducted a within-subjects experiment to investigate how individual quality characteristics of vision videos relate to the overall quality of vision videos from a developer's point of view. 139 undergraduate students participated in the experiment; at the time of the experiment, they had the role of a developer and actively developed software in projects with real customers. Due to this experience, the subjects can be considered as developers. The subjects were put in the situation of joining an ongoing project in their familiar role as a developer. In this context, we showed the subjects the 8 vision videos (one after the other), each with the intent of sharing the vision of the particular project. The subjects subjectively assessed the overall quality and 15 individual quality characteristics of the 8 vision videos by completing an assessment form for each video. After data cleaning, the final dataset contains 952 complete assessments of 119 subjects for the 8 vision videos. Each entry of the dataset consists of:
- Entry ID: The ID of the entry in the dataset.
- Subject ID: The ID of the subject.
- Video ID: The ID of the assessed vision video.
- Overall quality: The subject's assessment of the overall quality of the vision video.
- Image quality: The subject's assessment of the visual quality of the image of the vision video.
- Sound quality: The subject's assessment of the auditory quality of the sound of the vision video.
- Video length [s]: The duration of the vision video in seconds.
- Focus: The subject's assessment of the compact representation of the vision presented in the vision video.
- Plot: The subject's assessment of the structured presentation of the content of the vision video.
- Prior knowledge: The subject's assessment of the prior knowledge presupposed for understanding the content of the vision video.
- Clarity: The subject's assessment of the intelligibility of the aspired goals of the vision presented in the vision video.
- Essence: The subject's assessment of the amount of important core elements, e.g., persons, locations, and entities, that are to be presented in the vision video.
- Clutter: The subject's assessment of the amount of disrupting and distracting elements, e.g., background actions or noises, that can be inadvertently recorded in the vision video.
- Completeness: The subject's assessment of the coverage of the three contents of a vision presented in the vision video, i.e., the considered problem, the proposed solution, and the improvement of the problem due to the solution.
- Pleasure: The subject's assessment of the enjoyment of watching the vision video.
- Intention: The subject's assessment of how well the vision video suits the intended purpose of the given scenario.
- Sense of responsibility: The subject's assessment of the compliance of the vision video with legal regulations.
- Support: The subject's assessment of his or her level of acceptance of the vision presented in the vision video.
- Stability: The subject's assessment of the consistency of the vision presented in the vision video.
This dataset includes the following files:
- "Dataset_Assessments.xlsx" contains the 952 anonymized assessments of the 119 subjects for the 8 vision videos.
- "Assessment_form.docx" contains the assessment form which was used to assess each of the 8 vision videos.
- "Assessment_form.pdf" contains the same assessment form as a PDF.

The 8 vision videos are not included in this dataset since we do not have the explicit consent of the actors to distribute the videos. This experiment was designed, conducted, and analyzed by Oliver Karras (@KarrasOliver), Kurt Schneider, and Samuel A. Fricker (@samuelfricker).
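For convenience, the sketch below shows one way the assessments could be loaded and summarized in Python with pandas. It assumes that the data is stored on the first sheet of "Dataset_Assessments.xlsx" and that the column names match the field names listed above (e.g., "Subject ID", "Video ID", "Overall quality"); the actual spreadsheet layout may differ.

```python
# Minimal sketch for loading and summarizing the assessment data.
# Assumptions (not confirmed by this description):
#   - the assessments are stored on the first sheet of "Dataset_Assessments.xlsx"
#   - the column names match the field names listed above, e.g.,
#     "Subject ID", "Video ID", and "Overall quality"
import pandas as pd

# Load the 952 assessments (119 subjects x 8 vision videos).
assessments = pd.read_excel("Dataset_Assessments.xlsx")

# Basic sanity checks against the figures stated in the abstract.
print(len(assessments))                      # expected: 952 complete assessments
print(assessments["Subject ID"].nunique())   # expected: 119 subjects
print(assessments["Video ID"].nunique())     # expected: 8 vision videos

# Example analysis: mean overall quality rating per vision video.
print(assessments.groupby("Video ID")["Overall quality"].mean())
```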