Experience evaluations for human–computer co-creative processes – planning and conducting an evaluation in practice
In human–computer co-creativity, humans and creative computational algorithms create together. Too often, only the creative algorithms and their outcomes are evaluated when studying these co-creative processes, leaving the human participants with little attention. This paper presents a case study emphasising the human experiences when evaluating the use of a co-creative poetry writing system called the Poetry Machine...
| Main Authors: | Anna Kantosalo, Sirpa Riihiaho |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Taylor & Francis Group, 2019-01-01 |
| Series: | Connection Science |
| Subjects: | computational creativity; human–computer co-creativity; user experience; evaluation metrics; child–computer interaction |
| Online Access: | http://dx.doi.org/10.1080/09540091.2018.1432566 |
| _version_ | 1850120605817372672 |
|---|---|
| author | Anna Kantosalo; Sirpa Riihiaho |
| author_facet | Anna Kantosalo; Sirpa Riihiaho |
| author_sort | Anna Kantosalo |
| collection | DOAJ |
| description | In human–computer co-creativity, humans and creative computational algorithms create together. Too often, only the creative algorithms and their outcomes are evaluated when studying these co-creative processes, leaving the human participants with little attention. This paper presents a case study emphasising the human experiences when evaluating the use of a co-creative poetry writing system called the Poetry Machine. The co-creative process was evaluated using seven metrics: Fun, Enjoyment, Expressiveness, Outcome satisfaction, Collaboration, Ease of writing, and Ownership. The metrics were studied in a comparative setting using three co-creation processes: a human–computer, a human–human, and a human–human–computer co-creation process. Twelve pupils aged 10–11 attended the studies in six pairs, trying out all the alternative writing processes. The study methods included observation in paired-user testing, questionnaires, and interviews. The observations were complemented with analyses of the video recordings of the evaluation sessions. According to the statistical analyses, Collaboration was strongest in human–human–computer co-creation and weakest in human–computer co-creation. Ownership was just the opposite: weakest in human–human–computer co-creation and strongest in human–computer co-creation. The other metrics did not produce statistically significant results. In addition to the results, this paper presents the lessons learned in the evaluations with children using the selected methods. |
| format | Article |
| id | doaj-art-607f7d717a34422d8f88002b4fcf1e39 |
| institution | OA Journals |
| issn | 0954-0091 1360-0494 |
| language | English |
| publishDate | 2019-01-01 |
| publisher | Taylor & Francis Group |
| record_format | Article |
| series | Connection Science |
| spelling | doaj-art-607f7d717a34422d8f88002b4fcf1e39 2025-08-20T02:35:19Z eng Taylor & Francis Group Connection Science 0954-0091 1360-0494 2019-01-01 vol. 31, no. 1, pp. 60–81 10.1080/09540091.2018.1432566 1432566 Experience evaluations for human–computer co-creative processes – planning and conducting an evaluation in practice Anna Kantosalo (University of Helsinki) Sirpa Riihiaho (University of Helsinki) In human–computer co-creativity, humans and creative computational algorithms create together. Too often, only the creative algorithms and their outcomes are evaluated when studying these co-creative processes, leaving the human participants with little attention. This paper presents a case study emphasising the human experiences when evaluating the use of a co-creative poetry writing system called the Poetry Machine. The co-creative process was evaluated using seven metrics: Fun, Enjoyment, Expressiveness, Outcome satisfaction, Collaboration, Ease of writing, and Ownership. The metrics were studied in a comparative setting using three co-creation processes: a human–computer, a human–human, and a human–human–computer co-creation process. Twelve pupils aged 10–11 attended the studies in six pairs, trying out all the alternative writing processes. The study methods included observation in paired-user testing, questionnaires, and interviews. The observations were complemented with analyses of the video recordings of the evaluation sessions. According to the statistical analyses, Collaboration was strongest in human–human–computer co-creation and weakest in human–computer co-creation. Ownership was just the opposite: weakest in human–human–computer co-creation and strongest in human–computer co-creation. The other metrics did not produce statistically significant results. In addition to the results, this paper presents the lessons learned in the evaluations with children using the selected methods. http://dx.doi.org/10.1080/09540091.2018.1432566 computational creativity; human–computer co-creativity; user experience; evaluation metrics; child–computer interaction |
| spellingShingle | Anna Kantosalo; Sirpa Riihiaho; Experience evaluations for human–computer co-creative processes – planning and conducting an evaluation in practice; Connection Science; computational creativity; human–computer co-creativity; user experience; evaluation metrics; child–computer interaction |
| title | Experience evaluations for human–computer co-creative processes – planning and conducting an evaluation in practice |
| title_full | Experience evaluations for human–computer co-creative processes – planning and conducting an evaluation in practice |
| title_fullStr | Experience evaluations for human–computer co-creative processes – planning and conducting an evaluation in practice |
| title_full_unstemmed | Experience evaluations for human–computer co-creative processes – planning and conducting an evaluation in practice |
| title_short | Experience evaluations for human–computer co-creative processes – planning and conducting an evaluation in practice |
| title_sort | experience evaluations for human computer co creative processes planning and conducting an evaluation in practice |
| topic | computational creativity; human–computer co-creativity; user experience; evaluation metrics; child–computer interaction |
| url | http://dx.doi.org/10.1080/09540091.2018.1432566 |
| work_keys_str_mv | AT annakantosalo experienceevaluationsforhumancomputercocreativeprocessesplanningandconductinganevaluationinpractice AT sirpariihiaho experienceevaluationsforhumancomputercocreativeprocessesplanningandconductinganevaluationinpractice |