Iteration #2: Simulated Selves - In Conversation
Our subsequent iteration aimed to create a more conversational display in which full sculptures of the artists are rendered in a seated, more relaxed position. We felt this would invite viewers to sit with, or near, the sculptures. Rather than using projection mapping, we shifted the AI avatars to a screen-based display, which allows viewers to see the digital doubles clearly in a daylight environment.
The sculptural design for this version was co-created with AI image generators, including Craiyon, DALL-E, and Stable Diffusion, using text-to-image prompts describing sculptures and mannequins of seated figures in conversation with one another. The resulting outputs revealed a tendency of the then-current AI generators to create multi-limbed humanoids in which hands and other extremities are partial, disjointed, and/or merged. While the technology has improved dramatically over the past six months, limb glitches still occur at times, with figures generated with an extra arm or leg.
Mockup of sculptures with screen-based displays
For this design, we revisited the idea of including interaction via a telephone interface. Rather than providing individual phones for each sculpture, we decided that a single central phone would be suitable as an interface. To accommodate a short development timespan, we also opted for a simpler interactive question/answer model in which participants could ask questions only on topics related to the focus of the project.
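One way to picture the constrained question/answer model described above is as a simple topic filter: a caller's question is matched against a small set of on-topic keywords, and only questions that hit a known topic receive an answer. This is a hypothetical sketch, not the project's actual implementation; the topics, keywords, and canned answers are illustrative placeholders.

```python
# Hypothetical sketch of a topic-restricted question/answer model:
# questions mentioning a known keyword get a canned answer, anything
# else gets a polite off-topic reply. All content here is placeholder.

TOPIC_ANSWERS = {
    "sculpture": "The seated figures were co-designed with AI image generators.",
    "avatar": "The digital doubles are shown on screen-based displays.",
    "phone": "A single central phone serves as the interface for all figures.",
}

OFF_TOPIC_REPLY = "I can only answer questions about the project itself."

def answer_question(question: str) -> str:
    """Return a canned answer if the question mentions a known topic."""
    q = question.lower()
    for keyword, answer in TOPIC_ANSWERS.items():
        if keyword in q:
            return answer
    return OFF_TOPIC_REPLY
```

A model like this keeps the interaction predictable within a short development window, since every possible response is authored in advance rather than generated live.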
Phone interface mockup.
See further iterations: