Faux Fish

After initial generative video testing, we’ve decided to move ahead with Runwayml to create the AI fish tank for the installation. What we like about Runwayml is that the initial 4-second render is pretty good, but the footage grows more and more unsettling each time the simulation is extended.

Runwayml AI video still frames showing slippage into a more dream-like visualisation.

Unfortunately, the maximum output is around 16 seconds, generated in 4-second increments. To work around this limitation, we extended the output to the maximum length and then used the last video frame to seed a new generation. This works fairly well, although the visualisation also loses colour with each consecutive render. This means we will need to assemble the clips in a video editor and then adjust the colour to produce a more seamless and visually appealing output.
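As a rough sketch of this stitching workflow, the frame extraction and clip assembly could be scripted with ffmpeg. The snippet below only builds the ffmpeg command lines (the file names, saturation value, and helper functions are our own hypothetical choices, not anything Runwayml provides); ffmpeg itself would need to be installed to actually run them.

```python
# Hypothetical helper functions sketching the clip-chaining workflow.
# ffmpeg must be installed separately to execute these commands.

def last_frame_cmd(clip, frame_out):
    """ffmpeg command to grab the final frame of a clip,
    which then seeds the next Runwayml generation."""
    return ["ffmpeg", "-sseof", "-1", "-i", clip,
            "-update", "1", "-q:v", "1", frame_out]

def concat_cmd(list_file, out, saturation=1.2):
    """ffmpeg command to join the clips listed in list_file and
    boost saturation to offset the colour loss between renders."""
    return ["ffmpeg", "-f", "concat", "-safe", "0", "-i", list_file,
            "-vf", f"eq=saturation={saturation}", out]

# Hypothetical clip names; one entry per 16-second generation.
clips = ["gen_01.mp4", "gen_02.mp4", "gen_03.mp4"]
list_file_text = "\n".join(f"file '{c}'" for c in clips)

print(last_frame_cmd("gen_01.mp4", "seed_02.jpg"))
print(concat_cmd("clips.txt", "fish_tank.mp4"))
```

In practice we do the final colour pass by eye in a video editor, but a fixed saturation boost like the one above gives a consistent starting point across clips.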

At the current rate of AI development (and with more training data), I am sure video output will catch up to text-to-image applications, and it will become increasingly difficult to determine whether a video or image is AI generated. We may also start to regard some of the slight quirks of AI as normal visual experiences.
