Experimenting with AI

Earth-73
Using a GPU-based, alpha-masked diffusion tool called Stable Warpfusion, I animated a video shot with a GoPro strapped to my bike. I prompted the tool with cyberpunk-inspired words like "dystopian," "neon lighting," and "futuristic." Then, using DaVinci Resolve, I overlaid the original and animated videos and added a glitch effect to create a dimension-shifting experience, heavily inspired by the film "Spider-Man: Across the Spider-Verse."
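
Stable Warpfusion itself runs as a notebook, but its core idea, re-rendering each video frame through a prompt-guided img2img diffusion pass, can be sketched with the Hugging Face diffusers library. This is a minimal illustration, not my actual pipeline: the checkpoint name, folder paths, resolution, and strength value are all assumptions.

```python
# Sketch: per-frame img2img stylization, the basic idea behind
# Warpfusion-style video restyling. Assumes the `diffusers` library
# and an illustrative Stable Diffusion 1.5 checkpoint.
import glob
import os

import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint
    torch_dtype=torch.float16,
).to("cuda")

PROMPT = "dystopian cyberpunk city, neon lighting, futuristic"
os.makedirs("styled", exist_ok=True)

for i, path in enumerate(sorted(glob.glob("frames/*.png"))):
    frame = Image.open(path).convert("RGB").resize((768, 448))
    # strength controls how far the diffusion pass departs from the
    # original frame; lower values preserve more of the GoPro footage
    styled = pipe(prompt=PROMPT, image=frame, strength=0.45).images[0]
    styled.save(f"styled/{i:05d}.png")
```

A naive loop like this stylizes each frame independently, so the output flickers; Warpfusion gets its smoother look by warping the previous stylized frame along the video's optical flow and masking it into the next one, which this sketch omits.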

Stable Diffusion
Using RunDiffusion, I generated vivid imagery from text prompts alone. Unlike Stable Warpfusion, which requires a reference video, here the video is created purely from the prompt. Throughout these videos, I experimented with camera movements in 3D space, enabling complex and dynamic motion, and I tried different prompts to observe how they affected the resulting images, exploring the tool's creative potential.
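
A common way to build prompt-only animation (popularized by Deforum-style tools) is a feedback loop: generate a frame, warp it as if a camera had moved, then re-diffuse the warped result as the next frame's starting image. The sketch below shows the simplest 2D version of that loop; the checkpoint, zoom rate, frame count, and strength are illustrative assumptions, not RunDiffusion's actual settings.

```python
# Sketch: prompt-only animation via a generate -> warp -> re-diffuse
# feedback loop. All parameter values here are illustrative.
import os

import cv2
import numpy as np
import torch
from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionPipeline
from PIL import Image

txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# reuse the same weights for the img2img passes
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components).to("cuda")

PROMPT = "futuristic dystopian cityscape, neon lighting"
os.makedirs("anim", exist_ok=True)
frame = txt2img(PROMPT, height=512, width=512).images[0]  # first frame

for i in range(120):
    frame.save(f"anim/{i:05d}.png")
    # simulate a camera pushing forward with a slow roll:
    # zoom and rotate slightly each frame
    arr = np.array(frame)
    move = cv2.getRotationMatrix2D((256, 256), angle=0.3, scale=1.02)
    arr = cv2.warpAffine(arr, move, (512, 512), borderMode=cv2.BORDER_REFLECT)
    # re-diffuse the warped frame so new detail is invented at the edges
    frame = img2img(prompt=PROMPT, image=Image.fromarray(arr),
                    strength=0.35).images[0]
```

True 3D camera paths typically add a monocular depth estimate so each frame can be reprojected in perspective; the flat zoom-and-rotate above is the 2D version of the same idea.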

Ghibli in AI?
As a kid, I grew up watching Ghibli films and was always captivated by their detailed and beautiful animation. Ghibli's distinctive visual quality comes from its use of cel animation, where each frame is hand-drawn. With this in mind, I wanted to see how well AI could replicate Ghibli's unique style. While the AI-generated videos show some misinterpretations of objects, they demonstrate that AI is getting closer to mimicking the overall style. It may be only a few years before AI can produce convincing Ghibli-style animation.