Ahh man, here we go...AI for animation! The labor reduction this technology offers is going to open the doors to some REALLY incredible animated movies in the future!
I got heavily into filming & editing in grade school...back in the '90s, all of the processes were so expensive & complicated that a lot of techniques were simply out of reach. MiniDV (digital tape for home) came out in 1995 & it was SUCH a chore to do editing at home on a computer, but at least you could do it! Now you can get things like the free Live Link Face app to do real-time motion-capture animation in the Unreal game engine.
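For the technically curious, the way this works: the iPhone app tracks your face with ARKit & streams a float per blendshape (jawOpen, eyeBlinkLeft, etc.) over UDP to Unreal using Epic's Live Link protocol. The real wire format is Epic's own binary layout, so treat the sketch below as purely illustrative...the JSON payload, field names & default port are my stand-in assumptions, NOT the actual protocol:

```python
# Hedged sketch only: Live Link Face actually speaks Epic's binary
# Live Link protocol over UDP. This stand-in pretends the payload is
# JSON just to show the *shape* of a mocap stream: one packet per
# frame, with a named float per ARKit blendshape.
import json
import socket

HOST, PORT = "0.0.0.0", 11111   # port is an assumption; set it in the app

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((HOST, PORT))
print(f"listening for face-capture frames on {HOST}:{PORT} ...")

while True:
    packet, addr = sock.recvfrom(4096)
    # HYPOTHETICAL payload -- the real app sends binary, not JSON.
    # e.g. {"subject": "iPhone", "blendshapes": {"jawOpen": 0.42, ...}}
    frame = json.loads(packet)
    jaw = frame["blendshapes"].get("jawOpen", 0.0)
    print(f"{frame['subject']}: jawOpen={jaw:.2f}")
```

The point is just how little plumbing real-time mocap needs now...one UDP packet per frame & you're driving a character rig.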
Combined with AI video-overlay generation, that can take earlier iterative jumps like deepfakes to the next level. Remember this video by Jordan Peele a few years ago?
https://youtu.be/cQ54GDm1eL0
So now you've got crazy stuff like Gen-1 from Runway. Scroll down their website & use the sliders to see examples of stylistic overlays on existing video footage.
This is where it starts getting scary...right now it's doing motion-video replacement using text prompts, but generative fill & spot editing are going to be the next tools released.
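Runway hasn't published how its video tools work internally, but the single-image version of generative fill (a.k.a. inpainting) is already open source...Runway itself published open inpainting & image models on Hugging Face. Here's a rough sketch using the diffusers library; the file names, prompt & settings are my own illustrative assumptions, & a real video tool would have to do something like this per frame with extra tricks for temporal consistency:

```python
# Hedged sketch: image "generative fill" (inpainting) with Hugging Face
# diffusers. White pixels in the mask get regenerated from the prompt;
# black pixels are kept. File names & settings are assumptions.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # Runway's open inpainting model
    torch_dtype=torch.float16,
).to("cuda")

frame = Image.open("frame.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))  # white = repaint

result = pipe(
    prompt="moody neon-lit alley at night, cinematic lighting",
    image=frame,
    mask_image=mask,
).images[0]
result.save("frame_filled.png")
```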
And you don't even have to do motion capture anymore...you can use untextured renders as the source material to apply design styles to the finished images & videos. The power this has for cinematic storytelling & the doors it will open to end users who don't have high-end budgets or resources is going to be AMAZING!
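To be clear, Gen-1 is a proper video model under the hood, so this is NOT its actual pipeline...but you can get a feel for the untextured-render-to-styled-frame idea with plain image-to-image diffusion, where the gray render supplies the composition & the prompt supplies the design style. Everything below (model, file names, settings) is my own illustrative setup:

```python
# Hedged sketch: restyling an untextured 3D render with image-to-image
# diffusion via Hugging Face diffusers. Settings are illustrative
# assumptions, not anyone's production pipeline.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

render = Image.open("untextured_render.png").convert("RGB").resize((512, 512))

styled = pipe(
    prompt="hand-painted watercolor fantasy village, soft morning light",
    image=render,
    strength=0.6,        # 0 = keep the render as-is, 1 = ignore it entirely
    guidance_scale=7.5,  # how strongly to follow the prompt
).images[0]
styled.save("styled_render.png")
```

The strength knob is the whole trick...turn it down & your render's structure survives, turn it up & the style takes over.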
Runway's Gen-2 takes that to the next level with video synthesis from text, images, and video clips (i.e. not just text prompts as the source anymore!). Scroll through the examples here:
Gen-2 by Runway: A multimodal AI system that can generate novel videos with text, images or video clips. (research.runwayml.com)
Because computers excel at math, brush-based digital artwork (ex. vector art from Illustrator, or painted strokes from Procreate) can be animated with EXTREMELY good results:
September 4, 2023 - Runway Gen-2 Testing | Image and text description (Experiment No. 32: Midjourney 5.2 + Runway Gen-2) (youtu.be)
Plus with Gen-2, now you can do things like take an actual photo & not only animate it, but also stylize it & use generative fill to change the background, lighting, etc. (check out the animated examples in the video linked above)
This is all in the VERY early stages of development, so the quality is low, but soon it will be impossible to tell what's real in videos anymore, which is REALLY scary! What gets even CRAZIER is that, thanks to the mobile supercomputers in our pockets, there's now a smartphone app for doing stylistic overlays in real-time! Imagine walking around with a VR headset outside & just living in a completely different world, Ready Player One-style!
Introducing: Runway for Mobile - the magic of Gen-1, now on your phone. Download the new Runway iOS app: https://apple.co/41zhB5B (youtu.be)
This is going to open the doors to a WORLD of amazing films & animated movies for people who don't have budgets in the millions or vast arrays of supercomputer rendering farms available!