Luma, a company specializing in video and 3D model creation using artificial intelligence, has launched a new neural network, Ray3 Modify.
The tool allows for the modification of existing videos using reference images of characters while preserving the original performance.
Users can specify the start and end frames for the model to generate intermediate content.
The company stated that Ray3 Modify addresses the challenge of preserving human characteristics when editing footage or generating effects. The model follows the original video more faithfully, preserving movements, timing, gaze direction, and emotional expression, which allows studios to use live-actor performances in creative clips.
With the new tool, users can supply a character reference to transform a video and alter a person’s appearance, while costume details, likeness, and identity are preserved throughout the shot.
“Generative video models are incredibly expressive, but they are difficult to control. Ray3 Modify combines richness and reality, giving creative professionals full control. They can film performances on camera and then immediately alter them: transform the location, costumes, or even ‘reshoot’ a scene using AI without physical presence,” said Luma AI co-founder and CEO Amit Jain.
The new model is available on the Dream Machine platform. Its release coincided with a $900 million funding round led by the Saudi firm Humain, with participation from existing investors including a16z, Amplify Partners, and Matrix Partners.
The startup plans to build a 2 GW AI cluster in Saudi Arabia in collaboration with Humain.
In May, American tech firms signed agreements with a company from an Arab country aimed at significantly developing the region’s artificial intelligence sector.
In December, AI startup Runway released a new video model, Gen 4.5, which outperformed comparable solutions from competitors in independent testing.
