Adobe has expanded its Firefly creative AI suite with new tools and models spanning audio, video, and imaging, deepening its push to make generative AI a core part of professional creative workflows.
Announced at the company’s annual Adobe MAX conference, the updates introduce Generate Soundtrack for creating fully licensed instrumental tracks, Generate Speech for lifelike multilingual voiceovers, and a new timeline-based video editor for producing and sequencing clips with AI assistance.
The new tools allow creators to edit directly within Firefly, mixing generative and uploaded content through both visual and text-based interfaces.
Adobe also unveiled Firefly Image Model 5, which produces photorealistic 4-megapixel images and supports natural-language editing through a new Prompt to Edit feature. The company said the model delivers improved lighting, texture, and anatomical accuracy, as well as more coherent multi-layered compositions.
In addition, Firefly now integrates partner models from ElevenLabs, Google, OpenAI, Topaz Labs, Luma AI, and Runway, making it one of the broadest creative AI ecosystems available. Adobe is also expanding access to Firefly Custom Models, which let creators train private, personalised models to generate assets in their own style.
A new experimental tool, Project Moonlight, introduces a conversational AI assistant that can draw insights from creators’ social channels and projects to help them move from idea to finished content.
Most new features, including Firefly Image Model 5, Generate Soundtrack, and Generate Speech, are available in public beta, while the Firefly video editor, Custom Models, and Project Moonlight remain in private beta.
