The app world never stays still for long.
The second half of December brought its share of fresh updates and clever improvements, and we’ve been keeping an eye on them.
Time to recap what’s been happening in the second half of the month.
Contents
- Zoom brings its AI assistant to the web and opens it to free users
- Adobe Firefly adds prompt-based video editing and expands its model lineup
- Google brings Opal vibe coding straight into Gemini
- Google makes Gemini 3 Flash the default model in the Gemini app
- Meta is building new image and video AI models for a 2026 launch
Zoom brings its AI assistant to the web and opens it to free users
Zoom has rolled out its AI Companion to the web as part of AI Companion 3.0, and for the first time it is available to free users. Free users can now get meeting summaries, action items, and AI notes, and ask the assistant questions during meetings, all within a monthly usage limit.
The web version also adds a new surface with suggested prompts that show what the assistant can do, plus deeper productivity features like daily recap reports, follow-up task creation, and draft emails.
Beyond meetings, the AI Companion can help draft and edit documents based on meeting context, then move them into Zoom Docs for collaboration or export them to formats like PDF or Word.
👉 Learn more about Zoom AI Companion 3.0 and its new web experience.
Adobe Firefly adds prompt-based video editing and expands its model lineup
Adobe is upgrading Firefly with a new video editor that lets you tweak videos using text prompts instead of regenerating the whole clip. You can now adjust elements like colors, camera moves, and specific scenes, all from a timeline-style editor.
Firefly is also opening the door to more models. Users can edit videos with Runway’s Aleph, upscale clips to 1080p or 4K with Topaz Astra, and generate images with Black Forest Labs’ FLUX.2.
👉 Learn more about the latest Adobe Firefly updates and video editing features.
Google brings Opal vibe coding straight into Gemini
Google is integrating its vibe-coding tool Opal into the Gemini web app, making it easier to build small AI-powered apps without writing code. From inside Gemini, users can now create custom mini apps called Gems just by describing what they want in plain language.
Opal includes a visual editor that turns prompts into clear steps, which users can rearrange and connect to shape how their app works. If you want more control, you can jump into Opal’s advanced editor on the web to keep refining your apps or reuse them later.
👉 Learn more about Opal and how it works inside Gemini.
Google makes Gemini 3 Flash the default model in the Gemini app
Google has launched Gemini 3 Flash, a faster and more efficient model that is now the default in the Gemini app and AI Mode in Search.
The new model delivers stronger reasoning and multimodal skills, so it works better with text, images, video, and audio. Users can get more visual answers, analyze media, or even create quick app prototypes directly inside Gemini.
👉 Learn more about Gemini 3 Flash.
Meta is building new image and video AI models for a 2026 launch
Meta is working on a new generation of AI models set to arrive in the first half of 2026. The roadmap includes an image and video model internally called “Mango” and a text-based model known as “Avocado,” both developed inside Meta’s new superintelligence lab.
The goal is ambitious. Meta wants its next text model to be much stronger at coding, while its visual models aim to better understand the world, reason about what they see, and take action without needing exhaustive training.
👉 Learn more about Meta’s next big AI models.
Plenty of updates to close out December. Stay tuned to our blog for the next round of highlights in the new year.

