Google and Apple ramp up their investment in generative artificial intelligence, entering the field of music composition.
Google announced in a blog post on Wednesday that its Gemini AI assistant can now generate 30-second music tracks from text, photos, or videos that users upload, powered by Google DeepMind's latest Lyria 3 model. The feature can produce tracks with custom lyrics or purely instrumental audio, is available to users aged 18 and above, and supports multiple languages. Google said the feature will roll out first on the Gemini desktop version and reach the mobile app in the coming days. Google also noted that its popular image generation model, Nano Banana, will create custom cover art for the tracks, adding a visual touch when users share track links.

Meanwhile, Apple announced this week that users will soon be able to create playlists on Apple Music using artificial intelligence. The feature, called "Playlist Playground," uses Apple Intelligence to turn text prompts into playlists complete with cover art, a description, and 25 songs. It is included in the iOS 26.4 beta released on Monday and will roll out more widely this spring. Apple Music's new feature will compete with similar offerings from Spotify.