By Ryan Daws | TechForge Media
Ryan is a senior editor at TechForge Media with over a decade of experience covering the latest technology and interviewing leading industry figures. He can often be sighted at tech conferences with a strong coffee in one hand and a laptop in the other. If it's geeky, he’s probably into it. Find him on Twitter: @Gadget_Ry
New optimisations have enabled M2-based Mac devices to generate Stable Diffusion images in under 18 seconds.
Stable Diffusion is an AI image generator similar to DALL-E. Users can input a text prompt and the AI will produce an image that’s often far better than what most of us mere mortals can do.
Apple is a supporter of the Stable Diffusion project and posted an update on its machine learning blog this week about how it’s improving the performance on Macs.
“Beyond image generation from text prompts, developers are also discovering other creative uses for Stable Diffusion, such as image editing, in-painting, out-painting, super-resolution, style transfer and even color palette generation,” wrote Apple.
“With the growing number of applications of Stable Diffusion, ensuring that developers can leverage this technology effectively is important for creating apps that creatives everywhere will be able to use.”
Apple highlights that there are many reasons to run Stable Diffusion locally rather than on a server, including:
- Safeguarding privacy — User data remains on-device.
- More flexibility — Users don’t require an internet connection.
- Reduced cost — Users can eliminate server-related costs.
Apple says that it has released optimisations to Core ML for Stable Diffusion in macOS 13.1 and iOS 16.2, along with code to help get started on M-based devices.
Following the optimisations, a baseline M2 MacBook Air can generate an image with a Stable Diffusion model using 50 inference steps in under 18 seconds. Arguably more impressively, even an M1 iPad Pro can do the job in under 30 seconds.
The release also features a Python package for converting Stable Diffusion models from PyTorch to Core ML using diffusers and coremltools, as well as a Swift package to deploy the models.
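Based on the README of Apple's ml-stable-diffusion repository, the workflow is driven by two Python CLI modules: one to convert the model components to Core ML and one to run inference with the converted files. The flags, paths, and seed below are illustrative and may change between repo versions, so treat this as a sketch rather than the definitive invocation:

```shell
# Convert each Stable Diffusion component from PyTorch to Core ML
# (run inside a clone of apple/ml-stable-diffusion with its
# dependencies installed; output directory is illustrative)
python -m python_coreml_stable_diffusion.torch2coreml \
    --convert-unet --convert-text-encoder \
    --convert-vae-decoder --convert-safety-checker \
    -o ./models

# Generate an image from a text prompt using the converted models
python -m python_coreml_stable_diffusion.pipeline \
    --prompt "a photo of an astronaut riding a horse on mars" \
    -i ./models -o ./output \
    --compute-unit ALL --seed 93
```

The `--compute-unit` option selects which hardware Core ML schedules the model on (CPU, GPU, and/or the Neural Engine), which is where the M-series speed-ups come from.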
Detailed instructions on benchmarking and deployment are available in the Core ML Stable Diffusion repository.
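On the deployment side, the Swift package exposes a `StableDiffusionPipeline` type that loads the converted Core ML resources and generates images from a prompt. The sketch below assumes the API shape described in the repository's documentation; the resource path and seed are placeholders, and the exact signatures should be checked against the current repo:

```swift
import StableDiffusion
import CoreML

// Load the converted Core ML resources from a local directory
// (path is illustrative)
let resourceURL = URL(fileURLWithPath: "./models")
let pipeline = try StableDiffusionPipeline(resourcesAt: resourceURL)

// Generate an image from a text prompt; seed is illustrative
let images = try pipeline.generateImages(
    prompt: "a photo of an astronaut riding a horse on mars",
    seed: 93
)
```

Because the pipeline runs entirely on-device, this is the same code path an app would ship with to get the privacy and offline benefits Apple lists above.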
(Image Credit: Apple)