A London-based artist named Matt DesLauriers has developed a tool to create color palettes from any text prompt, allowing someone to type in “beautiful sunset” and get a series of colors that match a typical sunset scene. Or you can go more abstract and find colors to match “a dreary and rainy Tuesday.” To achieve the effect, DesLauriers uses Stable Diffusion, an open-source image synthesis model, to generate an image that matches the text prompt. A JavaScript GIF encoder called gifenc then extracts the palette information by analyzing the image and quantizing its colors down to a small representative set.
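In rough outline, the second half of that pipeline could look something like the sketch below. This is not DesLauriers’ actual code; it assumes an image already generated by Stable Diffusion (the file name sunset.png is a placeholder), uses the sharp library to decode it to raw RGBA pixels, and then calls gifenc’s quantize() to boil those pixels down to a handful of colors:

```js
// Minimal sketch: turn a generated image into a 5-color palette.
// "sunset.png" stands in for whatever Stable Diffusion produced.
import sharp from "sharp";
import { quantize } from "gifenc";

// Decode the image to a flat RGBA byte buffer (quantize expects RGBA).
const { data } = await sharp("sunset.png")
  .ensureAlpha()
  .raw()
  .toBuffer({ resolveWithObject: true });

// Reduce the image's thousands of distinct colors to 5 representatives.
const palette = quantize(new Uint8Array(data), 5);

// Print each palette color as a hex string, e.g. "#f4a261".
for (const [r, g, b] of palette) {
  console.log("#" + [r, g, b].map((c) => c.toString(16).padStart(2, "0")).join(""));
}
```

The second argument to quantize() controls the palette size; a small number like 5 keeps the output readable as a swatch rather than a full image palette.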
DesLauriers has posted his code on GitHub; it requires a local installation of Stable Diffusion and Node.js. At this point, it’s an advanced prototype that takes some technical skill to set up, but it’s also a noteworthy example of the unexpected graphical innovations that can emerge when powerful image synthesis models are open-sourced.

Stable Diffusion, which became open source on August 22, creates images using a neural network trained on tens of millions of images pulled from the Internet. Its ability to draw on a wide range of visual influences translates well to extracting color palette information.

Other examples of palettes DesLauriers has shared include “Tokyo Neon,” which offers the colors of a vibrant Japanese cityscape; “Vivid Coral,” which echoes a coral reef with deep pinks and blues; and “Green Garden, Blue Sky,” which offers a rich pastoral scene. Earlier today, DesLauriers took to Twitter to demonstrate how different quantization techniques (ways of reducing the vast number of colors in an image to just a few that represent it) can produce different color palettes.
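To make that concrete: gifenc’s quantize() accepts an options object alongside the color count, and (per gifenc’s documentation) its format option changes the color depth used while bucketing pixels, which can shift which colors end up in the palette. A small, hypothetical comparison, reusing an RGBA buffer decoded as in the earlier sketch:

```js
import { quantize } from "gifenc";

// Format a palette entry as a hex color string (alpha, if present, is ignored).
const toHex = ([r, g, b]) =>
  "#" + [r, g, b].map((c) => c.toString(16).padStart(2, "0")).join("");

// Quantize the same pixels under two different color-depth settings and
// print both palettes side by side for comparison.
function comparePalettes(rgbaPixels) {
  for (const format of ["rgb565", "rgb444"]) {
    const palette = quantize(rgbaPixels, 5, { format });
    console.log(format.padEnd(7), palette.map(toHex).join(" "));
  }
}
```

Coarser formats merge near-identical shades earlier in the process, so the same source image can yield noticeably different five-color palettes.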