AI-Augmented Design Workflows / workshop

Below are some of the approaches, ideas, and tools we used.

enjoy.

Why am I talking about this, now?

Because exploring creative ways to use new technologies is my job :)

teaching machines & machines learning

The AI space as a pixel soup to hunter-gather your design building blocks, with intent

The design workflow is already filled with AI tools and processes

GPU ❤️‍🔥 + DATA (extractivism) ⛏️ = new AI models

⏳ research -> concept -> application

micro-Apps for every workflow

new tools = new styles

AI as a "mainstreamification" process

AI raises the standards of design

…and this is good, because more people participate, and the world is nicer (design-wise only).

"design" has always been about prompting

Mediocrity is automated

You'll need to do the hard 10% of the job.

Designer + AI > Designer


Worldbuilding as a framework for designing spaces, objects and fashion

read more -> Worldbuilding

using AI with intent

hack the input/output of your AI systems


Tools:

We used a collaborative space to interact with text and image models

try it -> fermat ws

We also had 15 computers running Stable Diffusion locally using SD-web-ui, installed with several models and extensions.
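For reference, a rough equivalent of that local text-to-image step, sketched with the Hugging Face diffusers library rather than the web-ui itself; the model id, prompt, and settings are illustrative assumptions, not the exact workshop setup:

```python
# Minimal local text-to-image sketch with Hugging Face diffusers.
# Assumes a CUDA GPU; model id, prompt, and settings are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "isometric concept sketch of a modular chair, studio lighting",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("chair_concept.png")
```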

And a bunch of Hugging Face Spaces set up with large GPUs to run fast, smooth experiments (minimal sketches of each step follow this list):

extracting depth from an image

using physical mockups + photos to guide the image generation process

conversational image editing
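A minimal sketch of the depth-extraction step, using a transformers depth-estimation pipeline; the model id and file names are illustrative assumptions, not the exact Space we ran:

```python
# Depth extraction from a single image via a transformers pipeline.
# Model id and input file are illustrative assumptions.
from transformers import pipeline
from PIL import Image

depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

photo = Image.open("mockup_photo.jpg")
result = depth_estimator(photo)   # dict with "predicted_depth" (tensor) and "depth" (PIL image)
result["depth"].save("mockup_depth.png")
```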
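A minimal sketch of the mockup-guided generation idea: start from a photo of the physical mockup and let the prompt restyle it via image-to-image. Model id, prompt, and strength are illustrative assumptions:

```python
# Image-to-image sketch: a photo of a physical mockup guides the generation.
# Model id, prompt, and strength are illustrative assumptions.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

mockup = Image.open("mockup_photo.jpg").convert("RGB").resize((768, 512))
render = pipe(
    "polished product render of a ceramic lamp, soft studio lighting",
    image=mockup,
    strength=0.6,        # how far the result may drift from the mockup photo
    guidance_scale=7.5,
).images[0]
render.save("lamp_render.png")
```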
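And a minimal sketch of conversational image editing, using an instruction-following pipeline such as InstructPix2Pix; again, the model id and the instruction are illustrative assumptions:

```python
# Conversational image editing sketch: apply a plain-language instruction to an image.
# Uses the InstructPix2Pix pipeline from diffusers; model id and prompt are illustrative.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from PIL import Image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = Image.open("chair_concept.png")
edited = pipe(
    "make the chair out of translucent orange acrylic",
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,   # how closely the edit should stick to the input image
).images[0]
edited.save("chair_concept_edited.png")
```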

We also explored workflow-specific tools such as:

And reviewed guides and helpers:

And also talked about what's coming:

in short: AI is good, and you too 🙌


Glossary & Links

Awesome Generative AI -> https://github.com/steven2358/awesome-generative-ai

Machine Learning for Art -> https://ml4a.net/

AI + Design Usecases -> https://aidesigntools.softr.app/

Diffusion Bias explorer -> https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer

Stable Diffusion is a deep learning, text-to-image model released in 2022. It is primarily used to generate detailed images conditioned on text descriptions, though it can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by a text prompt.[3]

Stable Diffusion is a latent diffusion model, a kind of deep generative neural network developed by the CompVis group at LMU Munich.[4] The model has been released by a collaboration of Stability AI, CompVis LMU, and Runway with support from EleutherAI and LAION.[5][1][6] In October 2022, Stability AI raised US$101 million in a round led by Lightspeed Venture Partners and Coatue Management.[7]

Stable Diffusion's code and model weights have been released publicly,[8] and it can run on most consumer hardware equipped with a modest GPU with at least 8 GB VRAM. This marked a departure from previous proprietary text-to-image models such as DALL-E and Midjourney which were accessible only via cloud services.[9][10]


🙋 Continue the conversation in this thread -> https://twitter.com/cunicode/status/1631386323519938574
