Category:
Speculative Design
Author:
Pavel Danyi
Read:
11 mins
Location:
Prague
Date:
July 21, 2025
Lenka, what led you to work with synthetic media and artificial intelligence?
My professional background lies in critical design. I've always focused on projects exploring social phenomena that left people disoriented or unable to grasp the issues at hand. Back then, these were primarily questions of visual communication and transparency, such as visual smog in cities or so-called "information bubbles" on the internet. I delved into how the constant online environment affects our ability to visually perceive information and how we become susceptible to manipulation.
Then, in 2018, deepfakes emerged, directly tying into the question of visual transparency. I realized that people would struggle to understand and properly assess what is reality and what is fiction. This realization led me to dive deeper into the topic and explore it more thoroughly. I discovered that striving for absolute visual transparency is a hopeless and futile endeavor. It became crucial for me to start informing the public about the existence of tools for creating deepfakes. Creating synthetic content has a much broader scope, which is why it's important not only to study these technologies but also to apply them in practice, bridging theoretical knowledge with practical skills. That's why I began organizing workshops, and alongside lectures on artificial intelligence, I started designing various tools to help people better navigate what can be "faked" using AI. That formed the foundation of my focus in this area.
When you mention tools, I noticed that you use special cards in your workshops. Could you describe exactly how you use them and how they become part of the overall workshop concept?
Originally, the workshop was conceived directly around these cards, but gradually I started using them in all my workshops.
Their systematic arrangement into color-coded categories helps participants better and more quickly understand the possibilities of AI and machine learning. This setup allows for easy differentiation between models for creating videos, images, text, and audio, or models that serve as parts of a larger whole for post-production of existing content. In the last year and a half, developments toward multimodal models have made traditional categorization less precise. Nevertheless, the cards remain a valuable source of information, containing names of specific models and links to academic papers or GitHub pages if the models are open source.
Currently, I'm working on a complete overhaul of the card system to adapt the entire concept and its categories to current AI trends. It's quite fun design work; giving it a coherent structure is like solving a logical puzzle.
How have technologies like deepfakes, commonly perceived as threats, influenced you in your professional life and in your perception of the world around us?
Deepfakes no longer scare me, because I've realized that fighting against them is pointless. It's concerning that even our parents—and perhaps my generation—can't reliably detect a well-made deepfake, but that's precisely its purpose. Just a few years ago, it was possible to spot minor imperfections in deepfake videos—small artifacts that more experienced viewers might notice, such as blurred pixels or inconsistencies in the image. As the technology evolves, however, these generative AI tools are becoming so realistic that it's almost impossible to distinguish their output from real footage. That's why it's crucial to raise awareness about AI and teach people that even a recording can be manipulated. History has shown that every new technology brings a period of confusion and disinformation, just as when photography emerged and people began manipulating images to create seemingly supernatural phenomena.
In today's world, where the political scene and waves of disinformation influence the online environment, I've become increasingly skeptical. For less important news, I don't dwell too long, but for key information, I take the time for thorough verification. I don't put trust in just one video or source; I actively seek out additional information to confirm the content's authenticity. Often, people slip into labeling true news as "fake news" or "deepfake," which I believe is a bigger problem. This mindset has made me more cautious and critical toward the information I accept.
How, in your opinion, has the world of design changed thanks to technologies like artificial intelligence?
I've noticed that many people are using AI skillfully to streamline their work, which I think is great. The key, however, is how individuals integrate these tools into their workflows. Even though they're not perfect, features like Generative Fill in Photoshop or text-to-vector in Illustrator can significantly speed up certain tasks. Combined with ChatGPT and other text models, especially during brainstorming, they can substantially boost work efficiency. I've observed that designers often use Midjourney for creating visuals, which raises the question of how much these designs are subsequently edited or whether they're considered the final product. Relying solely on Midjourney outputs feels insufficient to me, because every good design should be backed by a well-thought-out creative process, which is especially important in art direction.
Experienced designers will likely approach integrating artificial intelligence into their work with greater caution to avoid the risk of subpar results. On the other hand, generated vectors prove very useful for routine tasks, like creating stock images or standard corporate graphics. If someone was previously producing generic vectors by hand and AI now generates them instead, the real problem may lie precisely in their generic nature.
I'd also like to highlight new opportunities opening up for UX and UI designers in connection with AI and human interaction. It turns out that users seek simple and intuitive interfaces for AI, in contrast to more complex interactions such as using Midjourney through a Discord server or running open-source models in a Google Colab notebook, which can be challenging for some. I've also heard that Midjourney is developing a more user-friendly interface, which could change how we interact with AI for text-to-image generation. This shift raises questions about user freedom and transparency in AI usage: the interface shouldn't hinder user-AI interaction, yet it should remain transparent about the models used, their training, and their datasets.
With the advent of artificial intelligence, many creators have split into two camps. Some fear for the future of their work, while others see AI as a practical tool, akin to someone using a calculator to solve a complex math problem. How do you see the future of artificial intelligence?
Yes, I'm also a proponent of the calculator analogy. I believe that those who fear artificial intelligence often undervalue their own worth and creativity. Human creativity stems from entirely different experiences. We have bodies that perceive the world, and we experience various feelings and emotions in response to our surroundings. There are also inexplicable states of mind that cannot be programmed or explained through AI. This represents a wholly different quality, a kind of lived reality, and we will always add something beyond it. When I design for people, they will understand it better. Yes, AI is gradually acquiring certain basics and adopting good practices, such as generating wireframes or other design elements. Still, I think human creativity is capable of transcending even these well-established practices, which is actually the essence of design. It's important to question what was previously considered "right" and find new directions. I'm not afraid that AI will replace anyone; it will only replace the basic and generic things.
The key is for others to find their own creative workflow that suits them and decide how far to allow AI to intervene in their process. However, there's a risk that when using custom elements, AI might limit a person more than intended, because the result can be unexpectedly spectacular. It's crucial to be able to step back and reflect that the outcome is influenced by the dataset on which the generative model was trained. It's possible that the model draws from data we don't want to use, so it's essential to work with sufficient awareness and consciousness.
What do you see as the main ethical dilemmas you encounter?
The biggest ethical issue I encounter when working with artificial intelligence is the existence of unfair datasets. These datasets often lack the necessary level of transparency, which raises my ethical concerns. It's problematic that licensed content appears in datasets or that stereotypes are perpetuated through human-generated text descriptions of images. The ideal solution would be for institutions like libraries or universities to create their own datasets, which could help address many current problems.
Another issue is the rules of private companies, which may not align with ethical standards. It's important to realize that we're using tools and technologies we don't fully understand, and this can have unknown impacts. In the realm of creativity, these ethical questions may not seem so dramatic, but problems arise when AI is used in other areas, such as automating recruitment processes or in healthcare. Here, the ethical dilemma becomes fundamental. There's also the question of copyrights and attribution—how much work was truly created by a human versus generated by AI. This issue remains unresolved and needs to be addressed even in academic settings, for example, regarding students using AI for writing theses. There are many unanswered questions.
What would you recommend to young artists and designers who are starting to focus on artificial intelligence and want to integrate it into their practice?
To young artists and designers who want to engage with artificial intelligence and incorporate it into their practice, I recommend complicating the process as much as possible. Don't just rely on what AI generates from your inputs—trust your own visions and be active curators of the outputs. It's crucial to understand how AI works and intervene in the process. Try different interventions, experiment with errors, and bypass the system. Approach AI critically. For example, in the AI in Artistic Practice course at FAMU, I encouraged students to create their own modular approach to working with AI. They were to imagine AI as a partner for creative "ping pong."
When people visualize the typical collaboration process with AI this way, they realize human creativity plays only a small role in it. That's why it's important to inject more humanity into the process. We can interrupt the generative process, rework it manually, draw over it, or even insert something completely unrelated to AI. Such interventions and disruptions can yield new and more interesting results.