Design Fixation with LLM-based coding assistants
Are we in danger of missing the most creative solution?
Disclaimer: the times when I wrote code professionally for a living are over. I still do it regularly but I don’t consider myself a developer any more.
I started to play around with coding assistants (mainly Codium and GitHub Copilot) a while ago. Being a non-pro, I am not going to get into any arguments about productivity gains (having a rather good understanding of the nature of software engineering, I have my doubts). I just want to elaborate a bit on a feeling I had when using these coding assistants that was quite distracting, but that I couldn't name properly. Now I can.
Whenever the coding assistant proposed a longer piece of code, maybe a for loop or a short function, I had the feeling that my solution space collapsed.
What do I mean by this? If you tackle a problem, at least one that is not completely trivial, a process of problem solving starts. Several variants of solutions emerge, spanning what I call the solution space.
Each of these solutions will have benefits and drawbacks, and in the best case navigating this solution space leads to a near-optimal solution (given the currently available context). In most cases this is an iterative process based on discovery and learning - in other words, a creative process.
Now, as soon as the model proposed a solution to my coding problem, I had the feeling of this space collapsing, leading to a mental lock-in on the proposed solution.
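To make this concrete, here is a toy example of my own (not from the paper, and not an actual assistant transcript): counting word frequencies. An assistant will typically propose one variant, while the solution space contains several, each with different trade-offs.

```python
# Toy task: count word frequencies in a text.
# An assistant might propose the explicit-loop variant:
def count_words_loop(text):
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# ...but the solution space also contains, for instance, the
# standard-library variant: shorter and idiomatic, at the cost of
# hiding the mechanics behind collections.Counter.
from collections import Counter

def count_words_counter(text):
    return Counter(text.split())
```

Both variants behave identically on the same input; the point is that once one of them is sitting in your editor, the other tends not to get considered at all.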
Just recently I learned that this is a widely known phenomenon, at least in the design world. It is called design fixation. Imagine you are a designer and you are asked to find a creative solution for some kind of visualisation. Research shows that if an example is provided, there is a high probability that the proposed solution to the design problem will be strongly correlated with the example.
I learned all of this from an awesome paper that I found indirectly through a blog post by Baldur Bjarnason (highly recommended work, in case you don't know it).
https://dl.acm.org/doi/10.1145/3613904.3642919
The paper examines whether LLM-based image-generation assistants can help in the design process and whether they can act as creativity boosters.
The results show the opposite. Based on the investigations, using LLMs in the design process leads to stronger design fixation - the resulting designs show a high degree of alignment, and the variety of emerging designs is reduced.
From the paper’s abstract:
Through a between-participants experiment (N=60), we found that support from an AI image generator during ideation leads to higher fixation on an initial example. Participants who used AI produced fewer ideas, with less variety and lower originality compared to a baseline.
The authors analyze the reasons for this, and one important finding is that the participants in the study used words from the problem description to generate their prompts, leading to a kind of conformity.
Another issue the authors observed, in line with previous research, is that design fixation increases with the level of detail of the visual example. LLM-based image-creation tools generate highly detailed pictures with rich colours and textures, so a second fixation process can be observed after the images have been generated from the prompts.
Interestingly, the fixation is considerably stronger when using LLMs than when using regular image search, although it is also clearly visible in the control group using Google Image Search.
The authors propose ideas to mitigate fixation, for instance nudging users to be aware of their fixations and reducing the level of detail of the model-generated design ideas. But it is clear that if we want to use LLMs as tools to enhance our creative processes, we have to carefully consider the effects of design fixation (which may amplify the biases built into the model).
From the paper’s discussion section:
Our work suggests that, at least in the current context of AI tool usage, given a fixed amount of time for a visual ideation task, this time is better spent sketching than seeking inspiration through AI. Our work suggests that generative AI tools aimed at supporting co-ideation should not only focus on generating stimuli but also on encouraging more effective ideation behaviours.
What does this mean for Software Engineering?
Design fixation is real - also when we get code proposals from coding assistants. The question is whether this is problematic.
For trivial problems that don't require creative solutions, most likely not - but we need to be very careful when applying coding assistants to more complex problems.
The authors' proposals to alleviate design fixation, for instance reducing the detail of the generated visuals, do not really transfer to software development, as they would significantly reduce the utility of the coding assistants - of what use would a coding assistant be that can only provide fragments of a solution?
There is much to learn before LLM-based agents will be useful assistants in problem areas that require creativity. Trusting them too much may mean that we miss the best and most appropriate solution for a given context.