I just returned home from this year’s Fast Flow Conference in London. It was a great day, so it merits some kind of write-up. Having to spend a couple of hours at the airport, plus some more because of a delay, definitely helped in getting this out.
Disclaimer 1: This summary was entirely written without the help of any AI. Any misunderstandings, omissions or hallucinations are my own.
It is always great to be in London - what a bustling city. My time there was short, but well spent attending the Fast Flow Conference. The conference has been established to discuss topics related to Team Topologies, and as I am and have been thoroughly interested in the kind of organizational patterns introduced by Manuel Pais and Matthew Skelton, the decision to participate was quite obvious.
It was also a great opportunity to meet my fellow Team Topologies Advocates and the other TT folks.
The conference took place for the second time and is already quite big. I would assume there were 200+ participants at a nice venue (CodeNode) right in the city center. Just shows how much traction and interest the topic has gained over the last couple of years.
All well set up, nice catering, and there is one point I’d like to mention explicitly: the organizers took great care to have a balanced panel of speakers - diversity is important. I have never seen so many women speaking at a conference. Really appreciated!
But now to the content - overall, the talks were of really high quality, all of them with important points to think about.
Disclaimer 2: This is not a tech conference, although it has many touch points with engineering. The word “Kubernetes” wasn’t said once - although I remember seeing it on one slide, I think from Manuel. Really amazing!
The talks
Obviously, most talks were about how the flow of value can be improved in organizations - that’s what Team Topologies is all about - but it was clear that a couple of new topics are evolving: enhancements or additions to the familiar set of organisational patterns.
Manuel Pais kicked off the conference with his keynote and some new ideas about platforms. Flow is not only inhibited by technical blockers; if the whole organization is considered, there are many non-tech teams/functions that can have a significant impact on the flow of value. Think about legal departments, compliance, procurement or even leadership. Building platforms serving these functions (at least partly) with a strong focus on self-service and the reduction of cognitive load is an intriguing idea. A paradigm shift in the mindset of these teams would definitely be required, but it would be a very powerful way to reduce dependencies and bottlenecks - core functions of a well-designed platform. And a welcome reminder that platforms don’t need to be technical in character; in their simplest form they could be documentation or a wiki - anything that reduces friction.
Matthew Skelton expanded on the ever-growing influence of software systems. In many ways they now permeate the fabric of our daily lives, and their stability and reliability - or the lack thereof - can cause considerable damage or literally endanger lives. Think about software systems in health care, social services or other critical areas. To assure the viability of these systems over their long lifetime (in many cases decades), Matthew introduced his concept of “software stewardship”: moving away from project thinking to a completely different, more holistic approach - stewardship in the sense of responsibility for the “well-being” of the system over the whole life cycle.
This idea has a lot of deep implications, as it challenges the existing paradigm of short-term profit focus in favour of a more, dare I say, sustainable way of caring for our systems. Definitely an approach I would support and will follow with great interest.
Touching on related issues was Sarah Wells with her talk about the challenges of large system migrations. She mentioned a lot of very important points - starting from empathy for the users (or maybe even creators) of the old system to making sure there is a good value proposition for the migration, honestly considering all costs as far as possible. Besides migration costs, one also needs to consider the opportunity costs of functionality that could not be delivered because of the migration effort. My personal highlight was the reminder that a migration is only really finished when the old system is switched off. Usually, we are really good at building new stuff but not so good at getting rid of old stuff, increasing complexity and maintenance effort in the process.
I had already heard Susanne Kaiser’s talk about the combination of Wardley Mapping, Domain-Driven Design and Team Topologies before, so there was no news for me here. But hearing it again just confirmed that these practices and concepts are quite a natural fit. Wardley Mapping helps to figure out what has the highest strategic importance, DDD practices can guide a meaningful decomposition of the system and help with the tactical aspects, while Team Topologies patterns can then be used to design an organization that has a high likelihood of achieving the desired outcomes.
Varuna Venkatesh shared in her talk her experience with setting up tactical working groups, built from members of different teams, that can be established ad hoc to jointly work on technical impediments or actual problems that may be caused by cross-team dependencies. She called these working groups “short-lived enabling teams”. I am not sure this is a very good name, but the general idea is intriguing.
Working group topics can be proposed by anybody, work is time-boxed to 1-2 weeks, and topics need to be presented in a compelling way - they need to be pitched. If a topic is picked, a core team of contributors volunteers; in a joint session they then establish a common understanding of the problem and discuss possible solutions together with measurable outcomes. During these 1-2 weeks, the working group deals exclusively with the topic - the core members step out of their home product teams. This is possible because the product teams typically keep around one person’s capacity as slack to cover exactly these kinds of activities.
From my point of view this is a very interesting way to prioritize organizational benefit over individual team progress and the time box just assures that product development is not impacted too much. The approach seems to work quite well according to Varuna and also creates quite a lot of visibility for the core members of the working groups. They do high impact work so it is usually easy to find volunteers.
Cognitive Load is a key concept in the Team Topologies universe. Although the idea that our brains have limited capacity and that we get into trouble if we come close to these limits is intuitively clear to me, there has been some criticism over the last couple of years. Not of the concept itself, but there were questions whether Sweller’s cognitive load theory (cited as the scientific basis for team cognitive load), coming from learning theory, really can be applied to software development activities and specifically to teams that deliver software. This extrapolation, as I said, made intuitive sense to me, but I was very curious to hear what Dr. Laura Weis and Aleix Morgadas had to say in their talk about the science behind team cognitive load.
After the general introduction and a short description of the negative impact of continuously high cognitive load, Laura expanded a bit on the scientific basis. The most important message for me was that not only Sweller’s Cognitive Load Theory needs to be considered when we talk about cognitive load, but also other theories from psychological research that deal with mental capacity. Still, we are not talking about an exact science here, so it is really difficult to measure the cognitive load of a team. The most promising approach is to figure out some drivers that are associated with a feeling of exhaustion and disengagement, which ultimately will lead to reduced performance and in the worst case to burn-out.
Laura and Aleix came up with a comprehensive set of drivers in multiple categories that have been tested with teams and have shown strong correlation with the feelings described above. This means that you can run a survey with teams, check these drivers and get a good feeling for where the team stands with respect to cognitive load.
This approach is called “Teamperature” and will be available as a service that can be used to sense problems. Done regularly, it can signal developments in the team’s cognitive load that may require some intervention.
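To make the idea concrete, here is a minimal sketch of how such a driver-based survey could be scored. The driver names, rating scale and threshold are entirely invented for illustration - the actual “Teamperature” service will use its own validated drivers and analysis.

```python
# Hypothetical sketch of driver-based cognitive load scoring.
# Driver names, scale (1-5) and threshold are illustrative assumptions,
# not the real "Teamperature" instrument.

from statistics import mean

# Each team member rates every driver from 1 (low strain) to 5 (high strain).
responses = {
    "unclear responsibilities": [4, 5, 3, 4],
    "tooling friction":         [2, 3, 2, 2],
    "interruptions/on-call":    [5, 4, 5, 4],
}

def driver_scores(responses):
    """Average each driver's ratings across the team."""
    return {driver: mean(ratings) for driver, ratings in responses.items()}

def flag_drivers(scores, threshold=3.5):
    """Return drivers whose team average exceeds the threshold."""
    return sorted(d for d, s in scores.items() if s > threshold)

scores = driver_scores(responses)
print(flag_drivers(scores))  # → ['interruptions/on-call', 'unclear responsibilities']
```

Run regularly, comparing these averages over time would be the "sensing" part: a rising score on a driver signals a development that may require intervention.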
I think it is good that the discussion is not focusing only on Sweller’s Cognitive Load Theory anymore. In one of my talks, I mentioned Dave Farley’s statement that “software engineering is a discipline of learning and discovery.” Considering this, I have no problems applying concepts from learning theory to software development but that is just me. Applying “Teamperature” to some teams in my company is definitely something I will consider.
Steve Pereira briefly introduced some basic ideas from his new book “Flow Engineering”, which he wrote together with Andrew Davis. He stressed again the value of collaborative modelling to create a joint understanding of potential bottlenecks in our value streams. If not all stakeholders are involved in the modelling process, we may miss crucial insights. He also showed how mapping specific parts of the value stream to team responsibilities may help to use Team Topologies organizational patterns to unblock flow.
By the way, the “Flow Engineering” book is really interesting, and it also includes a lot of very practical advice on how to structure the mapping processes. The actual value stream mapping activity is only one part of creating a joint understanding of the current and the target state. I am convinced that these practices are a very valuable tool in the toolbox of all coaches (specifically engineering coaches like myself), and I will write a more comprehensive review of the book later - I promise, Steve!
Daniel Terhorst-North continued with his ideas about how a well-designed organisation should look. He questioned some of the conventional wisdom about team setup and development.
Following his critique of “stable long-lasting product teams”, he proposed a more fluid organisation with dynamic team composition along the lines of Heidi Helfand’s “Dynamic Reteaming” ideas - all governed by a set of constraints that are themselves constantly updated to better reflect the needs of the organization.
Dan mentioned that “stable” does not mean static, and I fully agree here. Some dynamism in team composition is useful to spread knowledge and support collaboration across organizational boundaries. This needs to be managed in a structured way.
“Long-lasting” is also something that needs to be discussed, according to him. Conventional wisdom holds that the “long-lasting” part is required to create the environment of trust and psychological safety needed to really innovate and experiment without fear of failing. The foundation here is Tuckman’s model of team development (you may know it as forming-storming-norming-performing). Dan North readily dismissed this model, explaining that teams can simply skip some of the phases and, in some cases, are ready to perform almost immediately if they are aligned on a common set of values and a shared purpose. One example may be the “working groups” explained by Varuna (see above) that do high-leverage work in a timespan of two weeks without any significant storming etc.
I did some quick googling, and it seems there is some criticism of Tuckman’s model, but it is not considered obsolete in the way Dan North presented it. I don’t know - I have the feeling that in the case of Varuna’s working groups there is indeed some norming, and maybe also storming, taking place within the bigger organisation, and obviously there are (or at least should be) some shared values - we are talking about people working at the same organisation. So some alignment is happening at the org level; the working group doesn’t need to start from scratch.
Having said that, he also described the many different demands teams face on their capacity: feature delivery, learning/experiments, process improvements, running the business, and failure demand (incidents, bug fixes). He mentioned that popular development frameworks/processes like Scrum only care about the delivery part, and the rest of the demand is more or less not covered. A huge fallacy, as the actual effort generated by the other demands is often significantly higher (and in many cases non-negotiable - think about incidents!) but not visible. This creates a lot of wrong expectations - a very valid point!
Dan presented his thoughts in a very interesting way - he had a couple of pre-prepared hand-written screens on his iPad (probably using something like GoodNotes or similar) that he annotated in real time during his talk. Something that requires a lot of confidence, I thought, and he did it well, but I am not entirely convinced that this technique is more fluid than a well-crafted presentation (the well-crafted part is important). All in all, certainly an engaging and thought-provoking talk.
Last but not least, Kenny Baas-Schwegler explained how important it is to understand product design as a process that also needs to involve developers. A really autonomous team covers the whole product delivery cycle, and that obviously also includes the early phases. Figuring out customer needs and doing early experiments are special skills, and using Team Topologies patterns can help to provide access to these skills in the form of a complicated-subsystem team using the collaboration or X-as-a-Service communication patterns. Kenny again stressed the potential of collaborative modelling, specifically DDD, to create a common understanding across all roles in the delivery process.
I am a huge fan (at least in theory) of collaborative modelling; I find it a really convincing concept and would love to have it in my toolbox (see also the remarks above about value stream mapping), but I have no experience with it and have huge respect for everybody able to facilitate these sessions. Kenny mentioned that in many situations every team member can facilitate, but he was very honest about the difficulties and explained that there is a lot going on beneath the surface when a group of people comes together to create joint understanding. Personal relationships, biases, conflicts etc. - all of it is there and can greatly influence the spirit and outcome of a modelling session. I am still convinced, but I am not sure my willingness to jump into the cold waters of collaborative modelling has increased much. But maybe I just need to try.
I was not able to follow all sessions - there were some parallel tracks that also sounded very interesting, but there is only so much one can do.
Some additional remarks
Just some additional short remarks not related to a specific talk:
A lot of conversations revolved around the human aspect of work and how to make our systems sustainable. Very much aligned with what is important to me.
AI was not a big topic. This was completely different from the conferences I have been to lately, where each speaker felt obliged to include some AI topics in their presentation. Loved it. What I sensed (from the talks but also from some conversations with participants) was a general scepticism with regard to AI, or rather towards the overblown expectations that are fuelled by parts of the industry. Maybe not a surprise for a conference that is also built around an understanding of our organizations as socio-technical systems, but again, quite aligned with my way of thinking.
Hope this summary was useful - it definitely will be for me as a collection of notes and thoughts from a good day filled with a lot of interesting intellectual stimulation.
Big shout out to the organisers and all of the speakers. See you next year!
I heard that some (all?) of the talks will be available on YouTube soon, so stay tuned.