The power of distance

“Look within, Grasshopper, and you will find the answer you seek.”

Said the guru to the traveller

When I read books on self improvement and EQ, one of the first things I often encounter is the recommendation to “pay attention to how you feel and react.” I interpret this to mean that we should notice how we are responding to what is going on, in the moment, to observe whether we are tense, angry, opinionated, or even excited.

Looking inside yourself and listening to your body is great. You start to notice your triggers, your potential biases and, in particular, your emotions.

Noticing your emotions means that you can manage them rather than being managed by them. It also means that you can start to notice the outside triggers and environmental factors that are driving a lot of your behaviour.

By developing a sense of presence, or self awareness, you can move from mindless reaction to mindful action.

But what about the opposite?

What if we stop being present within ourselves, we stop “being in the moment” and instead we start to look at things from a greater distance, without paying much attention to how we feel right now at all?

Is that anti-coaching or is that another source of insight?

I claim that the power of distance is, potentially, as useful as the power of being aware of ourselves.

Moving our focus

Here is a simple trick that you can try.

Look out a window and think about what you see. Now look at the window itself and observe what you see.

I imagine that when you look at the window, you see the window but not the landscape outside, while when you look “outside the window” you see the outside landscape but lose your awareness of the window itself.

We can also try the same exercise, but with solving problems and untangling dilemmas.

As a thought experiment – are you more likely to see the flaws in an idea that you are explaining to a friend, or in the ideas a friend is explaining to you?

I imagine that you find it easier to see flaws in the idea someone else is explaining and I believe it is to do with how “close” you are to the idea. When you explain your own idea, you are focused on the idea, but when you are listening to someone else, you are thinking about counter ideas, different contexts and different situations that might support or challenge the idea.

But we don’t always need a second person to help us do that.

When I coach people and they seem stuck in their thinking, I sometimes ask them what they would advise a friend to do in the same situation. It often causes people to pause and consider things from a different perspective – a more distant one.

This is not me giving them my perspective, but rather them asking themselves what perspective they would take if it was not them that was facing the dilemma.

Moving further away from a problem or dilemma means that we sacrifice detail and emotional depth, to gain a broader perspective, to see the bigger picture more clearly.

Asking what advice you would give a friend is one way to do this, but I could also ask:

  • What was your goal when you started? What is your goal now?
  • When you complete this, then what will you have?
  • How would other people in the team describe that?
  • What would someone brand new notice, that we might not be seeing?
  • How do people solve this in other organisations?

None of these are trick questions, they are just attempts to look at things from a more distant standpoint, to see what becomes visible once we remove the noise of our own history, doubts and emotions.

Once we see things from a distance, we might move to action or maybe we might move back to reflecting on our own perspective, by using questions like these:

  • And how did/does that make you feel?
  • And what is the real challenge here, for you?
  • And what do you want?
  • And what has been your contribution/responsibility here?

Again there is no trick here, just a pause to move from looking at the landscape back to looking at personal agency, feeling and accountability.

So both sets of questions are really about deliberately moving further from a dilemma, in order to gain perspective, or moving closer, in order to focus on the detail of our own reaction, action or emotion.

Moving our perspective of time

Chip and Dan Heath provide a simple thinking routine for creating some distance from a decision or problem in their book “Decisive.”

The routine is to ask yourself three questions:

Suppose you did decide to build this feature. Ask yourself:

  • How will you feel about this decision in 10 minutes?
  • How about 10 months from now?
  • How about 10 years from now?

Once again, the idea is to escape our current focus and to consider things from a greater distance, or more precisely a longer period of time.

So what?

When seeking to grow or find a new path, I can focus on my own feelings and a sense of presence in the moment.

Alternatively though, I can step away from myself and observe things from a greater distance. Sometimes it is that distance that allows me to see things that I was not seeing before, or to see patterns that I was not aware of.

One way to step away from myself is to seek the opinions and insights of others, but another way is to just move my own awareness to see things from a distance.

Florence Nightingale, Agile Coach, Data Scientist

As an agile coach, I like to think that I can help teams to see their work in a new way and hence start to find better ways of working. I also like to think that my “Agile Experience” gives me an edge that people in the team will not have.

However I am sometimes humbled by the breakthroughs that teams make without me, or at least by the insights they have that I did not see coming.

Even more, I am humbled when I see people in roles and environments that I would not consider “agile,” yet who display an amazing agile mindset AND a matching set of practices.

One example was when I found out just how agile modern kindergartens are. Another is the nursing profession, which has been running standups, using visual management and working in self-organising teams since before I was born.

I don’t know if many teams have a more efficient standup than the nurse/doctor shift change, and I have seen few story walls as effective as the “big wall” that I saw when someone showed me an operating theatre planning space. Beyond that, I cannot think of a self-organising team that needs to focus on quality more fiercely than an operating theatre team. There is not much point in the surgeon doing a good job if the anesthetist does not, and I don’t think the team has the option of re-planning scope and deadlines while they put the operation on hold.

Anyway – I often think that, while we still think agile is new, people like Florence Nightingale were pioneering a lot of our “new practices and mindsets” quite some time ago.

Today I learned a little more about this amazing agile coach who made it her mission to revolutionize medicine (or more specifically patient care). Today I learned that she was also a data scientist who struggled to get others to recognise the valuable insights she wanted to share, based on robust statistical data that she had.

In order to communicate the “complex numbers” to her highly educated but data-poor stakeholders, she had to quietly innovate new ways to visualise data, even without a BI budget or a data warehouse/BI tool.

If you want to be just as amazed as I was, you should read this article from Scientific American https://www.scientificamerican.com/article/how-florence-nightingale-changed-data-visualization-forever/.

I guess we can agree that Florence Nightingale was an amazing woman, but I think there is another lesson here for us agilistas and smarty-pants change managers.

Florence Nightingale was probably doing amazing work in many situations without being given the recognition she deserved, because she was not a general, a doctor or even a man. If that is true, then there is also a risk that there are multiple Florence Nightingales in our own organisations, innovating and delivering amazing results, but without knowing what agile is, or being certified CTA (Certified as Truly Agile).

Even more likely, there are people doing great work on a smaller scale, without it being on our radar. I think this is probably one of the greatest lost opportunities for change – to identify the “bright sparks” who are already solving problems and creating value in our organisations.

What I take away from this is that we really need to identify the existing people who are making potentially small changes and give them more support, rather than focusing on the flaws of the organisation or the promised improvements of a new Framework or Way of Working.

Rather than building something new, let’s identify and strengthen what is already budding in our organisations and help people to flourish. We may not encounter Florence Nightingale but I bet we will find smart innovators struggling away in many places if we look hard enough. Especially if we look at where the work is being done on the ground, rather than looking at our own recommended changes.

Assessing our assessments

I have been assessing some teams recently, in order to diagnose where they can create further improvements in their performance. This assessment will be valuable in helping the team decide where to focus their attention in continuing their growth.

Really though? Are both of those statements really true? I guess the following needs to be true for the assessment to be worthy of the team’s consideration:

  • I am doing some assessment of some kind;
  • That assessment is designed for the team to use in improving their performance; and
  • The assessment is good.

Of course, whether the team actually use the assessment, and whether using it leads to improvements, are still undecided, even if the above statements are true. I won’t look at these last points yet though, because they are about change management, and I should only worry about the team making use of the assessment if it is actually good.

Assessment 1 – Am I doing some assessment?

There may be reasons to believe that the world is an illusion or that I am a butterfly dreaming that I am an agile coach doing assessments. Maybe soon I will wake up flapping my wings around and eating flowers, thinking to myself:

Wouldn’t it be lovely to be in a world where I really was a human, assessing team performance.

Alas that was just a dream. Back to the daily grind of fluttering about looking for nice flowers to land on so I can enjoy their nectar.

A butterfly, waking from a dream about being an agile coach

OK – let’s start by making some assumptions. Let’s take it as a given that the team and I exist and that I have done something I called an assessment.

Even then – the range of assessments I could do is probably huge.

So let’s confirm that I have also determined the scope of what I am assessing, which I have.

If, for example, I didn’t know whether the team has a clear purpose and a decent level of respect for each other, then I might be wasting my time assessing the frequency of the team’s meetings, or the alleged velocity in previous sprints. Similarly, if I already know the team is an established one, with a clear purpose and team members who respect each other, then there seems little point in assessing these things, since I already know what the result will be.

So what I need to decide before I start assessing is:

  1. What will I take as “a given”, and therefore not assess? For example, this could include the assumption that the team is working on the right goals, if I am assessing the way the team breaks down its backlog.
  2. What will I actually be assessing now? Hopefully something useful.
  3. What should be left until later? Perhaps because it is less important or because I can learn and apply something from a shorter assessment rather than assessing too much at once. In the same way the team slices their stories into thin slices of value, I should slice my assessments into thin slices of useful information that can be acted on.
A triangle to help me decide what to focus on right now

Cool – I know what I am assessing.

Assessment 2 – Is this really for the team?

I am feeling a little philosophical today, so let me ask a big, abstract question – Why do any assessment at all?

As with all things agile, if I say that something is valuable (and therefore potentially worth doing), then I should be able to say WHOM it is valuable to.

  • Who actually gets the benefit of this work?
  • What makes it valuable to them?

Potentially, I could share the assessment with any of the following people, who would gain some kind of benefit from me sharing it. Depending on who it is and what they want, the assessment I do might be quite different. I might assess the team:

  • For the “team selector” who creates and maintains the team and wants information to support them to:
    • Select who is on the team
    • Select who on the team is assigned to a project, mission, game or training course;
    • Assess people for promotion, bonuses, elite training; or
    • Assess people to design a curriculum for training in areas needed, potentially customized to different needs
  • For the “bill payer” who wants information to support them to:
    • Understand whether the goals the bill payer has paid to achieve are actually being achieved;
    • Understand where the costs in time and money are being consumed by the team;
    • Influence what the team will strive for and what they will ignore or avoid
  • For the people who are being assessed, in order to:
    • Help the learning of those being assessed
    • Create a sense of satisfaction and momentum
    • Clarify and set goals and standards to aspire to
    • Gain a certification that demonstrates their accomplishments and qualifications.

So that clarifies some things for me – the assessment should not be an end in itself but should be something that adds value to someone. Of course it could become too broad, if I aim to use the assessment to meet all the needs of all the people.

In the specific case that I am thinking of, I could do an assessment for the coaches and managers to make decisions about what “curriculum” to create for teams, or I could do an assessment for the team members themselves to learn where and how to improve. Both might be valid goals, but it might be better to optimize for one of those goals rather than hoping to kind of achieve both.

Maybe I should even have some user stories for my assessment:

This assessment will help (who) to do (what) so (there is some benefit); or

This assessment will provide (who) with (what insights, validations or information) so they can (make what decisions, or improve against what goals)

Says the coach, just before conducting the assessment

In this case I chose “This assessment will provide the (specific) team with insights about the way they work together so they can set better improvement goals for themselves.”

Defining a goal like this sets me up much better than saying “I will assess the team,” and even better than if I said “I have this health check so I guess that is what the team needs.”

So let’s check in. I definitely did some kind of assessment on a team. In fact I even knew who would get value and what that value should be. Finally, I had a big triangle to wave around (or more accurately, I was able to say what was given for the purpose of the assessment, what I would focus on and what I would leave for later).

If I know this much, I should be good to go – but there is still the question of whether the specific assessment I perform will achieve my goal.

Assessment 3 – Is the assessment a good assessment?

Academics, scientists and quality freaks have done a lot of good work to help us define what a good assessment looks like. Let me list the key things I have taken away from the research, which I personally think define a good assessment.

You do not need to nail each of the following but you should define how important each is to you when you are doing your assessment.

Reliability

Will the assessment give the same results when conducted multiple times on multiple targets (or maybe “teams” is a better word)?

For example:

  • If I assess a team multiple times, and they are still performing at the same level, would my assessment give the same result each time?
  • Will my results vary depending on the time of day, where they are in their current sprint or the stage they are at in their quarterly rhythm?
  • If I assess different teams, who are performing just as well as each other, but who are using slightly different tools and techniques, then will I get the same result?
  • If there are multiple assessors, will the result depend on the assessor, or can people expect the same result from each assessor?

Since I was doing a single assessment on one team, for their own learning, I gave less attention to this factor. On the other hand, if I had been assessing multiple teams at multiple locations, to create a shared learning agenda across teams or a report on where to design coaching for multiple teams, then this would have become a lot more important.

Validity

Does the assessment actually measure what it is supposed to measure?

It is surprisingly easy to build measures that do not actually measure the right thing.

For example, if I say I am measuring quality of work or teamwork, but I measure velocity or speed, then this might not be correct. Speed might increase as a result of good teamwork or improved quality, but it might also increase because of less teamwork, reduced testing or building features without understanding quality from the user’s perspective.

Perhaps more subtly, if I am measuring “defects the team found and decided to fix” as a measure of quality, but I define quality as maintainability or user experience, then I am measuring the wrong thing, even if people see defect fixing as quality. Instead, perhaps, I should be measuring the ability to maintain the system or the experience of the user, if I want to assess quality in these cases.

I might also use a measure that gives a false reading, even if it reliably gives the same false reading every time. For example, if pay rises are predicated on “displaying an agile mindset,” and I ask people, just before their pay review, whether they have an agile mindset; then I think that I will reliably receive the answer “yes”, regardless of the mindset being a fixed or growth one.

Validity was important for my assessment this time because the team will make decisions on the result. However there is a mitigation in that the team will debate my assessment as a group.

Acceptance

If reliability and validity make a measure or assessment useful, it is credibility that determines if it is actually used.

If people do not understand or accept the score, number, rating or opinion that comes out of the assessment, then they will not act on the results.

For example:

  • I believe that “share of voice” is a good measure of team empowerment and effectiveness. This reflects whether everyone in the team gets to talk just as much as each other (see the sketch after this list). However I have found that sometimes, if I point out that only some people were speaking, team members explain to me why they think that was valid. They justify the rating rather than considering it as a thing to assess and maybe change. They may have a point, or I might be right, but either way it is a poor assessment if it will not be understood and used by the consumer; and
  • When senior managers learn what velocity really measures, they often question whether it is a measure of team performance (which it is not). So while velocity is a good measure for the team to use in predicting what they can achieve, if I tell executives that the team is consuming stories at a good rate of points, they are likely to be more baffled than informed.
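To make “share of voice” concrete, here is a minimal sketch of how it could be calculated, assuming you have rough speaking times per person (the numbers and the calculation are purely illustrative, not a tool I actually use):

```python
# Illustrative "share of voice" calculation. The inputs are invented:
# rough minutes each person spoke during a one-hour team meeting.
speaking_minutes = {"Ana": 25, "Bilal": 20, "Chen": 10, "Dana": 5}

total = sum(speaking_minutes.values())
even_share = 1 / len(speaking_minutes)  # what "everyone talks equally" looks like

for person, minutes in speaking_minutes.items():
    share = minutes / total
    note = " (well below an even share)" if share < even_share / 2 else ""
    print(f"{person}: {share:.0%}{note}")
```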

I have found this more problematic when an assessment is likely to challenge existing views and biases. If I rate a team as bad at cooking when they have a reputation for being great cooks, then people need a lot more convincing than if I confirm their existing views.

For the team I am assessing, I might want to make sure my assessment results are easy to understand and also credible.

Decision support and Educational Effects

So my assessment was (I believe) reliable enough, valid with some level of precision and accepted by the team.

However, if the team are going to learn from the assessment then it must be well designed to help them learn. This is again a matter of context. For example, if I was a school teacher assessing students in a final exam (when they should know their material), then the assessment need not provide much support for student learning, but the educational effect would be critical if I was using formative assessment during the semester.

In this case the assessment I am doing is literally designed for team learning, so the ability of the team members to apply the results to their learning is the most important criterion for success. This means my results must be simple, related to what the team wants to be good at, timely and helpful in clarifying a next step.

Some questions that help here are:

  • Does this assessment help to clarify team goals or the goal of what we want them to do?
  • Does the assessment make it easy to identify what was good, what could be improved and what to do next?
  • Is the assessment timely – is the information still relevant and more importantly, provided in time to reflect on and change the habit, behaviour or output?
  • Is the information provided simple, specific and clear? Or is it cluttered, overwhelming and vague?

Cost (or efficiency)

I could do an amazingly detailed assessment of the team, bringing in multiple coaches equipped with video cameras, regression testing, fitness trackers and heaps of technology. I could even fly the whole team to a specialist lab in Silicon Valley somewhere, in order to participate in a herculean set of simulations.

Of course the cost of doing so would be far higher than the potential benefit of any increase in performance.

The best assessment would be real time, created by the team themselves as part of the work, without any delay or other impact on what they are doing.

As much as possible, I like to create a way for the teams to better assess themselves in the moment of their work, rather than having me audit them.

In this case though, I spent some of my time (and theirs) in assessing them and communicating the result. So it was important to make sure that my assessment consumed just enough time to create useful lessons.

It is hard to know in advance, with certainty, whether the assessment will be worth doing, but we can make an educated guess. For example I know not to do a whole battery of tests if I know the main challenge for the team is that they have really bad retrospectives. I would be better off just assessing the retrospectives and then focusing more time on helping make improvements there.

What to take away from all this reflection

So, for me to assess whether any assessment I want to do is going to be worthwhile:

  • I should know who needs something from the assessment and what they need;
  • This should help me decide what I should take for granted, what to focus on in my initial assessment and what to ignore for now;
  • I should design an assessment that is sufficiently:
    • Reliable
    • Valid
    • Accepted
    • Supportive of ongoing learning (and/or decisions)
    • Cost (and time) effective

When I am done, I can assess whether I achieved these goals to “assess the quality of my assessment” and get better at assessing in the future.

What has been left out of this assessment of assessments?

I have not assessed whether the team used what I shared or whether doing so proved useful to them.

Perhaps that is the next thing I should assess. Given I have shared some results, how useful did sharing them turn out to be?

With all this assessment of assessments, should I be able to do even better assessments?

Of course I hope to continuously improve my assessments. However sometimes when I think about what assessment to do I actually go the other way and drop the whole concept of assessing teams.

Instead, sometimes I will postpone my assessment and just observe for a little while longer without judgement. Sometimes curiosity is a better coaching tool than judgement, assuming that the curiosity leads to information that is shared with the team.

Then I might come up with a hunch, which leads me to a hypothesis, which leads me to wanting to conduct an experiment or some kind of assessment, in which case I am right back at the start of this article again.

Daily reflections for positive change

I just completed a course on positive psychology. More accurately, I just completed a Coursera specialization made up of 5 courses on positive psychology.

It was a great course, with some great theory and a lot of meaningful practice. It was full of small things you can do to improve your life and also your coaching of others.

An exercise that I performed in the final course

One of the exercises was to create a “testable positive intervention” for myself.

In order to do that, I had to identify a bad thing that I wanted to improve about myself. To do this I used a list of “shadow strengths” – a list of the overuse, underuse and absence of strengths.

Once I identified an area to improve, I needed to:

  • Measure my current performance/happiness;
  • Do some intervening with myself; and
  • Measure my new level of performance/happiness to see if it had an impact.

The result was surprisingly good.

What did I want to improve?

My goal was to improve two things related to the shadow strength of ingratiating, which is an example of overusing appreciation. I reflected on some surprising confusion about expectations and what I thought were agreed goals or actions, and on how this might have been related to appreciating the good without digging into things I did not appreciate, or things that needed to be done.

The two goals I landed on were:

  • Set better, clearer expectations and increase accountability:
    • Set clear expectations of myself and hold myself accountable to them;
    • Communicate these expectations better to others; and
    • Be more explicit in my expectations of others and where I might disagree with things they say, do, or plan to do.
  • Pause more when in a conversation in order to listen instead of talk.
    • Specifically, to sometimes count three seconds of silence after talking for no more than 40 seconds.
    • I found this relevant to the first goal, because the lack of silence meant a potential lack of shared understanding of expectations.

What did the intervention look like?

A standard to aspire to

One of the practices that the course recommended was to leverage a strength to build improvements.

One of my “signature strengths” is “Genuineness, Authenticity and Honesty.” I decided to use this because, if I claim to be honest and authentic, then it should follow that I am also communicating my views clearly.

Also – I think my authentic self is a good team player, which would suggest that I can listen to others and that I can communicate authentically. If this is the case, then I am not trying to become a different person with the above goals, but rather to be, in the moment, the same person I want to be all the time.

So now I have a positive standard to work towards – someone who has authentic conversations and sets clear expectations – me already on a good day and maybe not me when I fall short of who I want to be.

Daily measures and observations

I used the well known “3 good things today” exercise which, not surprisingly, involved reflecting on three good things that happened each day. This is a great exercise, but it does not always highlight the gaps I am working on, or the progress I am making with them. However, it kept me focused on good outcomes and positive observations.

I complemented that practice with a report card for the day on my successes. This is designed to remind me of successes, but also to get me to focus on the situations where success was possible. On most days I had some successes (yay) and a couple of misses. This tool got me to highlight when I did not feel I was successful, or was not happy with “my involvement and my challenges.” It worked well because it combined the expectation that I would be successful with concrete examples from my day, against which I could check whether I hit my goal.

Planning to observe and practice in the moment

Now I had a daily reflection to review my success, or lack of success, in adopting my improvements.

What I also needed was a way to actually observe myself in the moment so I could collect the information to reflect on. I also needed a chance to pick when to actually try to change my habits in the moment.

To do this I co-opted a set of questions from the book Presence-Based Coaching, which I have previously used in creating habit stories for myself and others.

I started the day with a todo list (I use a bullet journal approach) and then selected 1-2 meetings for the day when I would focus on applying my self improvement. At the beginning of each meeting I would check the following questions and try to be aware of them for the meeting:

  • In this moment, what is driving my choices? (“in this moment” could mean the conversation I am having or the meeting I am starting);
  • Who am I right now (or what do I see as my role here)? How would I act if that was who I am?
  • What am I actually doing?

Then at the end of the meeting I noted the answers to the same questions, restated in the past tense. Then I took a quick note of how I felt about it.

This data (set of rough notes) then gave me something to reflect on at the end of the day, when I did my accomplishments report card. I think the act of reminding myself of these questions also made me more likely to push myself to improve.

A rapid reflection cycle

So the whole cycle looked like this:

  1. Bullet list at the start of the day, with a cup of coffee and a note about a couple of meetings to focus on when practicing my better practices.
  2. Reminder before a couple of meetings to act better, with a note on how I went.
  3. Reflection at the end of the day with the accomplishment report card.
  4. A quick follow on with a list of 3 good things from the day.

Measuring the result after 2 weeks

I kept all the daily reflections in a Google document, so I also reflected briefly at the end of each week. This gave me a qualitative view of my progress.

There were a couple of self-reflection surveys included in the course. I scored myself on these at the beginning and end of the exercise. These also showed a shift, though I do not know whether it is permanent or temporary.

Overall I noticed quite a shift, but I still have to turn things into a habit, so I guess that is my next step.

I think I will stick with these two simple goals for another couple of weeks before I improve anything else. When I do move to the next improvement I will go back to basics and design a new routine/intervention.

The approach worked really well for me. I think the use of a strength as a standard and thing to leverage worked and so did the ongoing focus of my attention.

Scrum for one? Not so sure.

I work with a team who cancelled their showcase.

This was due to a combination of holidays, vampire infestations and other one-off occurrences, which left only one developer available for the sprint. The developer and the stakeholders did not want a bunch of people all looking at the poor, single developer and asking what he had been up to during the sprint.

I guess that makes sense, after all Scrum is a team sport. A showcase and retro with one person seems a bit over the top. In fact, even with 2-3 people the overhead of scrum seems excessive.

So I got to thinking, what would I utilize if I was on my own? Would I have a sprint? Would I appoint myself PO and Scrum Master and have a backlog that I was in charge of? Would I have a daily stand-up to share the impediments I faced with myself?

I think I would draw the line at having a meeting with the one amigo to break my stories down together, but I would probably still want to track my work.

I know people who use a Kanban wall instead of a todo list to keep on top of their work. I think I might stick with the todo list but the value of focus and transparency still counts.

While I would certainly track my work visually, I would not bother at all with any of the scrum roles. I would just be me. So work visibility is in and role definitions are gone.

Would I have a sprint? I guess it depends what kind of work I am doing.

When I do creative work I often use the Pomodoro Technique, which is essentially 25-minute sprints with 5-minute breaks, then going for a walk or getting a bite to eat after a couple of hours. That is kind of sprinting, but it is not a feedback cycle outside the single pieces of work.

But maybe a weekly goal setting session is a good idea.

Actually, I have found a lot of success with WOOP-based goal setting for big pieces of work. That is where I set an optimistic goal (a wish) and then imagine how good it will feel to succeed (the outcome). Then I imagine the impediments that will stop me achieving the goal (the obstacles) and finally plan what I could do if the obstacles occur (the plan).
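If it helps to see the structure, here is a minimal sketch of a WOOP entry written as data (the class and the example content are purely illustrative, not part of the WOOP method itself):

```python
from dataclasses import dataclass, field

@dataclass
class Woop:
    """One WOOP goal: Wish, Outcome, Obstacles, Plan (illustrative sketch)."""
    wish: str     # the optimistic goal
    outcome: str  # how good success will feel
    obstacles: list[str] = field(default_factory=list)
    plans: list[str] = field(default_factory=list)  # "if obstacle, then..." responses

week_goal = Woop(
    wish="Draft the first module of the new course",
    outcome="The satisfaction of sharing a rough draft for early feedback",
    obstacles=["Ad hoc meeting requests", "Rabbit-holing on slide polish"],
    plans=[
        "If a meeting request arrives, offer a slot on Thursday instead",
        "If I catch myself polishing slides, timebox it to one Pomodoro",
    ],
)
```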

So I would start my week with a goal and with a plan to respond to my inevitable distractions.

When I put it like that, I wonder why some Scrum teams begin the sprint with a goal like “we will complete the list of stories” when they might be better off agreeing a goal, imagining success and then predicting and planning for the likely threats to success. Anyway, for my one-person team I will focus on having a goal that is more than just completed todo items.

But I don’t have a formal planning meeting. I can replace that with a cup of coffee and some goal setting, followed by a walk or a snack.

Is that really enough though?

Maybe sometimes I should do some continuous discovery as well. Instead of just assuming I know what people want, maybe I can stroll over to talk to them and ask some questions about what they want.

I can take my whole team with me, since it is only me. When I have some work done I can also take it with me to show my customer in another visit. Or I can skip that if I am just bashing through some work.

Now I can spend the rest of the day getting on with my work.

Next day though, it is probably time for my first stand-up. Or more likely my sit down with a coffee.

I will get out my todo list, tick some things off (or slide my post-it notes across my desk) and then confirm what I plan to do for that day. Then I will go for a walk or get a second coffee, before setting up my Pomodoro timer and getting stuck into the work again.

But wait, maybe I should have a definition of done or some acceptance tests. I don’t think it will take a lot of debate with myself, but for each thing I plan to do I should know what standard to achieve and what success looks like.

Being the entire team of the one amigo though, I think I will sometimes say that I am starting a piece of work without a clear conclusion. I won’t call it a spike or an MVP, I’ll just call it something I want to do. If that is the case then I will define success for my sprint (the outcome in WOOP) and then decide which other things need a definition of done. Nobody is watching me, so I will create a clear outcome when it is worth testing against and an open outcome when I am exploring new ideas.

After several daily sit-downs and a bit of work, I will reach the end of the week. Should I reflect on what I have done?

Actually I do think that is valuable. I will run through what got done, what didn’t and what isn’t really working for me.

I also find it valuable to remind myself of what I have accomplished and maybe where I stuffed up (fell short of my expectations of myself and my goals), which goes beyond just reviewing what I have crossed off my todo list.

So I will do that. I will call it my weekly reflection with a glass of wine or sometimes just a pen and paper. The documentation will either be nothing or a work journal with notes in it. Probably the latter if I only have one glass of wine :).

My artefacts are now a todo list and a journal.

So I have ditched the roles and the mystique and I am left with:

  • A goal for the week (or whatever stroll length I choose);
  • An expectation that I will encounter impediments and a plan for some of them if they happen;
  • A way to make the work visible – todo list, Kanban board or whatever;
  • A daily sit down with a coffee and a review of my todo list;
  • A definition of done for some things;
  • Potentially a visit (or zoom chat) to someone I am delivering the work to; and
  • A reflection on how my week went.

That doesn’t sound too far off Scrum, if I forget about eating chickens and eggs or having people walk around calling themselves master or owner or things like that.

If that works for 1 person though, should it work for 2? If it works for 2, should it work for 3? At what point would I actually move from strolls to sprints and sitting down to standing up?

Can I have a team of 5 who set a weekly goal, stroll over to visit people when they want some input and sit down for tea or coffee each week?

When should I start using a burn down chart, a cycle time average or a scrum master? When should I use formal ceremonies? Is it just to do with the number 7?

In other words, is my decision to use Scrum vs the Stroll approach based on the number of people in the team, where a single stroller works alone but somewhere around 5 you need more processes and artefacts?

I don’t think it is just about numbers though, there must be a lot of other factors.

I work differently when building a course to when catching up for 1-1 coaching, so I would adapt my stroll framework a little depending on the kind of work I was doing. I also work differently when pairing with some people to when I pair with others.

I wonder now, what factors beyond the number of people I have in my team, should lead me to adopt a different path to creating my way of working?

Guessing is the absence of research

I saw a refreshing take on research recently and I thought I would capture it here.

The essential idea is that research is a way to reduce uncertainty, which then leads to 3 insights:

  • When we plan for the future we are usually guessing;
  • Guesses are necessary but they are also just the absence of research; and
  • Research then, is about changing the odds in our favour by making it more likely that our guesses are correct.

Research is good

If we want to be more confident in our guesses, then we can do some research. Doing this research will be good because it will change the odds of our current guesses being right from, say, 1 in 10 to 1 in 2.

It follows then, that research is worthwhile, if the cost of improving the odds in our favour (the effort of doing the research) is likely to be less than the cost of a bad guess (the pain caused by guessing wrong and then recovering from our mistake).
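As a back-of-the-envelope sketch of that trade-off (all the numbers are invented for illustration):

```python
# Rough expected-value sketch for deciding whether research is worth doing.
cost_of_bad_guess = 100_000   # pain of guessing wrong and then recovering
p_right_without = 0.1         # odds of a correct guess without research (1 in 10)
p_right_with = 0.5            # odds after research (1 in 2)
cost_of_research = 20_000     # effort of doing the research

expected_loss_without = (1 - p_right_without) * cost_of_bad_guess  # 90,000
expected_loss_with = (1 - p_right_with) * cost_of_bad_guess        # 50,000

# Research pays for itself if the reduction in expected loss exceeds its cost.
value_of_research = expected_loss_without - expected_loss_with     # 40,000
print(value_of_research > cost_of_research)                        # True
```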

Research is of little or no value when:

  • The research is more costly than going with a wild guess and seeing what happens;
  • We will ignore the research because we have already converted a guess into a commitment and we plan to proceed regardless of what we learn in our research; or
  • We are not doing research that improves the odds of the most critical guesses being right, because we are too busy researching easy but unimportant things.

Features are also guesses

Building a new feature creates an output (the feature is released to our customers) and then we hope (guess) that the feature will create a good outcome. Usually that outcome is to create some value to the user or some extra revenue for the company.

Since we are only guessing that the features will create the right outcome, there is a risk that we are wrong. We can mitigate this risk by making it cheap to create and test some elements of the potential new features (using approaches like MVP, MoSCoW or Kano).

So, where it is cheaper to build features and see what happens, then we should do that. When, on the other hand, it is cheaper to do some research before inflicting experimental new features on people, then we should do some research first.

The choice we make is not about either always doing research before building something or always spitting out features without a lot of research. The choice we make is about how to do the right mix of research and building of partial solutions, to most economically improve the odds of creating something valuable with our limited time.

Research must lead to learning

For research to be useful though, it must be used. For that to happen, people must take the research into account when they make decisions.

Based on that idea, the quality (and value) of research is not just related to how valid the research is, but also to two other factors:

  • The willingness of decision makers to change their minds when the research changes the odds of their existing guess being right; and
  • The ability of the researchers to interpret the result and explain the insights and implications to the decision makers.

I wonder how often we do research after we have already turned our guess into a commitment. Why do any research if it will not change what we decide to do or how we will do it?

The Animal Farm principle

OK, so research must be both effective in changing the odds of a guess being right and also useful to the decision maker.

We are not finished yet though because there is one more element that makes research valuable.

In the book Animal Farm, there is a phrase that “All animals are equal, but some are more equal than others.”

I guess that in terms of guesses, “All guesses are potentially scary, but some are a lot scarier than others.” Guessing which bottle contains poison is really scary. Guessing which book might be interesting to read next is not so scary.

Good research will be of the greatest value if it is focused on the most important guesses – the ones that represent the greatest danger if they are wrong.

If we know we can build a feature but do not really know that customers will use it, researching how to build the feature seems less useful to me than researching what the customer really needs, since it is more dangerous to build the wrong features than to struggle more than expected when building them.

I wonder how often we spend time deciding how to build something rather than learning whether people will care if we build it. I wonder if that sometimes happens because it is easier to understand whether we can build something than how it will be used, so we do what is easy rather than what is needed.

Research is not just about what feature to build next

So we have some idea of WHEN to do research, but there is also the question of WHERE or ON WHAT should we do our research.

Some product and design teams report which features they have released this week or this quarter, but this is reporting not research. They might then decide to do some research on which of those features have been adopted, or liked, or hated. Maybe they will use Pirate Metrics or HEART metrics to do this. The “research” done here might therefore be data gathering, with some anecdotal input too.

Where that is the case, all the points so far in this article apply to making the research valuable.

The information we gather might tell us whether to spend more effort on new versions of the feature we released, or fixes to bad guesses we made about how the feature would work or how people would use it.

Rather than just learning about guesses we made in the past though, research is probably better focused on what guesses we might make in the future.

We should be biased toward researching what people are trying to achieve (JTBD, pains, gains, problems) rather than just what features they are using. Understanding what people actually do when we are not watching them involves a lot of potential guesswork, and changing the odds that our guesses about them are right is a big win for us.

One more thought here though. Improving our guesses about what people are asking for seems less innovative than improving the far riskier guesses we make. The best research would be research that improves our odds of guessing correctly what people would love to have available, even if they did not think to ask for it.

If we can improve the odds that we will make a good guess about what people don’t yet know would be really useful, then we are moving into the realm of innovation and competitive advantage.

So the biggest advantage that research might have over building and testing features cheaply is that we can gain insights that others do not yet have, by researching what is happening in customers’ experiences beyond what they are using our product for.

Research is embedded in prioritisation

So much for prioritising the right research.

The other insight was that research should strongly influence our priority of other work.

In some organisations, people use RICE (Reach, Impact, Confidence, Effort) to prioritise their roadmap. Research is the C part of that equation, which links directly to the question of when to do research, because “changing the odds of our guess being right” can also be described as “increasing our confidence in our guesses” about all the other letters in the RICE acronym.

In theory, we can improve the confidence we have in our guesses by persuading each other, giving ourselves pep talks and so forth. Unfortunately, doing so really just increases our hope that our guess is right; it does not actually change the odds of us being right.

So if we use something like RICE scoring, then research fits into the prioritisation process very deliberately and very neatly.
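As a minimal sketch of how confidence enters a RICE score (using the commonly cited formula reach × impact × confidence ÷ effort; all the input numbers are invented):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Standard RICE formula; confidence is the research-driven part (0.0 to 1.0)."""
    return (reach * impact * confidence) / effort

# The same guess, before and after research improves the odds of it being right.
before = rice_score(reach=5000, impact=2, confidence=0.2, effort=4)  # 500.0
after = rice_score(reach=5000, impact=2, confidence=0.8, effort=4)   # 2000.0

print(before, after)  # the same idea, a very different place on the roadmap
```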

Conclusion

A guess is what we make in the absence of research. Research is what we do when we want to improve the odds that we can rely on our guess.

Based on that, the amount of research should be a function of:

  • The improvement in the odds of us being right about a guess that the research leads to;
  • The importance of the guess that we want to be more confident in;
  • The likelihood that we will actually act on what we learn (change our mind or move forward confidently); and
  • The effort or cost of doing the research.

Similarly, the way to measure the value of our research is to measure these same four things once the research is completed:

  • The change in the odds of being right, that the research resulted in;
  • Which of our most important guesses are better informed by the research;
  • Whether we actually acted on the research; and
  • The effort or cost of the research.

How do coaches stay interested in people?

Being a coach is about having good conversations.

Sure, there are times when you are reading documents, examining data or making observations, but the reason you are doing those things is to prepare for the conversation that you will be having with the person or team that you are coaching.

The conversations that you have are quite specific too. The topic is always the person you are talking to. It is always about them and how they make sense of the world.

Sure, you might be talking about changes in the organisation, challenges in managing stakeholders or the crushing pressure of a tight deadline, but the reason you are talking about these things is to come back to the same topic – the people you are coaching and how they can make sense of it all.

Once they can make sense of it all, they know what to do next and they know who they want to be. Once they know those things, your job is to get out of the way until it is worth having another conversation.

You better find people interesting

If that is true, then you can expect to spend a lot of time talking to people and talking exclusively about them and not yourself. But in fact you won’t even be doing the talking, you will be listening to them do the talking.

If you do not find those conversations interesting then it will suck to be a coach. You will constantly be stuck in conversations you find neither interesting nor energising, and that would really suck.

Just as bad, if you do not make these conversations count for those you speak to, then you will not be effective as a coach. If the core of coaching is having good conversations and you are not having good conversations, I guess that once again, it will suck to be a coach.

Ouch – I guess if you want to be a good coach, you better find the people you coach interesting. If not then you better find more interesting people to coach and those you were coaching better find a more useful coach.

Is coaching for me then?

I think that I am a good coach, but I am not someone who enjoys drama or long conversations. I skip over the long conversations in books when I read them. I watch TV with dialogue in it, but I get distracted quickly and start talking or fidgeting when there are long sections of dialogue. Perhaps I am unsophisticated, but that is how I am.

I wonder then – is coaching for me? Will I find it interesting if I do not enjoy endless ongoing dialogue? (spoiler alert – yes, but it took a while for me to learn exactly what it is that I love about coaching: what really makes it fun for me).

What makes coaching fun for me?

When I am at my best, I am wrapped up in solving a complex problem, unaware of the rest of the world while I remain engrossed in the mystery that I am unravelling. I am probably not a good communicator then, as I get lost in what I am absorbed by.

When I get stressed though, I prefer to jump into action without thinking much at all. I really do not like to stop to talk a lot when I am stressed, I want to be left on my own or I want to be acting my way through the stress. I am not a “talk it out kind of guy.”

I think these are traits that have been with me since I was young – happiest when I am fully engrossed in a problem, and relieved when I am stressed but can start to move to action.

These are some of the traits that people recognise in me, but they are NOT the traits that make me a good coach:

  • One of the worst things a coach can do is to start to ponder and solve the problems that someone raises when being coached. In fact, when the coach starts doing the thinking, the coach is no longer doing the coaching.
  • Another of the worst things a coach can do is to cut the client off, in order to move to action, because the coach is getting frustrated or bored with the way the client is talking about the problems they are facing.

When I first started coaching I thought that since these traits are part of who I am, they would also be the biggest impediments I would face when coaching.

They turned out not to be the biggest challenges for me though, possibly because I was aware of them. Maybe, but I think it is probably because I can maintain greater distance when hearing about other people’s problems than when I am solving my own puzzle, and possibly because I have a lot more patience with the things that stress my clients out than with the things that stress me out personally.

Either way, the biggest problem I had when I started coaching was getting stuck in what I would call “circular conversations.”

What I mean by circular conversations is the kind of discussions where we return to the same point again and again, and then run out of time for the discussion, without ever getting to either a new insight or a new committed step forward.

I found that I was interested in helping someone think for themselves, and I was asking a lot of questions and even listening a lot. But I was somehow trapping my client in a circular, almost Groundhog Day, discussion. At the end of half an hour I would be trying not to ask the same question and the client would be trying to answer honestly without thinking we had come to a dead end.

Actually we had come to a dead end and it was because I was missing something, not because the client was not ready to talk or because I did not want to listen.

What I was missing, I finally learned, was a process behind my coaching. I was having a conversation but I was not stepping back to observe where the conversation was and where it was heading, because I had no map for the conversation.

I like building a process, so I always liked the idea of a coaching model. But I am not always good at following a process. I would be in the conversation, so focused on what was being said, without knowing how to move forward, because I was not able to both listen and maintain a map of where we were.

Once I learned to listen to the other person while keeping a map in my mind, I found that the conversations were a lot more effective. I also found that they were a lot more interesting.

I resisted this at first because I was worried that using a cookie cutter process while not really being truly present to hear what the client was talking about would be transparent and un-empathetic. It wasn’t though, at least with individual coaching.

I did find it somewhat challenging with individual coaching to avoid being clunky as I switched between listening and checking my map, kind of like a driver new to using GPS.

But I found it far more challenging with group coaching, which surprised me. I found myself torn between the false dichotomy of getting people to adopt a new way of working, like Kanban, and really listening to them in order to help them find their own answers.

People told me that Shu Ha Ri is the way to go, but I worried that Shu meant telling them what to do rather than listening to their stories.

I was partly right about that. If you are more interested in the new way of working than you are in the people and their growth then you are a good scrum master, but I think you are missing a great journey as a coach. I certainly lose interest in new frameworks and chasing people to adopt new practices.

What I came to believe though, is that there is nothing wrong with Kanban, Scrum, SAFe, Disciplined Agile or even PRINCE2. There is nothing wrong with teaching people a new idea or a new way of working, as long as you do not mistake that for the core conversation of the coach – the listening part of coaching.

You can explain a new framework or way of working to people once you have listened to them, if it is relevant to them. You cannot really say that you are coaching if you are introducing your ideas to them and not listening to them.

So the model for coaching is not the same as the model you want them to use for working. The coaching model is about the conversation you have with people; the way of working can be the change that you are keen on them adopting once you understand where they are coming from.

At least for me, a basic coaching model or coaching arc makes a huge difference to the conversations I have. It is separate to and more useful to me than any solution I can offer the team. Yet, ironically the coaching model that I use is not something that really holds my interest for long.

Coaching frameworks are good, but the reason I use them is to get into the conversation that I think will help my client. So there we are, back into having the conversation.

What makes coaching interesting for me, day in day out, and what makes the people interesting to me, day in day out, is the unfolding stories that come out of our coaching conversations.

What a framework does for me is help me to listen better. This was the biggest breakthrough for me and the biggest challenge in learning to really coach.

I had to learn to use a structured way to listen. I needed to use it to ask questions, to reflect what people were saying and most importantly to let them explore their own thinking without me derailing them.

The real way I found to stay interested in the people I coach is to really listen to understand them. Not to listen to think about how to respond but simply listen to what is being said. Once I could do that, I found that the conversations that I found myself in were endlessly interesting and often surprising.

So I guess for me it is as simple as that. The way to stay interested as a coach is simply to really listen to what people say so you can reflect it back to them and watch in wonder as they go from turmoil to clarity and inertia to action.

That shift, from listening in order to work out how to help people to listening simply to understand what they are saying, is what now keeps me interested in the people I coach.

If we want to teach agile, we should be agile in our teaching

I was helping someone make some improvements in their team recently, to be more “agile.”

I made the point, almost apologetically, that we had changed direction a few times and iterated on what we were doing. I started with a workshop, then some coaching, then we had some team meetings and then we seemed to succeed.

My friend replied “I guess you have to be agile when you are a coach.”

I said yes and laughed, but then I thought about it.

Sometimes people say “let’s be agile” when they mean “let’s drop any pretence of planning or process and just charge forward.” This is not really a recipe for success, nor is it really agile. It is more like taking a bull-headed approach; charging forward like a bull, hoping you only collide with things that you can smash your way through.

What we did was to create a messy first draft of a plan and then put it into action. Once we started acting though, we kept the goal in mind while we reacted to what we learned in order to keep moving forward. More like a yacht tacking against the wind than a bull charging down the road.

Unlike a yacht though, we did not just steer ourselves; rather, we stopped to check with other people and started involving multiple individuals, allowing their interactions to guide us.

We did have some discipline in clarifying our goals, identifying and communicating dependencies and validating success, which helped a lot. In fact this allowed us to sense and respond to multiple new perspectives and to keep moving while we kept improving.

It was a bit messy, but it was successful because of the way we dealt with the mess to finally create some order. Anti-agile would be starting with order that results in mess, but agile is embracing the mess that exists and using it to create order.

So now I think we should say it without laughing.

If you want to teach people to be agile, you need to be agile in the way you teach them.

Does your team get involved in solving mysteries?

I was talking to someone about a request from a client to “look into something weird.” The client was not sure if something was a problem or not, so they raised it with someone they knew in the team and a couple of hours later the case was solved.

It got me thinking. Where do these odd requests fit in?

Sometimes people do not know how a product works or how to use it, so they ask for guidance – is that “a matter for the help desk”, is it evidence of a need for client training, or is it a hint that we need to improve our usability?

The requests often seem trivial, yet there is still a well-hidden hint of learning for the humble development team, a chance to better understand the context in which their clients operate. It might also be a breadcrumb on a trail to learning about their “jobs to be done.”

So a strange, unexpected request could be both a chance to deliver immediate value beyond the delivery of features and to understand where there is room for improvement in the product or system we support.

It could also be a pointless-seeming diversion from producing new features and improvements that are already sitting in what could be an endless backlog of work.

Customer collaboration

I used to think of this as “ongoing collaboration” with customers, but that seems to be going out of fashion now that we have chatbots that give the appearance of emerging as new life forms, ones that seem more interested in talking to customers than many teams of humans are.

Well, maybe that is not entirely true, but many teams today have split the “ongoing discussion with customers” from the “building of value for customers”. The people talking to customers help them with their daily confusion or needs, while the builder team builds new value from its own research or from the requests of others in the organisation.

I guess it does not matter who is talking to customers, or even if we are making the conversations more efficient with technology. What matters is that we are learning from them as well as helping them.

First-level conversations

Helpful staff often solve customer problems. This is great, but the same staff sometimes lack the ability to capture and share how they helped, and whether there is an opportunity to be proactive in the future.

I like to think that part of the “sense and respond” in an agile team is to somehow sense what customers are experiencing and synthesise this into new solutions and remedies for old solutions that are not working so well.

Let’s assume this is happening though. Let’s assume that someone talks to customers when they contact the organisation and that the team gains some insights from this.

Sometimes though, the client asks a question that we cannot answer, even if we look it up in our big book of team knowledge.

Not only that, but similar situations arise internally.

Good testers ask annoying questions that go beyond the scope of checking if a story meets its definition of done. They discover something odd or intriguing, or they might even discover a bug or odd feature in an unrelated part of the system.

Peer reviews of code and of stories can also highlight points of curiosity not related to the subject at hand – something that is neither helping us to code the specific story we are working on, nor helping us to break down a story in our backlog.

What should happen to these requests?

The deliberate ignorance strategy

Unstructured requests can involve an unknown amount of investigation. This is kind of like a detective investigating a crime before there is a clear mandate to do so.

I guess one approach to solving these mysteries is to simply ignore them so we can focus on our more concrete work and our existing commitments. This is kind of a focus on “plan and execute” rather than “sense and respond,” but for busy people this is a tempting option.

There are several well-tried approaches to clearing these mysteries out of the way. They each help us maintain our velocity, but at the cost of also maintaining our ignorance.

One trick is to just add vague things to our backlog and then move on. We can then say that an investigation or request is “on the list of things to look at”, knowing that we will not in fact ever have time to properly understand the issue. Even if it bubbles up again and comes to the top of the list, we will not understand the context that is needed to actually investigate it properly.

When I put it like that it seems like a sub-optimal approach, but I see teams doing it from time to time.

A better approach is to be honest and tell people that you are not going to prioritise the analysis of this mystery. Instead you will focus on your team goals.

Single-point curiosity

A slightly different approach is to have a volunteer take on the role of investigator. This volunteer can be the scrum master, product owner, triage officer, service manager or whatever they are called.

This single person can then choose how much time they will spend helping unravel mysteries, versus how much they will spend managing the “backlog” of things the team has already committed to work on.

However, some mysteries cannot be solved without someone technical getting involved. That technical person needs to look at log files, look at code, consult the runes or do something else that helps unravel the mystery.

Perhaps this work is called a spike then? That is what I used to call it, where spike meant “any timeboxed detective work done by the team.” We added this to our wall of work but did not put points on it, instead just committing a limited amount of time for specific people to experiment away. We did not wait until a future sprint, so it usually caused our velocity to drop a bit and we had to mention the spike in our stand-ups, which we were happy with.

But the term spike seems to mean something specific to a lot of people nowadays – they define it as “technical work needed to remove ambiguity from a story, create estimates or create a throwaway test of a possible solution before investing too much time.”

That means that we need to first come up with a story, prioritise it and then commission a spike, by which time the trail may have gone cold and we may not be able to solve the mystery.

Maybe call it an experiment then? Again no: the client is not a scientist and there is no hypothesis to test or disprove yet. We do not yet know our hypothesis.

So maybe just call it an investigation.

I am happy to commit time to investigations during a sprint and let my velocity drop, but I still think there is something missing. There should be both a resolution (or a documented failure to resolve) and a sharing of the knowledge gained with other team members. Doing this increases knowledge and future mystery-solving power – but again it distracts from the velocity and sprint goal focus.

A team of investigators

I remember a scrum master who reported to me put in place a “shield team” to protect the rest of the team from distractions caused by the support team, the business crew and me. Apparently my curiosity and requests for “a few minutes” risked wasting quite a bit of time.

The idea was that two people would volunteer (or be volunteered) to be the shield for the sprint. They would monitor things, cop requests and help the PO with investigations. They did not do so full time, but they had lower-priority stories to work on than the others, so they could drop them to jump into investigations.

That approach worked really well, for that team.

It did require maturity for the shield team to remember what to do:

  • Ask about the problem, knowing people did not fill out any template properly;
  • Be curious, or escalate with urgency if it is a symptom of a crisis;
  • Commit specific time to resolve and a specific question to answer;
  • Fix something or create a workaround and proper fix plan, especially if it is a “problem” and not an “incident/one-off query”; and
  • Capture the learning to share with others.

I like the idea of having a mystery-solving team: dropping our throughput of features so that members of the team can take the time to stop, smell the roses, unravel mysteries and solve problems that others did not realise were problems. It will slow the delivery of new features and bug fixes though.

What approach do you think your team should take here?

Good training stands out

I recently completed three courses on Coursera, each achieving the goal I set out with.

The first course was a Six Sigma course that I used to refresh my knowledge of something I am familiar with. It contained what I wanted to learn but was typical of old-fashioned e-learning (and face-to-face learning): a series of lectures, made up of a series of slides, which contained useful information, followed by extended reading and chat options to go further. I was able to absorb what I wanted but would not say it was awesome. This is how some corporate training is probably still structured – it does the job and you attend it as part of the job.

The other two were actually awesome, for different reasons. They were both fit for purpose and achieved their goals with flair.

The first of these was the Amazon Cloud Practitioner Essentials course. In theory, this one had way too much information to absorb, because it covered the entire program of information needed to understand the Amazon cloud as a user or customer. However it tackled this challenge by doing a great job of introducing the essentials (as promised), with links to the detail. In addition:

  • The three presenters/trainers were extremely engaging. They were passionate, even about boring technical topics, they came across as humble and friendly, and they delivered the training with the professionalism of paid actors. As an experienced trainer I was really impressed with their ability to engage with and communicate the material.
  • The slick nature of the videos achieved another goal: giving the impression that AWS is clearly the way to go, without mentioning how other cloud service providers might handle the generic content.
  • Statements were backed up by evidence and further reading, but they were also delivered with meaningful analogies and examples to make them easy to understand.

This course is ideal if you want to absorb a lot of technical and “factual” information, such as how to prepare for the related certification in cloud practitioner-ness.

The third course was the first of four courses on being effective, personally and at work. It was called Success. This one covered a potentially abstract and very personal topic, and it did so by taking a very different path from the other two courses:

  • The lecturer was on his own, with no swapping of presenters between videos, and he was not as smooth as the Amazon crew. However, he presented the information with real credibility. I believe that he did so because:
    • He was really clear about what was his (carefully considered and expertly driven) opinion and personal experience.
    • He linked to credible sources of research and evidence, while keeping it light and engaging, so it seemed like a conversation rather than a guru telling me what the correct answer was.
    • He framed the lessons with guidance but did not give the answers; rather, he asked the questions that caused me to really consider what I thought about the topic.
  • Each lecture provided a frame to consider a topic but did not give the answer. This suited the questions of “what do you think success is?” and “how would you achieve your goal?” really well.
    • Each lecture had a self-assessment linked to a structured survey or set of questions.
    • Each self-assessment was followed up with guidance on how to interpret and apply the results, and how others might have approached it.
    • There were opportunities to post thoughts and review those of others. This was a little limited because of the asynchronous nature of the course, and because it was open to all and an old-ish course, so the answers were sometimes not well considered.
    • However, the assignments involved sharing your personal thoughts and then giving others feedback on theirs. This was interesting because it meant that I was both encouraged to really consider my perspective and then surprised by the difference in the perspectives of others.
  • The material was presented simply but had room for really complex thinking if you took the time to do it. This meant that the course could be taken by someone young and inexperienced or old and very experienced. I think you could actually do this course every few years and reflect on your growth.

Both of the great courses shared some qualities: they were engaging and simple to absorb, so my cognitive energy was focused on the material rather than on trying to work out what was going on. Both courses also created a sense of curiosity, both during the course and for further learning afterwards.

Each also had very different strengths that suited its purpose. I am not sure the strengths would have translated as well from one course to the other.

I learned a lot from these courses and enjoyed the journey. They also provided a great reminder that I should keep working on my own craft: lifting my game at engaging people in learning, and honing my approach to suit a clear goal that those I help can achieve, in a way that creates learning and satisfaction and increases their curiosity to continue learning.

The practitioners are tough acts to follow, but they are inspiring artisans rather than intimidating experts. And of course, the courses were also great learning in their own right.