Daily reflections for positive change

I just completed a course on positive psychology. More accurately, I just completed a Coursera specialization made up of five courses on positive psychology.

It was a great specialization, with solid theory and a lot of meaningful practice. It was full of small things you can do to improve your life and your coaching of others.

An exercise that I performed in the final course

One of the exercises was to create a “testable positive intervention” for myself.

In order to do that, I had to identify something about myself that I wanted to improve. To do this I used a list of “shadow strengths” – the overuse, underuse and absence of strengths.

Once I identified an area to improve, I needed to:

  • Measure my current performance/happiness;
  • Do some intervening with myself; and
  • Measure my new level of performance/happiness to see if it had an impact.

The result was surprisingly good.

What did I want to improve?

My goal was to improve two things related to a shadow strength of ingratiating, which is an example of overusing appreciation. I reflected on some surprising confusion about expectations and what I thought were agreed goals or actions, and how this might have been related to appreciating the good without digging into the things I did not appreciate, or the things that needed to be done.

The two goals I landed on were:

  • Set better, clearer expectations and increase accountability:
    • Set clear expectations of myself and hold myself accountable to them;
    • Communicate these expectations better to others; and
    • Be more explicit in my expectations of others and where I might disagree with things they say, do, or plan to do.
  • Pause more when in a conversation in order to listen instead of talk.
    • Specifically, to pause for three seconds after talking for no more than 40 seconds.
    • I found this relevant to the first goal, because the lack of silence meant a potential lack of shared understanding of expectations.

What did the intervention look like?

A standard to aspire to

One of the practices that the course recommended was to leverage a strength to build improvements.

One of my “signature strengths” is “Genuineness, Authenticity and Honesty.” I decided to use this because, if I claim to be honest and authentic, then it should follow that I am also communicating my views clearly.

Also – I think my authentic self is a good team player, which would suggest that I can listen to others and that I can communicate authentically. If this is the case then I am not trying to become a different person with the above goals, but rather being the same person in the moment that I want to be all the time.

So now I have a positive standard to work towards – someone who has authentic conversations and sets clear expectations – me already on a good day and maybe not me when I fall short of who I want to be.

Daily measures and observations

I used the well known “3 good things today” exercise which, not surprisingly, involves reflecting on three good things that happened each day. This is a great exercise, but it does not always highlight the gaps I am working on, or the progress I am making on them. However, it kept me focused on good outcomes and positive observations.

I complemented that practice with a report card for the day on my successes. This is designed to remind me of successes, but also to get me to focus on the situations where success was possible. On most days I had some successes (yay) and a couple of misses. This tool got me to highlight when I did not feel I was successful, or was not happy with “my involvement and my challenges.” It worked well because it combined the expectation that I would be successful with concrete examples from my day to check whether I hit my goals.

Planning to observe and practice in the moment

Now I had a daily reflection to review my success, or lack of success, in adopting my improvements.

What I also needed was a way to actually observe myself in the moment so I could collect the information to reflect on. I also needed a chance to pick when to actually try to change my habits in the moment.

To do this I co-opted a set of questions from the book Presence-Based Coaching, which I have previously used in creating habit stories for myself and others.

I started the day with a todo list (I use a bullet journal approach) and then selected 1-2 meetings for the day when I would focus on applying my self improvement. At the beginning of each meeting I would check the following questions and try to be aware of them for the meeting:

  • In this moment, what is driving my choices? (“in this moment” could mean the conversation I am having or the meeting I am starting);
  • Who am I right now (or what do I see as my role here)? How would I act if that was who I am?
  • What am I actually doing?

Then at the end of the meeting I noted the answers to the same questions, restated in the past tense. Then I took a quick note of how I felt about it.

This data (a set of rough notes) then gave me something to reflect on at the end of the day, when I did my accomplishments report card. I think the act of reminding myself of these questions also made me more likely to push myself to improve.

A rapid reflection cycle

So the whole cycle looked like this:

  1. A bullet list at the start of the day, with a cup of coffee and a note about a couple of meetings to focus on when practicing my better practices.
  2. A reminder before a couple of those meetings to act better, with a note afterwards on how I went.
  3. A reflection at the end of the day with the accomplishment report card.
  4. A quick follow-on with a list of 3 good things from the day.
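For anyone who likes to keep such notes digitally, the daily cycle above can be sketched as a tiny template. This is purely illustrative; the field names and example entries are my own inventions, not from any tool or the course:

```python
from dataclasses import dataclass, field

@dataclass
class DailyReflection:
    """One day's notes in the rapid reflection cycle (illustrative only)."""
    focus_meetings: list[str]                                    # 1-2 meetings chosen for deliberate practice
    meeting_notes: dict[str, str] = field(default_factory=dict)  # how each focus meeting went
    report_card: list[str] = field(default_factory=list)         # end-of-day successes and misses
    three_good_things: list[str] = field(default_factory=list)   # closing gratitude list

# A made-up example of one day's entries:
day = DailyReflection(focus_meetings=["sprint planning", "1-1 with Sam"])
day.meeting_notes["sprint planning"] = "Paused after speaking; restated expectations clearly."
day.report_card.append("Hit: stated my expectation explicitly before agreeing.")
day.three_good_things += ["Good walk", "Clear plan agreed", "Helpful feedback"]
```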

Measuring the result after 2 weeks

I kept all the daily reflections in a Google document, so I also reflected briefly at the end of each week. This gave me a qualitative view of my progress.

The course also included a couple of self-reflection surveys. I scored myself on these at the beginning and end of the exercise. There was a shift here too, though I cannot say whether it is permanent or temporary.

Overall I noticed quite a shift, but I still have to turn things into a habit, so I guess that is my next step.

I think I will stick with these two simple goals for another couple of weeks before I improve anything else. When I do move to the next improvement I will go back to basics and design a new routine/intervention.

The approach worked really well for me. I think the use of a strength as a standard and thing to leverage worked and so did the ongoing focus of my attention.

Scrum for one? Not so sure.

I work with a team who cancelled their showcase.

This was due to a combination of holidays, vampire infestations and other one-off occurrences, which left only one developer available for the sprint. The developer and the stakeholders did not want a bunch of people all looking at the poor, single developer and asking what he had been up to during the sprint.

I guess that makes sense; after all, Scrum is a team sport. A showcase and retro with one person seems a bit over the top. In fact, even with 2-3 people the overhead of Scrum seems excessive.

So I got to thinking, what would I utilize if I was on my own? Would I have a sprint? Would I appoint myself PO and Scrum Master and have a backlog that I was in charge of? Would I have a daily stand-up to share the impediments I faced with myself?

I think I would draw the line at having a meeting with the one amigo to break my stories down together, but I would probably still want to track my work.

I know people who use a Kanban wall instead of a todo list to keep on top of their work. I think I might stick with the todo list but the value of focus and transparency still counts.

While I would certainly track my work visually, I would not bother at all with any of the Scrum roles. I would just be me. So work visibility is in and role definitions are gone.

Would I have a sprint? I guess it depends what kind of work I am doing.

When I do creative work I often use the Pomodoro Technique, which is essentially 25 minute sprints with 5 minute breaks, and then going for a walk or getting a bite to eat after a couple of hours. That is kind of sprinting, but it is not a feedback cycle outside the single pieces of work.

But maybe a weekly goal setting session is a good idea.

Actually, I have found a lot of success with WOOP based goal setting for big pieces of work. That is where I set an optimistic goal (a wish) and then imagine how good it will feel to succeed (the outcome). Then I imagine the impediments that will stop me achieving the goal (the obstacles) and finally plan what I could do if the obstacles occur (the plan).
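WOOP is structured enough to capture in a small template. The sketch below is just one way of writing it down; all the names and example entries are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Woop:
    """A weekly WOOP entry (illustrative sketch, names are my own)."""
    wish: str               # an optimistic goal for the week
    outcome: str            # how it will feel to succeed
    obstacles: list[str]    # the impediments likely to get in the way
    plans: dict[str, str]   # "if this obstacle occurs, then do that" pairs

week = Woop(
    wish="Draft the whole workshop outline",
    outcome="Confident walking into Monday with a clear structure",
    obstacles=["urgent emails", "meeting overruns"],
    plans={
        "urgent emails": "batch them into one afternoon slot",
        "meeting overruns": "decline or shorten anything without an agenda",
    },
)

# The spirit of WOOP: every obstacle you imagine should come with an if-then plan.
assert all(obstacle in week.plans for obstacle in week.obstacles)
```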

So I would start my week with a goal and with a plan to respond to my inevitable distractions.

When I put it like that I wonder why some Scrum teams begin the sprint with a goal like “we will complete the list of stories” when they might be better off agreeing a goal, imagining success and then predicting and planning for the likely threats to success. Anyway, for my one person team I will focus on having a goal that is more than just completed todo items.

But I won’t have a formal planning meeting. I can replace that with a cup of coffee and some goal setting, followed by a walk or a snack.

Is that really enough though?

Maybe sometimes I should do some continuous discovery as well. Instead of just assuming I know what people want, maybe I can stroll over to talk to them and ask some questions about what they want.

I can take my whole team with me, since it is only me. When I have some work done I can also take it with me to show my customer in another visit. Or I can skip that if I am just bashing through some work.

Now I can spend the rest of the day getting on with my work.

Next day though, it is probably time for my first stand-up. Or more likely my sit down with a coffee.

I will get out my todo list, tick some things off (or slide my post-it notes across my desk) and then confirm what I plan to do for that day. Then I will go for a walk or get a second coffee, before setting up my Pomodoro timer and getting stuck into the work again.

But wait, maybe I should have a definition of done or some acceptance tests. I don’t think it will take a lot of debate with myself, but for each thing I plan to do I should know what standard to achieve and what success looks like.

Being the entire team of the one amigo though, I think I will sometimes say that I am starting a piece of work without a clear conclusion. I won’t call it a spike or an MVP, I’ll just call it something I want to do. If that is the case then I will define success for my sprint (the outcome in WOOP) and then decide which other things need a definition of done. Nobody is watching me, so I will create a clear outcome when it is worth testing against and an open outcome when I am exploring new ideas.

After several daily sit-downs and a bit of work, I will reach the end of the week. Should I reflect on what I have done?

Actually I do think that is valuable. I will run through what got done, what didn’t and what isn’t really working for me.

I also find it valuable to remind myself of what I have accomplished and maybe where I stuffed up (fell short of my expectations of myself and my goals), which goes beyond just reviewing what I have crossed off my todo list.

So I will do that. I will call it my weekly reflection with a glass of wine or sometimes just a pen and paper. The documentation will either be nothing or a work journal with notes in it. Probably the latter if I only have one glass of wine :).

My artefacts are now a todo list and a journal.

So I have ditched the roles and the mystique and I am left with:

  • A goal for the week (or whatever stroll length I choose);
  • An expectation that I will encounter impediments and a plan for some of them if they happen;
  • A way to make the work visible – todo list, Kanban board or whatever;
  • A daily sit down with a coffee and a review of my todo list;
  • A definition of done for some things;
  • Potentially a visit (or zoom chat) to someone I am delivering the work to; and
  • A reflection on how my week went.

That doesn’t sound too far off Scrum, if I forget about eating chickens and eggs or having people walk around calling themselves master or owner or things like that.

If that works for 1 person though, should it work for 2? If it works for 2, should it work for 3? At what point would I actually move from strolls to sprints and sitting down to standing up?

Can I have a team of 5 who set a weekly goal, stroll over to visit people when they want some input and sit down for tea or coffee each week?

When should I start using a burn down chart, a cycle time average or a scrum master? When should I use formal ceremonies? Is it just to do with the number 7?

In other words, is my decision to use Scrum vs the stroll approach based on the number of people in the team; where a single stroller works alone, but somewhere around 5 you need more processes and artefacts?

I don’t think it is just about numbers though, there must be a lot of other factors.

I work differently when building a course than when catching up for 1-1 coaching. So I would adapt my stroll framework a little depending on the kind of work I was doing. I also work differently when pairing with some people than when I pair with others.

I wonder now, what factors beyond the number of people I have in my team, should lead me to adopt a different path to creating my way of working?

Guessing is the absence of research

I saw a refreshing take on research recently and I thought I would capture it here.

The essential idea is that research is a way to reduce uncertainty, which then leads to 3 insights:

  • When we plan for the future we are usually guessing;
  • Guesses are necessary but they are also just the absence of research; and
  • Research then, is about changing the odds in our favour by making it more likely that our guesses are correct.

Research is good

If we want to be more confident in our guesses, then we can do some research. Doing this research will be good because it will change the odds of our current guesses being right from, say, 1 in 10 to 1 in 2.

It follows then, that research is worthwhile, if the cost of improving the odds in our favour (the effort of doing the research) is likely to be less than the cost of a bad guess (the pain caused by guessing wrong and then recovering from our mistake).
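That trade-off can be written down as a back-of-the-envelope expected-value check. The function below is just a sketch of the reasoning, with made-up numbers matching the 1-in-10 to 1-in-2 example above:

```python
def research_is_worthwhile(p_right_before: float,
                           p_right_after: float,
                           cost_of_bad_guess: float,
                           cost_of_research: float) -> bool:
    """Research pays off when the expected pain it removes exceeds its cost."""
    expected_saving = (p_right_after - p_right_before) * cost_of_bad_guess
    return expected_saving > cost_of_research

# Odds improve from 1 in 10 to 1 in 2, and a bad guess costs 100 units of pain:
print(research_is_worthwhile(0.1, 0.5, cost_of_bad_guess=100, cost_of_research=20))  # True
# The same improvement is not worth 50 units of research effort:
print(research_is_worthwhile(0.1, 0.5, cost_of_bad_guess=100, cost_of_research=50))  # False
```

The units of “pain” and “effort” are whatever currency you like, as long as both sides of the comparison use the same one.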

Research is of little or no value when:

  • The research is more costly than going with a wild guess and seeing what happens;
  • We will ignore the research because we have already converted a guess into a commitment and we plan to proceed regardless of what we learn in our research; or
  • We are not doing research that improves the odds of the most critical guesses being right, because we are too busy researching easy but unimportant things.

Features are also guesses

Building a new feature creates an output (the feature is released to our customers) and then we hope (guess) that the feature will create a good outcome. Usually that outcome is to create some value to the user or some extra revenue for the company.

Since we are only guessing that the features will create the right outcome, there is a risk that we are wrong. We can mitigate this risk by making it cheap to create and test some elements of the potential new features (using approaches like MVP, MoSCoW or Kano).

So, where it is cheaper to build features and see what happens, then we should do that. When, on the other hand, it is cheaper to do some research before inflicting experimental new features on people, then we should do some research first.

The choice we make is not about either always doing research before building something or always spitting out features without a lot of research. The choice we make is about how to do the right mix of research and building of partial solutions to most economically improve the odds of creating something valuable with our limited time.

Research must lead to learning

For research to be useful though, it must be used. For that to happen, people must take the research into account when they make decisions.

Based on that idea, the quality (and value) of research is not just related to how valid the research is, but also two other factors:

  • The willingness of decision makers to change their minds when the research changes the odds of their existing guess being right; and
  • The ability of the researchers to interpret the result and explain the insights and implications to the decision makers.

I wonder how often we do research after we have already turned our guess into a commitment. Why do any research if it will not change what we decide to do or how we will do it?

The animal farm principle

OK, so research must be both effective in changing the odds of a guess being right and also useful to the decision maker.

We are not finished yet though because there is one more element that makes research valuable.

In the book Animal Farm, there is a phrase that “All animals are equal, but some are more equal than others.”

I guess in terms of guesses, “All guesses are potentially scary, but some are a lot scarier than others.” Guessing which bottle contains poison is really scary. Guessing which book might be interesting to read next is not so scary.

Good research will be of the greatest value if it is focused on the most important guesses – the ones that represent the greatest danger if they are wrong.

If we know we can build a feature but do not really know that customers will use it, researching how to build the feature seems less useful to me than researching what the customer really needs, since it is more dangerous to build the wrong features than to struggle more than expected when building them.

I wonder how often we spend time deciding how to build something rather than learning whether people will care if we build it. I wonder if that sometimes happens because it is easier to understand whether we can build something than how it will be used, so we do what is easy rather than what is needed.

Research is not just about what feature to build next

So we have some idea of WHEN to do research, but there is also the question of WHERE or ON WHAT should we do our research.

Some product and design teams report which features they have released this week or this quarter, but this is reporting not research. They might then decide to do some research on which of those features have been adopted, or liked, or hated. Maybe they will use Pirate Metrics or HEART metrics to do this. The “research” done here might therefore be data gathering, with some anecdotal input too.

Where that is the case, all the points so far in this article apply to making the research valuable.

The information we gather might tell us whether to spend more effort on new versions of the feature we released, or fixes to bad guesses we made about how the feature would work or how people would use it.

Rather than just learning about guesses we made in the past though, research is probably better focused on what guesses we might make in the future.

We should be biased toward researching what people are trying to achieve (jobs to be done, pains, gains, problems) rather than just what features they are using. Understanding what people actually do when we are not watching them involves a lot of potential guesswork, and changing the odds that our guesses about them are right is a big win for us.

One more thought here though. Improving our guesses about what people are asking for seems less innovative than the far riskier guesses we make. The best research would be research that improves our odds of guessing correctly what people would love to have available, even if they did not think to ask for it.

If we can improve the odds that we will make a good guess about what people don’t yet know would be really useful, then we are moving into the realm of innovation and competitive advantage.

So the biggest advantage that research might have over building and testing features cheaply is that we can gain insights that others do not yet have, by researching what is happening in customers’ experiences beyond what they are using our product for.

Research is embedded in prioritisation

So much for prioritising the right research.

The other insight was that research should strongly influence our priority of other work.

In some organisations, people use RICE to prioritise their roadmap. Research drives the C (Confidence) part of that equation, which links directly to the question of when to do research, because “changing the odds of our guess being right” can also be described as “increasing our confidence in our guesses” about all the other letters in the RICE acronym.

In theory, we can improve the confidence we have in our guesses through persuading each other, giving ourselves pep talks and so forth. Unfortunately doing so really just increases our hope that our guess is right, it does not actually change the odds in our favour of us being right.

So if we use something like RICE scoring, then research fits into the prioritisation process very deliberately and very neatly.
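As a sketch of how that fits together: the standard RICE score is Reach × Impact × Confidence ÷ Effort, so research that genuinely raises Confidence moves an item up the list. The numbers below are invented purely to illustrate:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Standard RICE prioritisation score; research moves the confidence term."""
    return reach * impact * confidence / effort

# A guess we are only 50% confident in...
before = rice_score(reach=1000, impact=2, confidence=0.5, effort=4)
# ...after research raises our confidence that the guess is right:
after = rice_score(reach=1000, impact=2, confidence=0.8, effort=4)
print(before, after)  # 250.0 400.0
```

The key point is that persuasion and pep talks do not change the inputs to this formula; only research that actually shifts the odds does.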

Conclusion

A guess is when we have an absence of research. Research is done when we want to improve the odds that we can rely on our guess.

Based on that, the amount of research should be a function of:

  • The improvement in the odds of us being right about a guess that the research leads to;
  • The importance of the guess that we want to be more confident in;
  • The likelihood that we will actually act on what we learn (change our mind or move forward confidently); and
  • The effort or cost of doing the research.

Similarly, the way to measure the value of our research is to measure these same four things once the research is completed:

  • The change in the odds of being right that the research resulted in;
  • Which of our most important guesses are better informed by the research;
  • Whether we actually acted on the research; and
  • The effort or cost of the research.

How do coaches stay interested in people?

Being a coach is about having good conversations.

Sure, there are times when you are reading documents, examining data or making observations, but the reason you are doing those things is to prepare for the conversation that you will be having with the person or team that you are coaching.

The conversations that you have are quite specific too. The topic is always the person you are talking to. It is always about them and how they make sense of the world.

Sure, you might be talking about changes in the organisation, challenges in managing stakeholders or the crushing pressure of a tight deadline, but the reason you are talking about these things is to come back to the same topic – the people you are coaching and how they can make sense of it all.

Once they can make sense of it all, they know what to do next and they know who they want to be. Once they know those things, your job is to get out of the way until it is worth having another conversation.

You better find people interesting

If that is true, then you can expect to spend a lot of time talking to people and talking exclusively about them and not yourself. But in fact you won’t even be doing the talking, you will be listening to them do the talking.

If you do not find those conversations interesting then it will suck to be a coach. You will constantly be stuck in conversations you find neither interesting nor energising, and that would really suck.

Just as bad, if you do not make these conversations count for those you speak to, then you will not be effective as a coach. If the core of coaching is having good conversations and you are not having good conversations, I guess that once again, it will suck to be a coach.

Ouch – I guess if you want to be a good coach, you better find the people you coach interesting. If not then you better find more interesting people to coach and those you were coaching better find a more useful coach.

Is coaching for me then?

I think that I am a good coach, but I am not someone who enjoys drama or long conversations. I skip over the long conversations in books when I read them. I watch TV with dialogue in it, but I get distracted quickly and start talking or fidgeting when there are long sections of dialogue. Perhaps I am unsophisticated, but that is how I am.

I wonder then – is coaching for me? Will I find it interesting if I do not enjoy endless ongoing dialogue? (spoiler alert – yes, but it took a while for me to learn exactly what it is that I love about coaching: what really makes it fun for me).

What makes coaching fun for me?

When I am at my best, I am wrapped up in solving a complex problem, unaware of the rest of the world while I remain engrossed in the mystery that I am unravelling. I am probably not a good communicator as I get lost in what I am absorbed by.

When I get stressed though, I prefer to jump into action without thinking much at all. I really do not like to stop to talk a lot when I am stressed, I want to be left on my own or I want to be acting my way through the stress. I am not a “talk it out kind of guy.”

I think these are traits that have been with me since I was young – happiest when I am fully engrossed in a problem, and relieved when I am stressed but can start to move to action.

These are some of the traits that people recognise in me, but they are NOT the traits that make me a good coach:

  • One of the worst things a coach can do is to start to ponder and solve the problems that someone raises when being coached. In fact, when the coach starts doing the thinking, the coach is no longer doing the coaching.
  • Another of the worst things a coach can do is to cut the client off, in order to move to action, because the coach is getting frustrated or bored with the way the client is talking about the problems they are facing.

When I first started coaching I thought that since these traits are part of who I am, they would also be the biggest impediments I would face when coaching.

They turned out not to be the biggest challenges for me though, possibly because I was aware of them. Maybe, but I think it is probably because I can maintain greater distance when hearing about other people’s problems than when I am solving my own puzzle, and possibly because I have a lot more patience with the things that stress my clients out than with the things that stress me out personally.

Either way, the biggest problem I had when I started coaching was getting stuck in what I would call “circular conversations.”

What I mean by circular conversations is the kind of discussions where we return to the same point again and again, and then run out of time for the discussion, without ever getting to either a new insight or a new committed step forward.

I found that I was interested in helping someone think for themselves, and I was asking a lot of questions and even listening a lot. But I was somehow trapping my client in a circular, almost Groundhog Day, discussion. At the end of half an hour I would be trying not to ask the same question and the client would be trying to answer honestly without thinking we had come to a dead end.

Actually we had come to a dead end and it was because I was missing something, not because the client was not ready to talk or because I did not want to listen.

What I was missing, I finally learned, was a process behind my coaching. I was having a conversation, but I was not stepping back to observe where the conversation was at and where it was heading, because I had no map for the conversation.

I like building a process, so I always liked the idea of a coaching model. But I am not always good at following a process. I would be in the conversation, so focused on what was being said, without knowing how to move forward, because I was not able to both listen and maintain a map of where we were.

Once I learned to listen to the other person while keeping a map in my mind, I found that the conversations were a lot more effective. I also found that they were a lot more interesting.

I resisted this at first because I was worried that using a cookie cutter process while not really being truly present to hear what the client was talking about would be transparent and un-empathetic. It wasn’t though, at least with individual coaching.

I did find it somewhat challenging with individual coaching to avoid being clunky as I switched between listening and checking my map, kind of like a driver new to using GPS.

But I found it far more challenging with group coaching, which surprised me. I found myself torn between the false dichotomy of getting people to adopt a new way of working, like Kanban, and really listening to them in order to help them find their own answers.

People told me that Shu Ha Ri is the way to go, but I worried that Shu meant telling them what to do rather than listening to their stories.

I was partly right about that. If you are more interested in the new way of working than you are in the people and their growth, then you are a good Scrum Master, but I think you are missing a great journey as a coach. I certainly lose interest in new frameworks and chasing people to adopt new practices.

What I came to believe though, is that there is nothing wrong with Kanban, Scrum, SAFe, Disciplined Agile or even PRINCE2. There is nothing wrong with teaching people a new idea or a new way of working, as long as you do not mistake that for the core conversation of the coach – the listening part of coaching.

You can explain a new framework or way of working to people once you have listened to them, if it is relevant to them. You cannot really say that you are coaching if you are introducing your ideas to them and not listening to them.

So the model for coaching is not the same as the model you want them to use for working. The coaching model is about the conversation you have with people; the way of working can be a change that you are keen for them to adopt once you understand where they are coming from.

At least for me, a basic coaching model or coaching arc makes a huge difference to the conversations I have. It is separate to and more useful to me than any solution I can offer the team. Yet, ironically the coaching model that I use is not something that really holds my interest for long.

Coaching frameworks are good, but the reason I use them is to get into the conversation that I think will help my client. So there we are, back into having the conversation.

What makes coaching interesting for me, day in day out, and what makes the people interesting to me, day in day out, is the unfolding stories that come out of our coaching conversations.

What a framework does for me is help me to listen better. This was the biggest breakthrough for me and the biggest challenge in learning to really coach.

I had to learn to use a structured way to listen. I needed to use it to ask questions, to reflect what people were saying and most importantly to let them explore their own thinking without me derailing them.

The real way I found to stay interested in the people I coach is to really listen to understand them. Not to listen to think about how to respond but simply listen to what is being said. Once I could do that, I found that the conversations that I found myself in were endlessly interesting and often surprising.

So I guess for me it is as simple as that. The way to stay interested as a coach is simply to really listen to what people say so you can reflect it back to them and watch in wonder as they go from turmoil to clarity and inertia to action.

What I found created the most interesting coaching conversations, and the most effective, was to stop listening to try to work out how to help people and to start listening simply to understand what they were saying.

So that is now what keeps me interested in the people I coach.

If we want to teach agile, we should be agile in our teaching

I was helping someone make some improvements in their team recently, to be more “agile.”

I made the point, almost apologetically, that we had changed direction a few times and iterated in what we were doing. I started with a workshop, then some coaching, then we had some team meetings and then we seemed to succeed.

My friend replied “I guess you have to be agile when you are a coach.”

I said yes and laughed, but then I thought about it.

Sometimes people say “let’s be agile” when they mean “let’s drop any pretence of planning or process and just charge forward.” This is not really a recipe for success and nor is it really agile. It is more like taking a “bull headed approach” to success; charging forward like a bull, hoping you only collide with things that you can smash your way through.

What we did was to create a messy first draft of a plan and then put that into action. Once we started acting though, we kept the goal in mind while we reacted to what we learned in order to keep moving forward. More like a yacht tacking against the wind, than a bull charging down the road.

Unlike a yacht though, we did not just steer ourselves, rather we stopped to check with other people and started involving multiple individuals, allowing their interaction to guide us.

We did have some discipline in clarifying our goals, identifying and communicating dependencies and validating success, which helped a lot. In fact, this allowed us to sense and respond to multiple new perspectives and to keep moving while we kept improving.

It was a bit messy, but it was successful because of the way we dealt with the mess to finally create some order. Anti-agile would be starting with order that results in mess, but agile is embracing the mess that exists and using it to create order.

So now I think we should say it without laughing.

If you want to teach people to be agile, you need to be agile in the way you teach them.

Does your team get involved in solving mysteries?

I was talking to someone about a request from a client to “look into something weird.” The client was not sure if something was a problem or not, so they raised it with someone they knew in the team and a couple of hours later the case was solved.

It got me thinking. Where do these odd requests fit in?

Sometimes people do not know how a product works or how to use it, so they ask for guidance – is that “a matter for the help desk”, is it evidence of a need for client training or is it a hint that we need to improve our usability?

The requests often seem trivial, yet there is still a well hidden hint of learning for the humble development team to better understand the context in which their clients operate. It might also be a breadcrumb on a trail to learn about their “jobs to be done.”

So a strange, unexpected request could be both a chance to deliver immediate value beyond the delivery of features and to understand where there is room for improvement in the product or system we support.

It could also be a pointless seeming diversion from producing new features and improvements that are already in what could be an endless backlog of work.

Customer collaboration

I used to think of this as “ongoing collaboration” with customers, but that seems to be going out of fashion now that we have chatbots that give the appearance of emerging as new life forms – ones that are actually more interested in talking to customers than many teams of humans are.

Well, maybe that is not entirely true, but many teams today have split the “ongoing discussion with customers” from the “building of value for customers”. The people talking to customers help them with their daily confusion or needs, while the builder team builds new value from its own research or the requests of others in the organisation.

I guess it does not matter who is talking to customers, or even if we are making the conversations more efficient with technology. What matters is that we are learning from them as well as helping them.

1st level conversations

Helpful staff often solve customer problems. This is great, but the same staff sometimes lack the ability to capture and share how they helped and whether there is an opportunity to be pro-active in the future.

I like to think that part of the “sense and respond” in an agile team is to somehow sense what customers are experiencing and synthesise this into new solutions and remedies for old solutions that are not working so well.

Let’s assume this is happening though. Let’s assume that someone talks to customers when they contact the organisation and that the team gains some insights from this.

Sometimes though, the client asks a question that we cannot answer, even if we look it up in our big book of team knowledge.

Not only that, but similar situations arise internally.

Good testers ask annoying questions that go beyond the scope of checking if a story meets its definition of done. They discover something odd or intriguing, or they might even discover a bug or odd feature in an unrelated part of the system.

Peer reviews of code and of stories can also highlight points of curiosity not related to the subject at hand – something that is neither helping us to code the specific story we are working on, nor helping us to break down a story in our backlog.

What should happen to these requests?

The deliberate ignorance strategy

Unstructured requests can involve an unknown amount of investigation. This is kind of like a detective investigating a crime before there is a clear mandate to do so.

I guess one approach to solving these mysteries is to simply ignore them so we can focus on our more concrete work and our existing commitments. This is kind of a focus on “plan and execute” rather than “sense and respond,” but for busy people this is a tempting option.

There are several well-tried approaches to clearing these mysteries out of the way. They each help us maintain our velocity, but at the cost of also maintaining our ignorance.

One trick is to just add vague things to our backlog and then move on. We can then say that an investigation or request is “on the list of things to look at”, knowing that we will not in fact ever have time to properly understand the issue. Even if it bubbles up again and comes to the top of the list, we will not understand the context that is needed to actually investigate it properly.

When I put it like that it seems like a sub-optimal approach, but I see teams doing it from time to time.

A better approach is to be honest and tell people that you are not going to prioritise the analysis of this mystery. Instead you will focus on your team goals.

Single point curiosity

A slightly different approach is to have a volunteer take on the role of investigator. This volunteer can be the scrum master, product owner, triage officer, service manager or whatever they are called.

This single person can then choose the amount of time they will spend helping unravel mysteries, versus the time that they will manage the “backlog” of things that the team is already committing to work on.

However, some mysteries cannot be solved without someone technical getting involved. That technical person needs to look at log files, look at code, consult the runes or do something else that helps unravel the mystery.

Perhaps this work is called a spike then? That is what I used to call it – where spike meant “any timeboxed detective work done by the team.” We added this to our wall of work but did not put points on it, instead just committing a limited amount of time for specific people to experiment. We did not wait until a future sprint, so it usually caused our velocity to drop a bit and we had to mention the spike in our stand-ups, which we were happy with.

But the term spike seems to mean something specific to a lot of people nowadays – they define it as “technical work needed to remove ambiguity from a story, create estimates or create a throw-away test of a possible solution before investing too much time.”

That means that we need to first come up with a story, prioritise it and then commission a spike, by which time the trail may have gone cold and we may not be able to solve the mystery.

Maybe call it an experiment then? No again: the client is not a scientist and there is no hypothesis to test or disprove yet. We do not yet know our hypothesis.

So maybe just call it an investigation.

I am happy to commit time to investigations during a sprint and drop my velocity, but I still think there is something missing. There should be both a resolution (or failure to resolve) and knowledge sharing with other team members. Doing this increases knowledge and future mystery-solving power – but again distracts from the velocity and sprint goal focus.

A team of investigators

I remember a scrum master who reported to me put in place a “shield team” to protect the rest of the team from distractions caused by the support team, the business crew and from me. Apparently my curiosity and requests for “a few minutes” risked wasting quite a bit of time.

The idea was that two people would volunteer (or be volunteered) to be the shield for the sprint. They would monitor things, cop requests and help the PO with investigations. They did not do so full time, but they had lower priority stories to work on than others and so they could drop them to jump into investigations.

That approach worked really well, for that team.

It did require maturity for the shield team to remember what to do:

  • Ask about the problem, knowing people did not fill out any template properly;
  • Be curious or escalate panic if it is a symptom of a crisis;
  • Commit specific time to resolve and a specific question to answer;
  • Fix something or create a workaround and proper fix plan, especially if it is a “problem” and not an “incident/one-off query”; and
  • Capture the learning to share with others.

I like the idea of having a mystery solving team. Dropping our throughput of features to have members of the team take the time to stop, smell the roses, unravel mysteries and solve problems that others did not realise were problems. This will impact the speed of delivering new features and fixing bugs though.

What approach do you think your team should take here?

Good training stands out

I recently completed three courses on Coursera, each achieving the goal I set out with.

The first course was a Six Sigma course that I used to refresh my knowledge of something that I am familiar with. It contained what I wanted to learn but was typical of old-fashioned e-learning (and face-to-face learning). It was a series of lectures, made up of a series of slides, which contained useful information. Then there were extended reading and chat options to go further. I was able to absorb what I wanted but would not say it was awesome. This is how some corporate training is probably still structured – it does the job and you attend it as part of the job.

The other two were actually awesome for different reasons. They were both fit for purpose and achieved their goals with flair.

The second was the Amazon Cloud Practitioner Essentials course. In theory, this one had way too much information to absorb because it covered the entire program of information that is needed to understand Amazon Cloud as a user/customer. However, it tackled this challenge by doing a great job of introducing the essentials (as promised) with links to the detail. In addition:

  • The 3 presenter/trainers were extremely engaging. They were passionate, even about boring technical topics, they came across as humble and friendly and they delivered the training with the professionalism of paid actors. As an experienced trainer I was really impressed with their ability to engage with and communicate the material.
  • The slick nature of the videos achieved another goal – giving the impression that AWS is clearly the way to go, without mentioning how other cloud service providers might handle the generic content.
  • Statements were backed up by evidence and further reading but they were also delivered with meaningful analogies and examples to make them easy to understand.

This course is ideal if you want to absorb a lot of technical and “factual” information, such as how to prepare for the related certification in cloud practitioner-ness.

The third course was the first of 4 courses in being effective, personally and at work. It was called Success. This one covered a potentially abstract and very personal topic. It did so by taking a very different path to the other two courses:

  • The lecturer was on his own – no swapping presenters each video and he was not as smooth as the Amazon crew. However, he presented the information with real credibility. I believe that he did so because:
    • He was really clear on what was his (carefully considered and expertly driven) opinion and personal experience.
    • He linked to credible sources of research and evidence, while keeping it light and engaging, so it seemed like a conversation rather than a guru telling me what the correct answer was.
    • He framed the lessons with guidance but did not give the answers, rather he asked the questions that caused me to really consider what I thought about the topic.
  • Each lecture provided a frame to consider a topic but did not give the answer. This suited the topic of “what do you think success is?” and “how would you achieve your goal?” really well
    • Each lecture had self assessment linked to a structured survey or set of questions.
    • Each self assessment was followed up with guidance on how to interpret and apply the results and how others might have approached it
    • There were opportunities to post thoughts and review those of others. This was a little limited because of the asynchronous nature of the course and the fact that it was both open to all and it was an old-ish course, so the answers were sometimes not well considered.
    • However the assignments involved sharing your personal thoughts and then giving others feedback on theirs. This was interesting because it meant that I was both encouraged to really consider my perspective and then surprised by the difference in the perspectives of others.
  • The material was presented simply but had room for really complex thinking if you took the time to do it. This meant that the course could be taken by someone young and inexperienced or old and very experienced. I think you could actually do this course every few years and reflect on your growth.

Both of the great courses shared some concepts in that they were engaging and simple to absorb, so my cognitive energy was focused on the material and not on trying to work out what was going on. Both courses also created a sense of curiosity, both during the course and for further learning afterwards.

Both also had very different strengths that suited their purpose. I am not sure if the strengths would have translated as well across courses.

I learned a lot from these courses and enjoyed the journey. They also provided a great reminder that I should keep working on my own craft, to lift my game at both engaging people in learning and in honing my approach to suit a clear goal that can be achieved by those I help, in a way that creates learning, satisfaction and increases curiousity to continue learning.

The practitioners are tough acts to follow, but inspiring artisans rather than intimidating experts. Of course, the courses were also great learning in their own right.

The variable nature of comparisons

I have done many personality profiles over the years. Sometimes they seem to contradict each other and sometimes there are consistencies. One thing that is consistent, though, is that I see myself as more co-operative than competitive.

However, I was doing a course recently and a point of discussion was when we are all more competitive or collaborative. There were many great points around the impact of things like scarcity, social habits of humans and changing environments, all leading to shifts in behaviour.

A conclusion I could come to is that sometimes competition can lead to better results and sometimes co-operation will come out on top. I guess that is not a surprise, but it is also something I have not been paying attention to at work. I wonder if I should take note of the shifting sands of our work and the shift in behaviour.

Even more interesting though, was the impact of comparisons. Apparently comparing yourself to others can lead to a greater drive to compete, not just against them but against yourself.

If you see someone similar to you (same team, same start date at work, same school background) and they do well then you can feel a drive to push yourself to be more successful. On the downside you can also feel a drive to feel bad or to not want to cooperate to make them more successful unless you keep pace. I wonder now what tips the balance and I think it might be something to do with shared goals and interdependence for success rather than old school rivalry.

So we are likely to work together when we see a joint victory. But then there is another point around “justice.” If we see someone who started at the company a year after us get a promotion, or we see someone junior get a pay-rise to earn nearly as much as us, this can create resentment even though they did nothing wrong and we are no worse off than yesterday. This means that comparing ourselves to others can cause us to be miserable rather than happy.

On the other hand, we can also become dissatisfied and then stop being complacent, change jobs, drive ourselves to overcome barriers and become more successful. So maybe being annoyed at something will cause us to change our behaviours in a positive way.

It all gets a bit confusing. I always tell people to run their own race and ignore the race others are running, but maybe that is my innate love of cooperating and relative blind spot for the benefits of competition. Maybe there are times we should compare ourselves to others, if it leads to motivation. A sense of rivalry drives some sports people to perform better when competing with rivals (even friends) and maybe the same happens in some sales teams or some classes where people push themselves to learn. But does it ever happen in a product or development team? I think people expect you to carry your weight, but I am not sure if developers, designers, testers and product people are as motivated by competition to do better work.

I wonder to what extent it is my own world view that suggests product teams are more co-operative than competitive and to what extent it is the nature of the work and the team that creates that view.

But it is not just a comparison to others that changes behaviour. We can also compare ourselves to ourselves.

We are who we are but maybe not who we expected to be – Did we perform the way we hoped to? Did I behave the way I expect myself to in that confrontation?

We are also different to who we were a year ago – does the team get more work done than it used to? Do we work more cohesively now, since we made some changes?

I think there are times when we can gain some real focus from comparisons to our expectations or our previous performance. However, once again, we can also experience a down side. Comparisons can lead us to be complacent (happy but achieving less) if we happen to have done better than expected or miserable and maybe even checking out if the opportunity was not what we expected.

I do not normally make many comparisons during my work week, preferring to be in the moment or focused on the next step. I also like to remain curious and open to possibility rather than trying to compare where I am to where I thought I should be. But for the next week I will pay attention to when comparisons might be being made and when/if they are helpful or distracting.

To what extent does making comparisons (you to others, the team to a benchmark, you to your expectations of the best version of you) help you do better and be happier? To what extent do these comparisons distract you or create unnecessary stress?

The curse of knowledge

Early in my coaching career I sometimes felt like Cassandra of Troy, who would see impending disaster and tell people, only to be ignored and then see the disaster unfold. I would say things like “if you leave testing to the end you will miss your deadline,” or “if you try to estimate your work, you will improve even if nobody else sees the estimate.” Then teams would be too busy and stressed to gain from my, rather obvious, insights, even when they said they agreed with me at the time I gave them.

Early in my management career, I had a seemingly different experience, that I now believe had the same root cause. I managed some high performers, I delegated challenging work to them and I trusted them to get on with the job.

They were pretty awesome and our team thrived almost by default, so I was shocked to discover one day when talking to a couple of them that they did not realise they were high performers and that they sometimes felt I did not really care if they succeeded.

How could they miss my obvious confidence in them when I spoke highly of them, trusted them with key initiatives and called on their opinion?

I now think both of these situations were examples of me suffering from the curse of knowledge.

This is “a cognitive bias” where:

  • I assume that others have the same background knowledge that I have. For example, that estimating is best done as a statistical exercise and not an analysis based on causal reasoning; or
  • I assume because I see something, that others see the same thing the same way. For example that a leader would only delegate high profile, challenging assignments to high performers; or
  • I forget what it is like to not know something or to struggle with learning something. For example the stress of trying to test, reconcile and build when there are tight deadlines while learning to really understand critical points of failure, which is so much harder in practice than simply “testing as you go.”

Perhaps others would think that it is my arrogance (assuming people should know things) or my lack of empathy (understanding what it is like to be new to an idea).

However I don’t really think that I am arrogant or lacking in empathy and if you know different then, as obvious as it seems to you, I do not realise it. I think I just get caught up in my own assumptions and move too quickly to see the mounting evidence that others are not along for the ride with me.

The way to remove this curse, I have learned time and again, is easy but hard at the same time. It is easy because there are simple steps to remove it and it is hard because I need to reflect, focus and remember to apply the steps.

The first is to actually listen to understand what people are saying. This seems obvious but sometimes I still listen to think about how to respond, opening the door to my assumption that others are on the same page as me. Since I had the right response I can assume (incorrectly) that we had the right conversation.

This works better if I add a second step – asking clarifying questions (checking for understanding and basic facts) before getting into deeply probing questions or moving on.

The third step is to predict what will happen. Maybe look at multiple things that should happen if my assumptions are correct and if people are acting on them. I can ask “What might happen if this is true?” and “What should not happen?” Then when things happen instead of asking “how could they have missed that?” or “does that prove my hypothesis?”, I can ask “What would drive that?” and “What else could explain that?”

In short, the cure is to remain curious rather than creating and testing a hypothesis. This is something that took me a while to conclude, because testing hypotheses is such a good approach to so many things.

I am not sure if you agree with my final statement, because you might have different “background knowledge and experience” to me. Certainly my experience of testing hypotheses is that, done well, it is a low-cost way of learning, but sometimes I find that learning is lost on others, not because they were not there, but because they are processing the information differently to me, or they know something I do not that is impacting their judgement.

You will struggle to convince me that the curse of knowledge does not exist or that it lacks a strong impact on coaching, but as for the cure to the curse, you might be able to convince me that there are other approaches.

Am I right? Is natural curiosity the way to beat the curse? Or is there a different answer?

However if I am testing my hypothesis there is a strong risk that, regardless of how clear I think it is, others are not so clear. The curse of knowledge is assuming or believing that they should be clear, while the “removal of the curse” is to observe and learn without pre-judgement.

My take on Mindset Tax in coaching

I recently wrote about “coaching tax” and suggested that we should focus on making sure we optimise our “time on task” when coaching, but I got the idea for a coaching tax from the concept of the “Mindset Tax.”

In this article I want to look at the difference between a mindset tax (the time spent not being able to grow) and a thinking trap (being trapped in your own unhelpful story or thinking pattern). Both are relevant to coaching and it helps to be aware of them.

Defining the term “mindset tax”

Mindset tax is the wasted time and frustration that is spent not learning from feedback.

When coaching, we want people to gain new insights or commit to new actions. We want them to ask themselves great questions, see through their biases and accept feedback in a way that allows them to grow. This is not always what happens at the start of a coaching conversation though.

Sometimes people ask for feedback but really they want validation or they want to complain about something. A coach can validate people and also listen to people unleashing their frustrations in a safe environment. This is not really leading to growth, so it is not really the goal of coaching. However it may be a necessary step to allow someone to be ready to deal with something in a way that leads to growth. Hence we could say it is a “tax” or loss of thinking power on the way to growing.

In short, mindset tax is the resistance that someone has to learning from experience, feedback or coaching. It is the mental effort spent denying or stressing about feedback rather than learning from it.

In more detail

Coaches sometimes talk about having a growth mindset, or an agile mindset or an open mind. When we say things like this we mean that people should be open to the opportunity to grow and that they see feedback as a source of learning rather than a judgement about their ability or character.

Importantly though, nobody has a perfect growth mindset at all times around all issues. We all have a growth mindset in some areas, where we are open to feedback, love to be challenged and are willing to spend time and effort to improve. We also have other areas where we are not ready to receive feedback (having a closed mind or defensive attitude) or where we either believe that we are naturally good at something or naturally bad.

I know that I love doing jigsaw puzzles even though they frustrate me. Every breakthrough brings me closer to my goal (which is to stop needing to put the puzzle together). I see the struggle as a fun challenge. Sometimes I love help and sometimes I would prefer to sit by myself because I want to work it out on my own.

This can be a mixed blessing if I am at work and you are relying on me solving a “puzzle” though. Perhaps you have insights to share when I want to be left on my own to solve the puzzle. This might involve you getting my permission before giving me your help. In coaching we call this “contracting” and it is an important first step in any coaching conversation.

If we are not aligned on whether (and how) I want your help, or on the help you plan to give me, then we are off to a bad start.

Once we agree that you will help me to solve my jigsaw puzzle, you can share observations and listen to my angst as I try to work things out. Our coaching conversation has begun.

But this is where mindset tax comes in. You might ask questions and I might answer them, but I am saying what I think you want to hear (you suggest I start with the corners of the puzzle and I say you are really insightful, when what I really think is that I cannot find the corners and your comments are not helping). This wasted conversation is a “tax” on the coaching effort, since it is wasting time that could be spent on helping me to solve my puzzle.

Specifically mindset tax is the effort we must spend to overcome my fixed mindset – the belief that I cannot change. Until we tackle my lack of belief that change is possible, it is unlikely that I will change. You will ask good open questions and I will give the answers that need to be given, while not learning.

Some examples of mindset tax

I have stolen the term “mindset tax” from people who coach teachers. It is generally used to refer to the “four horsemen of the fixed mindset” (apocalypse) and you can get a good description of it here.

I think of it more broadly though so I have added a fifth horseman. Let me run through what I see as the chief distractors of coaching, or the common forms of mindset tax.

Should not want

One of the most common impediments to growth that I encounter when coaching is people thinking that I am there to help them with what they should be doing or should be better at.

People want to know “What is the Correct version of Scrum to apply here” or they don’t know how to influence executives to turn up for their meetings and want me to tell them how to do it.

There is a time for me to instruct people in how to do something, but that is not coaching. That is instruction, or process improvement or process adoption … or mindset tax.

It is mindset tax in two cases:

  1. I think I know better than you do how you should do your job. I start lecturing you or “coaching” you to be more like me.
  2. You are trying to do the right thing, as defined by others, rather than deciding:
    • What you personally think the challenge or opportunity is for you to tackle;
    • What you would personally like to see happen; or
    • What you can learn from this.

In the first case, where I allow my ego to act as a tax on our coaching, the solution is for me to move to a coaching stance (listening to understand, reflective listening, feedback based on observation etc). If I want to be a good coach then I need to develop good “tax minimisation” routines. That single concept is probably worthy of a whole book on coaching.

On the other hand, if I really do think that I know better than you and want to tell you what to do, then maybe I should not call it coaching, as such. It might be “performance coaching” in the way some HR people in Australia refer to the process of me setting clear expectations of your role and then you either agreeing to meet them or being fired from your job. It might be management or instruction, where you are willing to learn by doing what I tell you to do and then seeing if it works, or it might just be me bossing you around.

Where you feel the need to meet my standards (or the expectations of your mother, or the expectation of being a Steve Jobs), then the answer is probably for me to use reflective listening and to help you see your own story clearly so that you can decide for yourself what to do with these expectations.

The four original horsemen

So, let’s get back to my jigsaw, or better yet to an area where I still fight having a fixed mindset. Let’s say that I want to get better at cooking (which I do) but that I truly suck as a cook (which I do). You offer to help me learn to cook and I am grateful for the help, though to tell you the truth I doubt I can actually become a great cook (which is not true but is a bias I have).

When you ask me about my cooking I use jokes to distract from the conversation because I have a fixed mindset. You help me to see this and we agree that I will cook while you observe and give feedback, both as I cook and as you eat the result of my cooking.

I start to cook a meat pasty, following the recipe. I look to you for advice, but you say “I am coaching, not telling” and I persist with a little nervousness.

I successfully cook some meat and vegetables in some pastry and it looks roughly like a pasty should look. Unfortunately it tastes a bit bland.

You could say “that is awesome and yummy – keep doing that” but that will not help me. Instead you say that you find it a bit bland and that maybe some more sauce would help or maybe some salt or something.

You are right – I suck

The first horseman of the fixed mindset is “you are right, I suck.”

In this case I would hear your feedback (this is bland) but would hear it as confirmation that I am a bad cook. I might reply that I really screwed up and that I always cook bland things.

This is an invitation for you to feed my doubt and spend the session trying to enable my whining. Or it is a chance for you to expose my mindset tax so that we can minimise it and get back to my goal of improving.

By listening and reflecting on what I say, you might help me a whole lot – paying off the tax and allowing us to focus on growth. This will happen even faster if we both understand the concept of mindset tax and can spot it quickly – then you can just call it out.

You are wrong – I rock

The opposite of deciding I suck at cooking because this pasty was a bit bland is to dismiss your feedback as wrong or irrelevant so that I can feel good about my cooking. Again – if I have a fixed mindset, your feedback is about whether I am a good or bad cook, not about whether my cooking of pasties can be improved so I can become a better cook.

I might say that I like bland pasties and that your taste is just different to mine. I might deflect to talk about how I got the pastry cooked well and that the shape was kind of pasty-like. These points might be things that I can build on, but I am raising them as a way to avoid confronting the blandness of the pasty.

Again you can listen reflectively to me or you can restate your feedback in different ways. This is, however, taking time away from a potential discussion of how to predict and reduce blandness, or how to add some more flavour. If we both know to watch out for this mindset tax then we can call it out, put it aside and consider the feedback you are giving on its own merits.

Blame it on the rain

At work, I often find myself being asked to coach “them.” By that I mean that the person I am coaching wants someone else to change or someone else to “just get it.” This might be fair enough, but it is a distraction from the discussion about how the person I am coaching might deal with the other people or behave in a way that changes the situation. I often remind people of this by saying, “They might (blah), but I cannot coach them if they are not in the room; I can only coach you. What is the challenge here FOR YOU?”

More generally though, this is a case of “blame it on the rain.” This, our next horseman, is where I blame an issue on one-off circumstances or factors that are not in my control.

There is an element of truth in what I say – for example, if it was raining then that might have impacted what I did.

In the case of my bland pasty, it might be true that the brand of “Worcestershire sauce” I used was bland. It might also be true that the oven was not a good one or that the vegetables I was forced to use were a bit old. All of these might be true, and I might blame them for the blandness of the pasty. However, if I just say this to deflect from thinking about what I will do next time, then I will not try something new and will not change.

When coaching, it is common to have people blame the rain (the workshop room, the group being tired, the sauce being bland) rather than question their own actions and contribution to the outcome. Similar to “I rock,” they are discussing everything except the thing they could change.

I guess you know the solution here then – reflective listening and calling it out as an observation.

Optimist without a cause

The last horseman is one that sounds harmless, and one that I am often a victim of, but it inhibits change. It relates to saying what I think I should say rather than thinking deeply about my cooking (or coaching).

The optimist without a cause is the person who accepts that feedback or states their own insight and then says they will change, but has no plan to do so.

I might say that I agree that the pasty is bland and thank you for the feedback.

“Next time,” I say, “I will add more flavour.”

Unfortunately though, I won’t. I have no plan for how to improve and I have not really decided to do anything. We can end our cooking session on a high note, but with no commitment to do anything concrete.

This is often where the coach needs to end the session, ensuring that the cook (coachee) has really absorbed the new insight or committed to a new action or experiment. When you call out my optimism, you can then ask, “So, what specifically could you do differently?”

Conclusion

Coaching is good and it is really rewarding. However, the people who most need coaching are generally the people who also have the best defences to avoid change.

I like to discuss the concept of mindset tax with people (and teams) that I coach, so I can call it out when I see it and we can minimise it. Then, when coaching, I practice spotting the taxes as they appear.

If we can reduce these five things then growth will be faster (and I might move from bland pasties to decent pasties, and even pies, when I cook):

  1. “Should do” thinking versus “what I really want” thinking
  2. You are right – I suck
  3. You are wrong – I rock
  4. Blame it on the rain
  5. Optimist without a cause