Thursday, November 16, 2017

Live from Potsdam, Day 3 - Agile Testing Days 2017

Morning came at the usual time in Potsdam on November 16, however many of the conference attendees missed its arrival as they had been having much fun following the Agile Games and ATD Cabaret. Much singing and music and jokes and fun was had, and SOME (ahem) were feeling the effects a bit more than others....

Still, that is one of the aspects of this conference, and many conferences. People getting to know each other, swapping stories, asking questions, sharing ideas and sometimes sharing advice.

One aspect of this, beyond the evening events and other random conversations that grow organically at events like this, is Lean Coffee (introduced at ATD by Lisa Crispin & Janet Gregory). I mentioned this in the Day 1 blog and today, after inhaling a quick breakfast (someone was up later than intended... ahem) I wandered in to see how Lean Coffee was going, and was pulled into moderating a table. OK! Dive in head first!

What is Lean Coffee? It is a time-boxed method of discussing ideas, problems or open questions, where people bring topics, vote on them, then discuss each one until a resolution is reached OR the energy for the topic has been expended. The idea is to share ideas, investigate angles the original person had not considered and leave with a takeaway.

**

I've moved into the main hall for the morning events. Jose is going over "lost & found" items (including someone's tablet - THAT will be missed!) Ash Coleman is talking briefly about the events later today around inclusion and diversity. AND we're getting ready for this morning's keynote, The Pothole of Automating Too Much with Paul Holland.

Paul is kicking off with a message that (for the opening portion at least) challenges the conventional wisdom held by many: automated tests are not likely to give you all the benefits that many people believe they will.


Automated scripts will only look for the things they are programmed to look for. It takes a vigilant human to find other things. He uses the roads of the United States as a metaphor for software: there are so many roads (paths) that we can't really "test everything."


Right.

Problems - 
There are more kinds of problems than automation can be programmed to recognize...
  -  Vigilant testers can observe and evaluate a very large variety of outputs and can also vary the inputs, letting us find more problems than we can predict.

There are more checks to automate than can possibly be written;

Some things are too difficult to automate effectively;
  -  Complex pass/fail algorithms;
  - Perhaps it can be done more quickly by a human;

Investigating reported failures takes a long time;

Automation is expensive to build and maintain;
  - High cost to value ratio;
  - Sunk cost problem (so much spent so far, abandoning it is hard)

What about -
A strategic mixed approach:
  - Automate critical paths;
  - Automate paths with the highest use;
  - Do NOT write automation for all failures found in the field;
  - Consider the cost of automation vs benefit -
     Difficulty to create;
     Difficulty to maintain (frequency of changes)
     Difficulty to analyze failures;
  - Augment automation by performing human testing
  - Automation is excellent at showing that the code CAN work;
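Paul's cost-vs-benefit point can be sketched in code. This is a minimal illustration of the idea, not anything from the talk: the field names, weights and threshold are all my own assumptions. Each candidate check gets a score from its usage and criticality (benefit) against its creation, maintenance and failure-analysis costs, and only high-scoring checks make the automation cut.

```python
# Sketch of a cost-vs-benefit filter for automation candidates.
# Weights, fields and threshold are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    usage: int            # 1 (rare path) .. 5 (highest-use path)
    critical: bool        # is this a critical path?
    create_cost: int      # 1 (easy) .. 5 (hard to create)
    maintain_cost: int    # 1 (stable) .. 5 (changes frequently)
    analyze_cost: int     # 1 (clear failures) .. 5 (hard to analyze)

def automation_score(c: Candidate) -> float:
    """Higher score = better automation candidate."""
    benefit = c.usage + (5 if c.critical else 0)
    cost = c.create_cost + c.maintain_cost + c.analyze_cost
    return benefit / cost

def worth_automating(candidates, threshold=1.0):
    """Names of candidates worth automating, best first."""
    ranked = sorted(candidates, key=automation_score, reverse=True)
    return [c.name for c in ranked if automation_score(c) >= threshold]

checks = [
    Candidate("login happy path", usage=5, critical=True,
              create_cost=2, maintain_cost=1, analyze_cost=1),
    Candidate("rare report layout", usage=1, critical=False,
              create_cost=4, maintain_cost=5, analyze_cost=4),
]
print(worth_automating(checks))  # → ['login happy path']
```

Everything below the threshold stays with the vigilant humans, which is exactly the "augment automation by performing human testing" point above.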

**

Grabbed some lozenges and a coffee and headed into Toyer Mamoojee's session on "Coexistence of Legacy and Cutting Edge Technology Systems". AND - the conference staff bring in a birthday cake for him! Yeah, these folks are awesome.


I met Toyer Monday evening, after having been told I needed to find him and have a chat - and we did! Bright tester from Cape Town, South Africa. I am really looking forward to this.


Here we go -

For this discussion Toyer is defining legacy systems as:
Business Critical;

Lack of support;
Skills shortage;

... and a couple others I was too slow to type...

so why keep them?
Customized over time to the Org;
Change could negatively impact business

Their environment looks a bit like this:


Problems?
Oh yeah...

Deployments are manual (very manual);
Architecture - Too much logic built into legacy systems to easily update/decouple;
Processes - Slow running processes, including tests;
Delivery Process - Some teams less Agile than others...
Skills - Different systems require different skills.


Solutions -

Their approach (over time and not this neatly - Pete comment - they never are...)

Deployments - Automate, automate & Automate;
Architecturally - Built APIs, micro-services, rewrote complex batch processes;
Processes - Faster Running processes, quicker test execution;
Delivery Process - Functionality freeze, E2E meetings, SOS (Scrum of Scrums)

Skills - Internal vs External recruitment (right skills for the right system)

Testing -

Automated Testing -
  Automate what you can with commercial tools for legacy;
  Open Source for Cutting Edge/newer tech
  Push to get CI/CD;
  Automate at different levels!

E2E Cross System -
  Centralized E2E testing tool;
  Daily SOS meeting
  E2E Meeting before stories get developed (measure the impact on teams)
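The "automate at different levels" point deserves a tiny illustration. This is my own made-up example, not Toyer's code: fast unit-level checks exercise the core business rule in isolation, while a broader check goes through the composed entry point that E2E tooling would hit.

```python
# Illustrative sketch only: "automate at different levels" means
# cheap unit checks on core logic plus broader checks on the
# composed behaviour. Function names are invented for this example.

def vat_inclusive(price: float, rate: float = 0.19) -> float:
    """Core business rule: add VAT (unit-test level)."""
    return round(price * (1 + rate), 2)

def invoice_total(lines):
    """Composed entry point that higher-level tests exercise."""
    return round(sum(vat_inclusive(p) for p in lines), 2)

# Unit level: the rule in isolation, fast and precise on failure.
assert vat_inclusive(100.0) == 119.0

# Higher level: the composed behaviour, closer to what users see.
assert invoice_total([100.0, 50.0]) == 178.5
print("all levels green")
```

Failures at the unit level point straight at the broken rule; failures at the higher level catch wiring problems the unit tests can't see. That is the payoff of automating at more than one level.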

So, what was the result?
Originally, they had Excel for test case tracking  & QTP.
and now...



And the result?
Cross skilled workforce;
Everyone engaged, relevant & up-to-date;
Get the right people on the right system;
Maintain/expand business competitive edge with less risk;
Faster delivery time - monthly releases instead of quarterly

**

Some good chats, dealt with some minor work items AND slid into "An Alien Visiting MY Office" with Viktorija Manevska. (a little late... sorry)

Starts a new job - AWESOME! And she hears they are going to do Scrum - AWESOME! And she learns how they are doing it - AWESo... uhm

So, she went and took on the task of reading and learning about Scrum. She found the typical "THIS WILL SAVE THE WORLD" articles and then some that were less than enthusiastic.
So, she looks at what is around there. And how people look at things -
So, since she knew that things weren't quite right, she decided to get a certification (cultural thing - certifications lend greater authority - your mileage may vary).
Except, things still weren't quite right. Stuff wasn't happening the way it seemed it was supposed to.

Except - Scrum is not a problem-fixing framework, it is a problem-FINDING framework.

Key question...
Is the value of the product developed by Scrum?  -  Hmmmmmmm

Except Scrum is a bit like the blind men and the elephant...You see scrum based on how it is exposed to you. Hmmmmm.

So, if we know what the customer wants and we have some ideas on how to make this happen, does the development method/model really matter?

So, if we focus on the end result, what can happen? Working together, not worrying about the ritual forms, what if we share ideas and collaborate and not worry about what to call it? Instead of doing a daily stand-up, particularly when people are all working in the same room, what happens if we just do stuff?

THEN she lands a new position (same company) and finds out that.... not so happy. The rainbows & unicorns thing was more a tangled mess. She began looking at items, making mind maps and researching the intent of such things and.... "that is not your job - you are a tester."

Quotes Paul Feyerabend - "The only principle that does not inhibit progress is: Anything goes."

So, she let them go and do their thing... Allowing people to fail, then supporting them when the problems arise, can help them begin to change their minds from fixed positions to more flexible ones. By working with them to find shared spaces, they began coming to her to find potential problems before deployment, then pairing... everywhere.

By looking at things from an external viewpoint, like an alien, she was able to see problems, then share the means to help people see things the same way.


**

LUNCH!

**
Slight change in plans. After an amazing chat with Cassandra Leung over lunch, have I mentioned how remarkably bright she is? Right - I must rest a bit... and deal with stuff from work - yeah, day-job stuff. I hope to spend some time in the open space shortly.

See you all in a bit!

**
And Congratulations to Jose Diaz - the master behind the conference - for officially becoming a German Citizen today!
**

Sometimes taking a nap is an excellent thing. Other times, having a nice relaxing cuppa (or 3 or 4) and conversation with interesting people Can be just as good, if not better. Thanks to Rob Lambert, Alex Schladebeck and Chris George for giving me much to think about.

Open Space is an excellent way to explore ideas and have a chat with people about items you may not have considered.
**
Sliding into the LAST track talk of the day (refreshed and ready to go!)
Our Journey to Reduce Manual Regression Tests - presented by Thomas Fend and Trinidad Schmidle.

They work at Bachmann Electronics, a company with a broad product portfolio, long lived products, a development team of around 40 developers and 12 testers. The majority of their software products are released at the same time, typically once a year.

There is a fixed-issue verification process that tends to go straight to the tester. There were massive numbers of bugs being reported; however, when the development team reported a bug as "fixed" it went straight to the test team, not to the person who reported or raised the bug.

By changing that loop, so the person who raised the bug had first crack at the fix, or at the least had the change explained to them, they were able to streamline the process and reduce the amount of rework. THIS led to more time to work on new features.

Now, with a common acceptance criteria definition, the team & product manager are in agreement as to what needs to be done.

Then there was the "feature veto" issue - features are completed very late with little time for thorough testing. When everything else seems to be better, the tester might raise bugs, get very unpopular and get told to leave them alone.

By getting testers involved early in the process, this has been reduced.

Then there was automation. Yeah. Some tests are executed each sprint. The BIG ones are run after feature freeze - once a year. Not all the tests are fully automated - there would be manual configuration before starting some tests. Because, well.... yeah.

Each sprint, testers present test results to the test department. As an incentive, they tracked the number of tests run each product cycle through automation. The result: around 65% of the tests are now automated.

This helped improve stability, helped find bugs when there was time to fix them, and helped find and fix "blocker" bugs.

There was limited time to make regression test really happen. So, they pulled in developers to help. This was PL, but...

By working together, working on some basic actions, the time to regression test dropped dramatically. They also reduced distractions during "test weeks": no meetings, etc.

By introducing a mix of static and dynamic code analysis to check the changes that had been made, and by limiting interruptions (restricting meetings, encouraging test pairing, and so on), they were able to reduce the effort required for regression testing and do it far more efficiently.
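To make the static-analysis idea concrete, here is static analysis in miniature - an illustrative sketch of my own, not Bachmann's actual tooling. It parses a changed file's syntax tree without executing it and flags bare `except:` clauses, a classic pattern that can hide regressions.

```python
# Illustrative sketch: a tiny static check using Python's ast module.
# Real static analysis tools do far more, but the principle is the
# same - inspect the code's structure without running it.
import ast

def bare_except_lines(source: str):
    """Return line numbers of bare `except:` handlers in `source`."""
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

changed_code = """\
try:
    deploy()
except:
    pass
"""
print(bare_except_lines(changed_code))  # → [3]
```

A check like this runs in milliseconds on every changed file, which is why static analysis pairs well with limiting the slower, human-heavy regression work to what actually needs it.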

Conclusion - Through communication, teamwork, detailed planning and focusing on things needed RIGHT NOW, they were able to make this improvement where they can run the full suite in under 2 weeks!

***

And NOW the lights come on in the room. It is REALLY hard to see to type with no lights (ahem)...

Right - something else is coming up, - AH yes - the closing keynote of the day with Liz Keogh.

**
Jose is back up with a few announcements and introducing Liz...

Liz Keogh will be speaking on "How to Tell People They Failed and Make Them Feel Great."

And asserts that Failure is Essential. Whilst on a project, a couple of developers had tried to install a tool to help do testing. It took a couple of days and they gave up. And got yelled at. And a new rule was implemented that NO one did anything without getting permission from the BAs. Except - they made that rule for EVERYTHING. All creativity ceased. And it became a far less fun place to work.

Enter Cynefin - (the new version)



Obvious Domain - sense-categorise-respond ; Best Practice(s);
Complicated Domain - sense-analyse-respond ; Good Practices;
Chaotic Domain - act-sense-respond ; No effective constraint ; Novel Practices;
Complex Domain - probe-sense-respond ; Enabling constraints ; Emergent Practices.

Dan North - deliberate discoveries
Unknown unknowns - 2nd order unknowns

You failed - but we expected to because... and that's OK...

We want to help people improve. Before we go nuts with this, consider: what do you like about the person? What do you respect them for? What is it that anchors the relationship? What is it that you value them for?

There's the sandwich thing - where you put "the problem" between 2 good things. That's ok - sometimes...

But why? If they screwed up, they probably know it. Let them know that you know, and that they have succeeded in the past and will be awesome again.

Radical Candor/Care - come out and tell people something is wrong, but come from a place of care. Let them know the person is valuable and valued, and they can make some small corrective action and be awesome. Because SOMETIMES not doing that will lead to much bigger failures.

Failure only "hurts" because of the impact. Safety lines can help - and can help people explore safely. Always work toward buying time - making time - by avoiding the trap of "running out of time" - allow yourself options, just in case.

Remember that "root cause" is misleading. There are usually (often) multiple contributing issues. Sometimes they may not be obvious. What can these things be?

Try scattering around ideas. They MIGHT just land and be able to be acted on immediately. OR, they may lay dormant for a long time, until the context changes. THEN it can take root and grow.

Etsy - home of the blameless review. This is an interesting idea - let's see - oftentimes this gets translated as "the testers screwed up." One thing in the "blameless reviews" - or "learning" reviews: instead of "who did what..." try "what happened" and "when were decisions made."

Take the blame out and look at the steps and decisions made. That might show opportunities to isolate the issue and act as a fail-safe. You will always miss SOMETHING - so mitigate the risk.

Most organizations struggle with Agile stuff, because stuff is hard. Changing directions can be a challenge. Supporting the ability to change direction is a challenge.

-- The best way to tell someone they failed is to not even mention the failure.
Come from a place of care. Anchor what is valuable. Show what is possible - the bright future ahead. Work on building safety nets.

All of this is part of solid, positive growth... unless England is playing in the football match

**

There are a couple of other items coming up this evening, but first, supper!

**









