Deciphering Alerts: Make Sense Of What Your Software Is Saying With Lena Nyström
In This Webinar
You can watch the recording of Part 3 here.
The two other parts of the series are available on PractiTest’s YouTube channel.
If you have used software, you have likely come across messages that made you pause and think, “Did a human write this?” Information, errors, warnings and other alerts can be incredibly unhelpful and sometimes even misleading. But why is it so hard for software to tell us what happened and what we need to correct? The truth is that there are many perspectives to balance, and they often clash.
Messages tell a story: a story of choices, of compromise and of the world they live in. They leave a trail of breadcrumbs, and following the clues might even show us the way to problems hiding underneath. From the often crisp and immediate messages of frontend validation to notifications translated through multiple layers on their way from a database or an integrated service, there are plenty of hints to pick up on.
In this miniseries, we will look into how different parts of the tech stack handle validations, errors and other information presented to the user, how to guess which part of the stack a message comes from, and how we can use that knowledge to do better testing. We will examine examples, weigh their strengths and weaknesses, and explore why it is so hard to design the perfect message. The goal is to level up our testing by better understanding what the system is actually trying to tell us.
Part one introduces the topic, presents different perspectives and how they might collide, and gives a general overview of why this is so hard.
In part two, we look more closely at different parts of the stack, their distinct approaches and characteristics, and try to answer why some messages are so much more confusing than others.
Finally, in part three, we connect the dots and look into how we can use what we learned in the first two sessions to find clues and uncover risk. We walk through examples, discussing why they might be good or bad (or both!), which perspectives they might have focused on, and where in the stack they might have originated.
Takeaways:
- Characteristics of error handling and validations in different parts of the stack
- How the way we write messages is a product of balancing competing requirements
- How to use clues in messages to uncover root causes
- Using your tester mindset to explore tech and architecture choices to uncover risk
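To make the last two takeaways concrete, here is a minimal, purely illustrative sketch (not from the webinar itself) of the kind of reasoning a tester might apply when reading an error message: certain phrasings are telltale clues about which layer of the stack produced it. The clue patterns below are assumptions chosen for demonstration, not a definitive classification.

```python
import re

# Hypothetical clue patterns, ordered from deepest layer to shallowest.
# A raw constraint violation hints at the database; a stack trace or HTTP
# status hints at the backend; polite, field-specific wording hints at
# frontend validation written with the end user in mind.
LAYER_CLUES = [
    ("database", re.compile(r"\b(SQL|constraint|deadlock|duplicate key)\b", re.I)),
    ("backend/API", re.compile(r"\b(500|timeout|exception|stack trace)\b", re.I)),
    ("frontend validation", re.compile(r"\b(required|must be|please enter)\b", re.I)),
]

def guess_origin(message: str) -> str:
    """Return the first stack layer whose clue pattern matches the message."""
    for layer, pattern in LAYER_CLUES:
        if pattern.search(message):
            return layer
    return "unknown"
```

For example, `guess_origin("Email is required")` suggests frontend validation, while `guess_origin("duplicate key value violates unique constraint")` points at a database error leaking through untranslated. A real investigation would of course weigh far more context than keyword matching, but the sketch captures the mindset: treat message wording as evidence about architecture.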