
Check out my upcoming Webinar

Gerie Owen to Present Webinar
Webinar: How Did I Miss That Bug? Managing Cognitive Bias in Testing
How many bugs have you — or your teams — missed that were clearly easy to spot?
Testers approach all phases of testing hampered by their own biases in what to look for, how to set up and execute tests, and how best to interpret the results.  Understanding your biases, your preconceived notions, and your ability to focus your attention is the key to managing cognitive bias in test design, test execution, and defect detection.
Join the webinar
In this July 11 webinar at 11am PDT, Gerie Owen will give testers and test managers an understanding of how testers’ mindsets and cognitive biases influence their testing. With over 25 years of test-driven development experience to tap into, Gerie will provide tips for managing your biases and focusing your attention in the right places throughout the test process so you won’t miss that obvious bug.
This webinar presentation uses principles from the social sciences — such as Kahneman’s framework for critical thinking and Chabris and Simons’ findings on attention, perception, and memory — and short, enjoyable exercises on preconceived notions. With Gerie’s help, improve your individual and test team results.
What the participants will learn:
  • Why we aren’t as smart as we think, i.e., how we develop biases and preconceived notions.
  • How biases and preconceived notions negatively impact our approach to testing throughout the test process.
  • How to design a test approach to effectively manage the way we think during the test process.
  • Ways managers can increase their teams’ effectiveness by improving their focus.
  • Tips for finding the obvious bugs you are missing.
Main Message:
Become a top-performing tester by understanding your biases. With Gerie Owen’s tips, you’ll learn the keys to great test design, test execution, and effective defect detection. Register today!

Understanding How Assistive Technologies Make Products Accessible


Hi Testers, welcome to the brave new world of assistive technology.  Assistive technology is a term that refers to all types of technological devices that enhance the quality of life and improve independent function for people living with disabilities.  Assistive technology is available for people with visual, hearing, and mobility disabilities, as well as cognitive impairments.  Assistive technologies range from low-tech devices that assist with daily living activities such as eating or showering to high-tech readers for the blind and listening devices for the deaf.

How does this fit into the category of software or device testing?  In many cases, testers are responsible for accessibility testing of new products, including web applications, personal fitness devices, and hardware/software products for finance, transportation, and other areas.  Knowing how the assistive devices work, and understanding how they are tested, is an essential part of understanding the requirements of accessibility.

My friend Max introduced me to his world of assistive technologies when I visited him recently.  He is legally blind and has an impressive array of assistive technologies in his home office.  His devices range from lighted magnifying glasses to the Optelec reader and Zoomtext.  The Optelec reader will not only magnify pages of magazines but also scan and read them aloud.  Zoomtext is a software program that will enlarge, enhance, and read aloud everything on a computer screen.  Using Zoomtext, Max is currently writing a book.

Max is an avid reader, and when the Library of Congress digitized books, Max was a beta tester for the various reading devices.  Max uses his phone to dial into the National Federation of the Blind’s Newsline to keep up with current events.  He has access to thousands of newspapers and can listen to articles of his choice.  I was so impressed with how much these technologies improved Max’s quality of life.

After my assistive technologies demo, as a tester, I naturally became interested in how, and by whom, these devices and software programs would be tested.  My initial thoughts centered on accessibility testers, who apply specialized accessibility test techniques to determine how usable a product is for people with disabilities.  Accessibility testing is critical for websites and software programs, yet testing assistive devices requires something more.  More than any other type of device or software, assistive devices and software must be designed and tested based on the needs of the user.

Since usability testing is so critical for assistive technology, I realized that human experience testing is as applicable here as it is to wearables.  I recently developed a framework for human experience testing that I’ve presented at several testing conferences.  Human experience testing goes beyond usability in scope, depth, and approach.  The closer the device comes to the human, the more important “Human” Testing becomes.

The Human Experience testing framework uses personas and user value stories to test the interaction between the person and the device.  Personas are detailed descriptions of the archetypical users who represent the needs and motivations of the user group.  They capture the motivations, values, expectations, and goals users have for their interaction with the device.  User value stories describe the ways in which users interact with the device, based on how they go about their daily lives.  Since people with disabilities depend on assistive technologies in their daily lives, human experience testing is critical.
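To make this concrete, here is a minimal sketch of how personas and user value stories might be captured as simple data structures for test planning.  The names and attributes are hypothetical illustrations, not part of the published framework.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserValueStory:
    """One way a user interacts with the device in daily life."""
    activity: str     # what the user is doing
    interaction: str  # how the device supports that activity
    value: str        # why the interaction matters to the user

@dataclass
class Persona:
    """Archetypical user representing a user group's needs and motivations."""
    name: str
    disability: str
    motivations: List[str]
    expectations: List[str]
    stories: List[UserValueStory] = field(default_factory=list)

# A hypothetical persona, loosely inspired by Max, for test planning only.
max_persona = Persona(
    name="Max",
    disability="legally blind",
    motivations=["stay informed", "keep writing"],
    expectations=["screen content is read aloud accurately"],
    stories=[UserValueStory(
        activity="reading the morning paper",
        interaction="listens to articles through a phone-based news service",
        value="keeps up with current events independently",
    )],
)
```

Test scenarios then fall out of each story: the tester walks through the activity as the persona would, with the persona’s expectations as the pass criteria.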

The use of personas in assistive technology design is happening today.  The Assistive Technology Industry Association (ATIA) is currently working with Jeff Higginbotham, PhD, professor at the University at Buffalo, on promoting persona-based design for assistive technologies, and Microsoft is pioneering the concept.  So it follows that testing should involve the human experience.

I believe that testing assistive technology requires not only special test techniques but also special testers.  The initial testing of the prototypes and human experience testing can be done by accessibility testers; however, the final user experience testing should be done by those for whom the device is designed, those who will actually use the device in their daily lives.

Testing assistive technology is not only challenging and fascinating but also rewarding on many levels.  As my friend Max told me when he introduced me to his assistive technologies, “Assistive technology makes life a little bit easier.”

To what extent should testers and QA engineers be involved in software design?

Traditionally testers and QA engineers have had minimal involvement with software design.  Design has been the role of the software architect, or team lead, for many years.  Depending on the team, input from testers at this stage of the software development lifecycle isn’t always valued.

But in some circumstances that is changing.  In particular, testers have a real contribution to make when one of the product goals is “design to test”.  Architects who recognize that contributions can come from a variety of sources are soliciting testing feedback when creating an overall design.

And testers have even more to contribute to design in Agile projects, especially when the team is using Test-Driven Development (TDD).  Testers typically have a more complete picture of user needs, based on their in-depth understanding of user stories and their interactions with the Product Owner.

Because design is something that grows with the application in Agile, testers can always look at what the developers are doing.  If the team starts letting the design get complex, or difficult to test, it’s time to have a talk with the developers about making the design more straightforward.  It may require a hardening sprint or two, but it will keep technical debt down.

For testers, here are some of the things you might consider as you share your expertise with architects and developers.

Do:

  • Provide feedback on design for testability. You don’t want to accumulate testing debt.
  • Get deeply involved in TDD projects. This is your area of expertise.
  • Provide feedback on design decisions during an Agile project.

Don’t:

  • Attempt to give advice outside of your area of expertise.
  • Reject feedback on your design ideas. Everyone has something to contribute.

The Brave New World of Security Testing

Cybersecurity.  We hear about it every day, whether it’s yet another major security breach in the news or a new security initiative within your own organization, such as a directive to change your password more frequently.  We may have been impacted personally by fraudulent credit card charges or identity theft, or know someone who has.  Cybersecurity affects everyone, both personally and professionally.  Although everyone in the organization is responsible for cybersecurity at some level, security testing is critical.  Whether or not you choose a career path in security testing, all testers should include high-level security test scenarios in their test plans.  Testers, welcome to the world of hackers and crackers, the brave new world of security testing.

Hackers, Crackers and Attacks

In order to join the world of security testing, it is important to understand the attackers, the most common types of attacks, and how they happen.  Testers, meet the hackers and crackers!  Hackers are people who gain unauthorized access to an application.  Their motives vary from malicious intent to testing for vulnerabilities.  Hackers who are hired to determine whether the application can be breached are often called ethical hackers.  Crackers are malicious hackers who break into an application to steal data or cause damage.

The most prevalent types of attacks are state-sponsored attacks, advanced persistent threats, and ransomware/denial of service.  State-sponsored attacks are penetrations perpetrated by foreign governments, terrorist groups, and other outside entities.  Advanced persistent threats are continuous attacks aimed at an organization, often for political reasons.  Ransomware locks data and requires the owner to pay a fee to have their data released.  Denial of service makes an application inaccessible to its users.

Some of the usual means by which hackers and crackers attack are SQL injection, cross-site scripting (XSS), URL manipulation, brute force attacks, and session hijacking.  Using SQL injection, an attacker passes malicious SQL through URLs or text fields so that it is executed against the database.  Cross-site scripting involves injecting a JavaScript, ActiveX, or HTML script into a website on the client side in order to obtain clients’ confidential information.  With URL manipulation, a hacker attempts to gain access by changing the URL.  Brute force attacks rely on automation to try large numbers of combinations of user IDs and passwords until one grants unauthorized access.  Finally, hackers use session hijacking to steal the session once a legitimate user has successfully logged in.
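As an illustration of how a tester might probe for the simplest of these, here is a hedged sketch of a SQL injection check against a login form.  The URL, field names, and the “dashboard” success marker are hypothetical assumptions, and real penetration testing goes far deeper than this.

```python
import requests

LOGIN_URL = "https://example.com/login"  # hypothetical endpoint

# Classic injection payloads; a vulnerable login that concatenates user
# input into its SQL query may authenticate without valid credentials.
PAYLOADS = ["' OR '1'='1", "admin'--"]

def test_login_rejects_sql_injection():
    for payload in PAYLOADS:
        resp = requests.post(
            LOGIN_URL,
            data={"username": payload, "password": payload},
            timeout=10,
        )
        # Assumption: a successful login redirects to a dashboard page.
        assert "dashboard" not in resp.url, (
            f"Possible SQL injection: payload {payload!r} appears to log in"
        )
```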

What is Security Testing?

Security testing validates that an application does not have code issues that could allow unauthorized access to data and potential data destruction or loss.  The goal of security testing is to identify these bugs, which are called threats and vulnerabilities.  Some of the most common types of security testing include vulnerability and security scanning, penetration testing, security auditing, and ethical hacking.

Vulnerability scanning is an automated test in which the application code is compared against known vulnerability signatures.  Vulnerabilities are bugs in code that allow hackers to alter the operation of the application in order to cause damage.  Security scans find network and application weaknesses, and penetration testing simulates an attack by a hacker.  Security auditing is a code review designed to find security flaws.  Finally, ethical hacking involves attempting to break into the application to expose security flaws.

The Challenges of Security Testing

Security testing requires a very different mindset from traditional functional and non-functional testing.  Rather than attempting to ensure the application works as designed, security testing attempts to prove a negative, i.e., that the application does not have vulnerabilities.  Security vulnerabilities are very difficult bugs, both to find and to fix.  Fixing a security vulnerability often involves design changes, so it is important to consider security testing in the earliest possible phases of the project.

Although security testing requires automation and specialized skills, all testers can contribute effectively to security testing.  There are several areas in which testers can incorporate security testing into their functional testing.  These include logins and passwords, roles and entitlements, forward and backward navigation, session timeouts, content uploads, and tests involving financial or any other type of private information.  Simple tests, such as ensuring passwords are encrypted, validating that the user is locked out after three invalid password attempts, and confirming that the user is timed out after the required number of minutes of inactivity, are easy ways of spotting security vulnerabilities.
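A minimal sketch of the lockout check just described, assuming a hypothetical login endpoint that signals a locked account with HTTP 423 or a “locked” message after three failures:

```python
import requests

LOGIN_URL = "https://example.com/api/login"  # hypothetical endpoint

def test_lockout_after_three_invalid_attempts():
    session = requests.Session()
    # Three deliberately wrong passwords for a known test account.
    for _ in range(3):
        session.post(LOGIN_URL,
                     data={"username": "testuser", "password": "wrong"},
                     timeout=10)
    # A fourth attempt with the *correct* password should now be refused.
    resp = session.post(LOGIN_URL,
                        data={"username": "testuser", "password": "Correct#1"},
                        timeout=10)
    # Assumption: the API signals a locked account with HTTP 423 or a message.
    assert resp.status_code == 423 or "locked" in resp.text.lower()
```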

Testers, if you are interested in specializing in security testing, start by learning to use security testing scanners and tools.  As security testing becomes increasingly important, the need for specialists in this area is great.  However, it is critical for all testers to support security testing by incorporating security scenarios in our test plans.  Our organizations depend on us to apply the same skills we use to think like a user.  Testers, let’s embrace this brave new world and think like hackers!

The Brave New World of Big Data Testing

As testers, we often have a love-hate relationship with data.  Processing data is our applications’ main reason for being, and without data we cannot test.  Yet data is often the root cause of testing issues; we don’t always have the data we need, which blocks test cases and causes defects to be returned as “data issues”.

Data has grown exponentially over the last few years and continues to grow.  We began testing with megabytes and gigabytes and now terabytes and petabytes have joined the data landscape.  Data is now the elephant in the room, and where is it leading us?  Testers, welcome to the brave new world of Big Data!

What is Big Data?

Big Data has many definitions; the term is often used to describe both volume and process.  Sometimes it refers to the approaches and tools used for processing large amounts of data.  Wikipedia defines it as “an all-encompassing term for any collection of data sets so large and complex that it becomes difficult to process using on-hand data management tools or traditional data processing applications.”  Gartner defines big data as “high-volume, high-velocity and/or high-variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation.”  In terms of volume, big data usually means at least five petabytes (5,000,000,000 megabytes).

However, Big Data is more than just size.  Its most significant aspects are the four “V’s”.  Big data obviously has huge volume, the sheer amount of data; but it also has velocity, the speed at which new data is generated and transported; variety, which refers to the many types of data; and veracity, its accuracy and quality.

Testers, can you see some, make that many, test scenarios here?  Yes, big data means big testing. In addition to ensuring data quality, we need to make sure that our applications can effectively process this much data. However, before we can plan our big testing, we need to learn more about the brave new world of big data.

Big Data is usually unstructured, which means that it does not have a defined data model.  It does not fit neatly into organized columns and rows.  Although much of the unstructured big data comes from social media, such as Facebook posts and tweets, it can also take audio and visual forms.  These include phone calls, instant messages, voice mails, pictures, videos, PDFs, geospatial data, and slide shares.  So it seems our big testing SUT (system under test) is actually a giant jellyfish!

Challenges of Big Data Testing

Testing Big Data is like testing a jellyfish: because of the sheer amount of data and its unstructured nature, the test process is difficult to define.  Automation is required, and although there are many tools, they are complex and require technical skills for troubleshooting.  Performance testing is also exceedingly complex given the velocity at which the data is processed.

Testing the Jelly Fish

At the highest level, the big data test approach involves both functional and non-functional components.  Functional testing includes validating both the quality of the data itself and the processing of it.  Test scenarios in data quality include completeness, correctness, lack of duplication, and so on.  Data processing can be done in three ways, interactive, real-time, and batch; however, they all involve movement of data.  Therefore, all big data testing strategies are based on the extract, transform, and load (ETL) process.  Testing begins by validating the data quality coming from the source databases, then validating the transformation or process through which the data is structured, and then validating the load into the data warehouse.
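As an illustration, the data-quality scenarios above can be scripted against a staged extract.  This sketch uses SQLite for simplicity; the table and column names are hypothetical, and a real pipeline would run equivalent queries in Hive or another big data store.

```python
import sqlite3

# Hypothetical staged extract in SQLite for illustration only.
conn = sqlite3.connect("staging.db")

def check_completeness(expected_rows: int) -> bool:
    """Completeness: every source row arrived in the staged table."""
    (actual,) = conn.execute("SELECT COUNT(*) FROM customers").fetchone()
    return actual == expected_rows

def check_no_duplicates() -> bool:
    """Lack of duplication: no business key appears more than once."""
    dupes = conn.execute(
        "SELECT customer_id FROM customers "
        "GROUP BY customer_id HAVING COUNT(*) > 1").fetchall()
    return not dupes

def check_correctness() -> bool:
    """Correctness (a simple proxy): mandatory fields are populated."""
    (nulls,) = conn.execute(
        "SELECT COUNT(*) FROM customers WHERE email IS NULL").fetchone()
    return nulls == 0
```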

ETL testing has three phases.  The first phase is data staging.  Data staging is validated by comparing the data coming from the source systems to the data in the staged location.  The next phase is MapReduce validation, or validation of the transformation of the data.  MapReduce is the programming model for processing large, unstructured data sets in parallel; its best-known implementation is in Hadoop.  This testing ensures that the business rules used to aggregate and segregate the data are working properly.  The final ETL phase is output validation, in which the output files from the MapReduce process are ready to be moved to the data warehouse.  In this stage, testers confirm that data integrity is maintained and that the transformation is complete and correct.  ETL testing, especially at the speed required for big data, requires automation, and luckily there are tools for each phase of the ETL process; among the best known are MongoDB, Cassandra, Hadoop, and Hive.
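To make the MapReduce phase concrete, here is a toy map/reduce written in plain Python, with the kind of validation check a tester would automate.  The records and expected totals are invented; real validation would compare a Hadoop job’s output against independently computed aggregates.

```python
from collections import defaultdict

# Invented input: (region, sale_amount) records standing in for HDFS data.
records = [("east", 100), ("west", 50), ("east", 25), ("west", 75)]

def map_phase(recs):
    """Emit (key, value) pairs, as a mapper would."""
    for region, amount in recs:
        yield region, amount

def reduce_phase(pairs):
    """Aggregate values per key, as a reducer would."""
    totals = defaultdict(int)
    for region, amount in pairs:
        totals[region] += amount
    return dict(totals)

output = reduce_phase(map_phase(records))

# Validation: aggregated totals must match independently computed sums.
expected = {"east": 125, "west": 125}
assert output == expected, f"Aggregation rule broken: {output} != {expected}"
```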

Do You Want To Be A Big Data Tester?

Testers, if you have a technical background, especially in Java, big data testing may be for you.  You already have strong analytical skills and you will need to become proficient in Hadoop and other Big Data tools.  Big Data is a fast-growing technology and testers with this skill set are in demand.  Why not take the challenge, be brave and embrace the brave new world of big data testing!

The Brave New World of Accessibility Testing

Testers, as you well know, technology, especially the web, has opened up new worlds for everyone who uses it.  But have you ever thought about how the ability to access technology impacts the lives of those with special needs?  Imagine being blind yet able to read, unable to hear or speak yet able to chat, or being completely paralyzed but able to travel the world.  Technology has made all this possible for those with special needs, enriching their lives in ways they never imagined.

According to the US Census Bureau, 19 percent of the population in 2012 had a disability, and half of these reported a severe disability; accessibility testing will therefore continue to grow in importance.  So testers, welcome to the brave new world of accessibility testing!

People with special needs use special technologies, including screen readers, screen magnification software, speech recognition software, and special keyboards, for communication, work, and personal fulfillment, yet not all websites are friendly to these users.  Accessibility testing is defined as a subset of usability testing that is geared toward users of all abilities and disabilities.  The focus of this type of testing is to verify not only usability but accessibility.

So how do we test accessibility?  As with any usability testing, focus on the users.  This means not only users with various disabilities and severities thereof, but also those with limited computer literacy, infrastructure, access, and equipment.  We may look for standards against which to measure, as well as legal requirements that must be satisfied.  In the United States, Section 508 of the Rehabilitation Act requires that all of the federal government’s electronic and information technology be made accessible to everyone; however, this applies to federal agencies only.  The World Wide Web Consortium (W3C), the main international standards organization for the Internet, has created guidelines for making web content accessible to people with disabilities.

Web Content Accessibility Guidelines (WCAG) 2.0

The Web Content Accessibility Guidelines provide recommendations on making web content more accessible to people with disabilities.  They provide conditions for testing in the form of success criteria based on four principles: content must be perceivable, operable, understandable, and robust.

In order to be perceivable, web content must provide alternatives for non-text content and time-based media.  Examples include providing options for braille translations and captions for audio-only or video-only recordings.  Content should be able to be presented in different formats, and foreground should be separated from background for easier reading.

Operability requires that all actions can be executed from a keyboard and that time limits for actions can be extended.  Flashing should be limited, as it is known to cause seizures.  Finally, navigation help should be provided in various contexts so that users know where they are in the application and are able to find content.

Understandable content is easy to read, i.e., it limits jargon and abbreviations and is written at lower levels of reading ability.  In addition, web pages should appear in predictable ways, and functionality should be provided to help users correct their mistakes.

Robustness means that the web content should be able to be interpreted by current and future technologies including assistive technologies.

WCAG 2.0 goes one step further by breaking down the success criteria into levels of conformance.  Level A is the minimum level of conformance.  Level AA requires meeting all Level A success criteria as well as the success criteria set at Level AA, or providing an alternate version of the web content; this is the level recommended for most websites.  Level AAA is the highest level of conformance, and it is not possible for all web content to satisfy its success criteria.

Web Accessibility Testing

How do testers determine if the website under test meets the WCAG success criteria?  The good news is that there are automated tools available for this.  These tools evaluate the syntax of the website’s code, search for known patterns that cause accessibility issues, and identify elements on web pages that could cause problems.  These tools may find both actual and potential accessibility issues.  Interpreting the test results requires knowledge and experience in accessibility issues.
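As a small example of the kind of pattern check such tools perform, here is a sketch that flags images with no alt attribute, one of the most common accessibility failures (WCAG success criterion 1.1.1).  It assumes the BeautifulSoup library and a locally saved page; a real tool checks hundreds of patterns like this.

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def images_missing_alt(html: str) -> list:
    """Return <img> tags that have no alt attribute at all.

    Decorative images may legitimately carry an empty alt="", so only a
    missing attribute is flagged here (WCAG 1.1.1, non-text content)."""
    soup = BeautifulSoup(html, "html.parser")
    return [img for img in soup.find_all("img") if img.get("alt") is None]

if __name__ == "__main__":
    with open("page.html", encoding="utf-8") as f:  # hypothetical saved page
        offenders = images_missing_alt(f.read())
    for img in offenders:
        print("Missing alt text:", img.get("src", "<no src>"))
```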

However, as with all types of testing, especially usability, accessibility testing cannot be completely automated.  And it is important that all testers consider accessibility as we execute our functional tests.  For example, try turning off the mouse and track pad to make sure all functions are operable from the keyboard, and try turning on Windows High Contrast Mode to see how the application works for low-vision users.  And what happens when images are turned off?  Can you still understand the context of the content in the application?  Testers, always remember, our job is evaluating the quality of the application, and that means ALL users must be able to access and derive value from the applications we test.

Mobile Testing – How Much is Enough?

We all know that mobile apps can never be fully tested.  There are too many devices, too many OS versions, and too many different types of apps.  Testers continue to struggle with developing test plans and test cases the same way they did with traditional applications.  How do testers determine the correct scope of testing for an app to minimize quality and security risks for users and the organization?

First, look at organizational expectations.  Does your organization expect its software to work almost perfectly?  And does it fund and support development and testing to make that a realistic expectation?

If the answer to the first question is yes, but the second one is no, then the best you can do is set realistic expectations that may not be in line with what the organization wants.  If quality isn’t a high priority, then testing can be focused on high-priority areas only.

Second, is the app business-critical?  Does the organization depend on it to make money, service customers, or be more agile?  Or is it a marketing or public relations tool?  Does it provide valuable information to users, or is it simply nice to have?  If it is business-critical, then flaws could hurt the bottom line, and quality becomes a higher priority.  And that’s true in all parts of the app, not just its operational aspects.  If users find poor quality in any part, they are unlikely to trust it to do business.

Third, consider the implications of a flaw to both users and the organization.  A game or other entertainment app often has a high threshold of failures before most users think it’s not worth the effort.  Likewise for a free app with limited extrinsic value.

But if the app performs an important function, or one that is counted on by users, a major flaw can have serious or even disastrous effects.  One example is the infamous iPhone time change bug in 2010, which failed to move from Daylight Saving Time on the appointed day.  Thousands of people were reportedly late for work or appointments as their alarms failed to go off on time.  It was an iOS flaw fixed by Apple a few weeks later.

Fourth, does the app use external services?  Many apps make use of other services within the enterprise, or commercial services for information such as weather or sports scores.  While testers don’t have to test the services themselves, they should read and understand what the service level agreements (SLAs) say about performance and capacity, and periodically test to make sure those commitments are being fulfilled.
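A minimal sketch of such a periodic SLA spot check, assuming a hypothetical weather service and an assumed 500 ms response-time clause in its SLA:

```python
import requests

SERVICE_URL = "https://api.example.com/weather"  # hypothetical external service
SLA_MAX_SECONDS = 0.5  # assumed response-time commitment from the SLA

def test_service_meets_sla():
    # One request is a spot check; in practice this would run on a schedule
    # and track results over time.
    resp = requests.get(SERVICE_URL, params={"city": "Boston"}, timeout=5)
    assert resp.status_code == 200, "Service unavailable"
    elapsed = resp.elapsed.total_seconds()
    assert elapsed <= SLA_MAX_SECONDS, (
        f"Response took {elapsed:.2f}s, exceeding the {SLA_MAX_SECONDS}s SLA"
    )
```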


Now About Security

It goes without saying that security flaws are also quality flaws.  But many security flaws have significant consequences for both users and the organization.  If there is no data, or only trivial data, to protect, security testing may not be important.  But if the app handles names, email addresses, financial data, or any other identifying information, that data needs to be protected at all costs.

In particular, testers have to know what data is being collected, and where data is stored.  On mobile devices, that can sometimes be a challenge, because it can be in internal storage or on a SIM, and it may not be readily apparent that data is being stored, and in any case often can’t easily be accessed.

In the era of hundreds of different devices and OS versions, as well as BYOD, it’s unrealistic to limit the kinds of devices that an app can be used on, even for internal users.  Testers have no control over the device or OS for testing or deployment purposes, and testers simply can’t test all combinations.

But teams likely have control over how and where the device stores data locally.  And they have control over how data is transferred to and from the device, and how it’s stored on the back end.  That’s what testers have to focus on.  Is any personal or identification data encrypted on the device, and is it encrypted during transmission?
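One hedged way to check the first question is to pull the app’s local data store off a test device and scan it for readable personal data.  This sketch assumes a SQLite store at a hypothetical path; encrypted or hashed values should not match a readable email pattern, so any hit suggests unencrypted PII.

```python
import re
import sqlite3

DB_PATH = "app_data.db"  # hypothetical store copied from a test device
EMAIL_PATTERN = re.compile(rb"[\w.+-]+@[\w-]+\.[\w.]+")

def find_plaintext_emails(db_path: str) -> list:
    """Scan every column of every table for values resembling email addresses."""
    hits = []
    conn = sqlite3.connect(db_path)
    conn.text_factory = bytes  # read raw bytes so non-UTF-8 blobs don't fail
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        for row in conn.execute(f'SELECT * FROM "{table}"'):
            for value in row:
                if isinstance(value, bytes) and EMAIL_PATTERN.search(value):
                    hits.append((table, value[:60]))
    conn.close()
    return hits

if __name__ == "__main__":
    for table, sample in find_plaintext_emails(DB_PATH):
        print(f"Possible plaintext PII in table {table}: {sample!r}")
```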

Testing on mobile devices requires a combination of techniques, from traditional test cases to risk-based testing to device farm testing and, if appropriate, crowdsourcing.  The test plan should be designed to use these techniques to test different aspects of the application.  Test cases, for example, can test traditional quality measures as well as security.  Device farms can be used for in-depth testing of popular devices.

Overall, test results should provide a clear picture of the quality of important aspects of the app given its purpose, and an overview of quality in general.