Testing Wearables:  The Human Experience

I became interested in testing wearables in a rather unusual way.  I ran the Boston Marathon.  So, you ask, what does the Boston Marathon have to do with wearables and the testing of them?  Well, every runner was wearing at least one “wearable”.  Wearables are electronics that can be worn on the body as an accessory or as part of one’s clothing.  One of the major features of wearable technology is its ability to connect to the Internet, enabling data to be exchanged between a network and the device.  Wearables often contain monitoring and tracking functionality.

Wearables have become a part of most runners’ gear; they wear sports watches with GPS functionality and often carry smartphones.  Yet every runner in the 2011 Boston Marathon also had another wearable attached to their clothing: a bib with their name and registration number.  Today, the bib also contains an RFID chip.  The chip records the runner’s exact race time by registering with a series of mats with RFID readers at the starting line, along the course and at the finish.  The first time it was tried, there was only one glitch: not all of the RFID chips registered with the readers.
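
To make the failure mode concrete, here is a minimal sketch, with invented bib numbers and times, of the kind of reconciliation check a timing system’s test harness might run: compare the reads collected at each mat against the list of registered bibs and flag any runner whose chip never registered.

```python
# Hypothetical reconciliation check for a chip-timing system: each mat
# reader reports the bibs it registered (with elapsed seconds), and any
# registered bib missing from a mat's reads is a lost result.
registered_bibs = {101, 102, 103, 104}

mat_reads = {
    "start":   {101: 0.0, 102: 0.0, 103: 0.0, 104: 0.0},
    "mile_13": {101: 3540.0, 102: 3655.2, 104: 3702.8},  # bib 103 missed
    "finish":  {101: 7212.5, 102: 7498.1, 104: 7611.0},  # bib 103 missed
}

for mat_id, reads in mat_reads.items():
    missing = registered_bibs - reads.keys()
    if missing:
        print(f"Mat {mat_id}: no read for bibs {sorted(missing)}")
```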

Although this failure did not create a life-threatening situation, it created a great deal of consternation and disappointment among those runners whose races did not get recorded.  For runners who had run a qualifying time and/or a personal record, their elation and joy at the finish line turned to grief and anguish when they found out that their times did not register.  And yes, I was one of those runners.

As a tester, I began to question not only what had and had not been tested, but I also became keenly aware of the impact that the failure of this wearable had on the user.  I realized that what all wearables have in common is that they have a purpose or function, coupled with human interaction that provides value by enabling the user to achieve a goal.  Unless the runner ran the course and stepped on the mats, the chip in the runner’s bib would have no way of providing any value.

This analysis led me to realize that the human user must be an integral part of the testing.  Furthermore, the more closely a device integrates with a human, the more important the human’s role in testing becomes.  When a networked device is physically attached to us and works with us and through us, the results of the collaboration matter to us physically and emotionally.  From this experience, I devised a framework for testing this collaboration, which I call Human Experience testing.

The Brave New World of COTS Testing

Testing a COTS system?  Why would we need to test a COTS package?  Often, project managers and other stakeholders mistakenly believe that one of the benefits of purchasing COTS software is that there is little, if any, testing needed.  This could not be further from the truth.

COTS, Commercial Off-The-Shelf software, refers to applications that are sold or licensed by vendors to organizations that wish to use them.  This includes common enterprise applications such as Salesforce.com, Workday, and PeopleSoft.  The code delivered to each purchasing organization is identical; however, there is usually an administration module through which the application can be configured to more closely match the needs of the buyer.  The configurations will usually be done by the vendor or by an integrator hired by the purchasing organization.  Some COTS software vendors also make customizations, which involve changes to the base code, to accommodate purchasing organizations.  SaaS, Software as a Service, products are usually COTS software.

Testing COTS software requires a different focus from traditional testing approaches.  Although no COTS package will be delivered free of bugs, the focus of testing from the purchasing organization’s perspective is not on validating the base functionality.  Since the COTS software is not developed specifically to meet user-defined requirements, requirements-based testing is not straightforward.  In order to plan the testing effectively, test managers and testers need to focus on the areas where changes in the end-to-end workflow are made.  The major areas of focus for COTS software testing include customizations and configurations, integrations, data, and performance.

The focus of traditional functional testing when implementing a COTS package is on the customizations, if any, and the configurations.  Customizations, since they involve changes to the actual code, carry the highest risk; however, configurations are vitally important because they are the basis of the workflows.  Testers need to understand which parts of the workflow involve configurations versus base code or customized code.  Although the integrators sometimes provide this information, often the test team must obtain it from vendor documentation.  Business workflows will often need to change in order to achieve the same results through the COTS software, and testers must consider this as they develop their test cases.

Integrations are a critical area of focus when testing a COTS package.  COTS software packages are often large Customer Relationship Management or Enterprise Resource Planning systems, and as such, they must be integrated with many legacy systems within the organization.  Often, the legacy systems have older architectures and different methods of sending and receiving data.  Adding to the complexity, new code is almost always needed to connect to the COTS package.  Understanding the types of architectures and testing through the APIs and other methods of data transmission is a new challenge for many testers.
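
As a hedged illustration of testing through an API, the sketch below (Python standard library only; the endpoint, payload, and field names are hypothetical) shows the shape of such a check: push a record through the interface a legacy system would use and verify that the COTS package acknowledges it.

```python
import json
import urllib.request

# Hypothetical integration check: push a customer record through the
# interface a legacy system would use, then verify the COTS package
# acknowledges it. Endpoint and field names are illustrative only.
ENDPOINT = "https://cots.example.com/api/v1/customers"

record = {"customerId": "C-1001", "name": "Jane Runner", "region": "NE"}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(record).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    body = json.loads(response.read())
    # The acknowledgement should echo the key the legacy system will
    # later use to reconcile its own records.
    assert response.status == 201, f"unexpected status {response.status}"
    assert body.get("customerId") == record["customerId"]
```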

Data testing is extremely important to the end-to-end testing of COTS software.  Testers must understand the data dictionary of the new application, since data names and types may not match those of the existing software.  Often, as with configurations, the testers must work with the vendor or integrator to understand the data dictionary.  In addition, the tester must also understand the ETL, or extract, transform and load, mechanisms.  This can be especially complicated if a data warehouse is involved.  Since a data migration will likely be needed, the data transformations will need to be thoroughly tested.
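
A minimal sketch of this kind of check, using Python’s built-in sqlite3 and hypothetical source and target tables: compare row counts across the migration, then verify a sample transformation rule (here, an assumed mapping of legacy status codes to new enumerated values).

```python
import sqlite3

# Illustrative migration check against hypothetical source/target tables.
conn = sqlite3.connect("migration_test.db")

src_count = conn.execute("SELECT COUNT(*) FROM legacy_customers").fetchone()[0]
tgt_count = conn.execute("SELECT COUNT(*) FROM cots_customers").fetchone()[0]
assert src_count == tgt_count, f"row counts differ: {src_count} vs {tgt_count}"

# Verify one assumed transformation rule: legacy single-letter status
# codes ('A'/'I') should map to the new enumerated values.
status_map = {"A": "ACTIVE", "I": "INACTIVE"}
rows = conn.execute(
    """SELECT s.customer_id, s.status, t.status
       FROM legacy_customers s JOIN cots_customers t
       ON s.customer_id = t.customer_id"""
)
for customer_id, old_status, new_status in rows:
    expected = status_map.get(old_status)
    assert new_status == expected, (
        f"customer {customer_id}: {old_status!r} mapped to "
        f"{new_status!r}, expected {expected!r}"
    )
```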

ETL testing requires a completely different skill set from that of the manual, front-end tester.  Often, the organization purchasing the COTS package will need to contract with resources that have the appropriate skills.  SQL knowledge and a thorough understanding of how to simulate data interactions using SOAP or XML are required for data testing.  An understanding of SOA, Service Oriented Architecture, and of the tools used to test web messages is also quite helpful.
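
For example, a data tester may need to hand-build a SOAP message to simulate what an upstream system would send.  The sketch below uses only the Python standard library; the application namespace and fields are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Build a minimal SOAP envelope simulating an upstream system's
# "customer update" message. The application namespace and fields
# are illustrative only.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
APP_NS = "http://example.com/cots/customer"

ET.register_namespace("soap", SOAP_NS)
ET.register_namespace("cust", APP_NS)

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
update = ET.SubElement(body, f"{{{APP_NS}}}UpdateCustomer")
ET.SubElement(update, f"{{{APP_NS}}}CustomerId").text = "C-1001"
ET.SubElement(update, f"{{{APP_NS}}}Status").text = "ACTIVE"

print(ET.tostring(envelope, encoding="unicode"))
```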

Performance testing is another area requiring a different approach.  Many systems, especially web applications, require a focus on load testing, or validating that the application can handle the required number of simultaneous users.  However, with large COTS applications that will be used internally within an organization, the focus is on the speed and number of transactions that can be processed, as opposed to the number of users.  The test scenarios for this type of performance testing can be huge in number and complexity.  Furthermore, the more complex scenarios are also data intensive.  This testing not only requires testers with solid technical performance test skills, but also requires a detailed data coordination effort with the integrations.
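
As a rough illustration of this transaction-oriented focus (not a substitute for a real performance test tool), the harness below times a batch of simulated transactions and reports throughput; process_transaction is a hypothetical stand-in for a call into the system under test.

```python
import time

def process_transaction(record):
    # Hypothetical stand-in for a call into the COTS system under test.
    time.sleep(0.001)  # simulate ~1 ms of processing

def measure_throughput(records):
    """Time a batch of transactions and return transactions/second."""
    start = time.perf_counter()
    for record in records:
        process_transaction(record)
    elapsed = time.perf_counter() - start
    return len(records) / elapsed

batch = [{"txn_id": i, "amount": 10.0 + i} for i in range(500)]
print(f"throughput: {measure_throughput(batch):.1f} transactions/sec")
```

In a real engagement the batch would be drawn from coordinated test data shared with the integration teams, which is why the data coordination effort matters as much as the timing harness.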

From beginning to end, testing the brave new world of COTS software requires a completely different approach focusing on configurations, integrations, data and performance.  This new approach offers new challenges and provides opportunities for testers to develop new strategies and skill sets.

To what extent should testers and QA engineers be involved in software design?

Traditionally testers and QA engineers have had minimal involvement with software design. Design has been the role of the software architect, or team lead, for many years. Depending on the team, input from testers at this stage of the software development lifecycle isn’t always valued.

But in some circumstances that is changing. In particular, testers have a real contribution to make when one of the product goals is “design to test”. Architects who recognize that contributions can come from a variety of sources are soliciting testing feedback when creating an overall design.

And testers have even more design contributions to make in Agile projects, especially when using Test-Driven Development (TDD). Testers typically have a more complete picture of user needs, based on their in-depth understanding of user stories and their interactions with the Product Owner.

Because design is something that grows with the application in Agile, testers can always look at what the developers are doing. If the team starts letting the design get complex or difficult to test, it’s time to have a talk with the developers about making the design more straightforward. It may require a hardening sprint or two, but it will keep the technical debt down.

For testers, here are some of the things you might consider as you share your expertise with architects and developers.

Do:
• Provide feedback on design for testability. You don’t want to accumulate testing debt.
• Get deeply involved in TDD projects. This is your area of expertise.
• Provide feedback on design decisions during an Agile project.

Don’t:
• Attempt to give advice outside of your area of expertise.
• Reject feedback on your design ideas. Everyone has something to contribute.

Testing in a Brave New World: The Importance of Data Masking

As testers today, we face a brave new world. Our conundrum, providing effective testing with less time, is more difficult than it has ever been. Challenges from disruptive technologies such as cloud, mobile devices and big data have taken testing to a whole new level of complexity. At the same time, we are also challenged with the “need for speed” as agile methodologies evolve into continuous delivery and continuous deployment. We can engage in only so much risk-based testing, so we are often tempted to use production data to speed up the test process. Ironically, those very same technologies make this practice increasingly dangerous. So what gives?

If production data is also privacy-protected data, our use of it in testing may be illegal. At the very least, it opens up the data for compromise.

Testers must collaborate with security professionals to develop a test data privacy approach, which is usually based on data masking. Data masking involves changing or obfuscating personal and non-public information. Data masking does not prevent access to the data; it only makes private data unrecognizable. Data masking can be accomplished by several methods, depending upon the complexity required. These range from simply blanking out the data, to replacing it with more generic data, to using algorithms to scramble the data. The challenge of data masking is that the data not only has to be unrecognizable, but must also remain useful for testing.
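
As a minimal sketch of the three approaches just listed (the field names, substitution values, and salt are hypothetical, and a production tool would also preserve referential integrity across tables), consider:

```python
import hashlib

record = {"name": "Jane Runner", "ssn": "123-45-6789", "city": "Boston"}

SALT = "test-env-secret"  # a real tool would use a managed secret

# 1. Blanking: remove the value entirely.
def blank(value):
    return ""

# 2. Substitution: replace with generic but realistic data.
def substitute_name(value):
    return "Test User"

# 3. Scrambling: derive an irreversible but repeatable token, keeping
#    the original format so downstream validation still passes.
def scramble_ssn(value):
    digest = hashlib.sha256((SALT + value).encode()).hexdigest()
    digits = "".join(c for c in digest if c.isdigit()).ljust(9, "0")[:9]
    return f"{digits[:3]}-{digits[3:5]}-{digits[5:]}"

masked = {
    "name": substitute_name(record["name"]),
    "ssn": scramble_ssn(record["ssn"]),
    "city": blank(record["city"]),
}
print(masked)  # same shape as production data, nothing identifiable
```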

There are two main types of data masking – static and dynamic. The usual approach is static data masking, where the data is masked prior to loading into the test environment. In this approach, a new database is created (which is especially important when testing is outsourced). However, the new database may not contain the same data, or data in the same states, as the actual database, issues which are very important in testing.

Dynamic data masking masks production data in real time as users request it. The main advantage of this approach is that even users who are authorized to access the production database never see the private or non-public data. Furthermore, dynamic data masking can be user-role specific; what data is masked depends upon the entitlements of the user who is requesting the data.
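
A small sketch of the role-based idea, with invented roles and an invented masking rule, might look like this: the same lookup returns masked or clear values depending on the caller’s entitlements.

```python
# Illustrative dynamic-masking shim: the same lookup returns masked or
# clear data depending on the requesting user's role (roles invented).
CUSTOMERS = {"C-1001": {"name": "Jane Runner", "ssn": "123-45-6789"}}

def mask_ssn(ssn):
    return "***-**-" + ssn[-4:]

def fetch_customer(customer_id, role):
    record = dict(CUSTOMERS[customer_id])
    if role != "privacy_officer":  # hypothetical entitled role
        record["ssn"] = mask_ssn(record["ssn"])
    return record

print(fetch_customer("C-1001", role="tester"))           # masked
print(fetch_customer("C-1001", role="privacy_officer"))  # clear
```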

Automated software tools are required to mask data efficiently and effectively. When evaluating data masking tools, it is important to consider the following attributes. Most important, the tool should mask the data so that it cannot be reversed and is realistic enough for testing. Ideally, the tool should provide both static and dynamic data masking functionality and possibly data redaction, a technique that is used for masking data in PDFs, spreadsheets and documents. Also, the tool should mask data for distributed platforms, including cloud. Here is a brief look at a variety of the vendors in this arena. As with any tool evaluation, organizations must consider their own specific needs when choosing a vendor.

According to Gartner’s Magic Quadrant, IBM, Oracle and Informatica are the market leaders in data masking for privacy purposes. All offer both static and dynamic data masking as well as data redaction. IBM offers integration with its Rational Suite. Oracle offers an API tool for data redaction and provides templates for Oracle eBusiness Suite and Oracle Fusion. Both the IBM and Oracle products are priced relatively high compared to other vendors.

Informatica offers data redaction for many types of files and is a top player in dynamic data masking for big data. It offers Dynamic Data Masking for Hadoop, Cloudera, Hortonworks and MapR. Informatica’s product is integrated with PowerCenter and its Application Information Lifecycle Management (ILM) product, which makes it a good choice for organizations who use those products.

Mentis offers a suite of products for static and dynamic data masking and data redaction, as well as data access monitoring and data intrusion prevention, at a reasonable cost. One of the most exciting features of these products is usability; not only are there templates available for several vendor packages, including Oracle eBusiness and PeopleSoft, but the user interface is also designed for use by the business as well as IT. Mentis was rated as a “challenger” by Gartner in 2013.

One of the least expensive products on the market, Net 2000 offers usability as its main feature. Net 2000 provides only static data masking, for Oracle and SQL Server databases. It was rated as a “niche” player by Gartner in 2013. This tool is a good choice for a small organization with a simple environment.

Data privacy is one of the most important issues facing test managers and testers today. Private and non-public data must not be compromised during testing; therefore, an understanding of data masking methodologies, approaches and tools is critical to effective testing and test management.

Agile Teams: When Collaboration becomes Groupthink

Does your agile team overestimate its velocity and capacity? Is the team consistently in agreement, with little debate or discussion during daily standups, iteration planning or review meetings? Is silence perceived as acceptance? If so, the collaboration that you believed you had may have become groupthink, and that could be a bad thing for the team, and for the project as a whole. Some aspects of the agile team that are meant to foster collaboration, including self-organization and physical insulation, may also set the stage for groupthink.

Groupthink is a group dynamics concept developed by Irving Janis in 1971. Janis described it as the tendency of some groups to try to minimize conflict and reach consensus without sufficiently testing, analyzing, and evaluating their ideas. Janis’s research suggested that the development of a group’s norms tends to place limits around the independent and creative thinking of the group members. As a result, group analysis may be biased, leading to poor decisions.

Groupthink begins in the storming phase of group development, as team members vie for leadership roles and team values are established. Symptoms of groupthink that are especially noticeable in agile teams include the illusion of invulnerability, which may show up in unrealistic time estimates, and collective rationalization and self-censorship during meetings and team discussions. Stereotyped views of out-groups may show up in teams where testing or usability professionals’ views are not valued.

Dealing with Groupthink
One way to mitigate groupthink is by using an approach known as Container, Difference and Exchange, or CDE. The agile team is a perfect example of a specialized task group. In group dynamics theory, a task group comes together for the purpose of accomplishing a narrow range of goals within a short period of time. Agile teams have the additional aspect of self-organization, which is both beneficial and challenging for the team and its managers.

Since self-organized agile teams are cohesive units, usually physically insulated from the mainstream, they learn agile processes, learn to work together and work to accomplish their sprint goals all at the same time. As much as an agile team is guided by servant leadership, leaders emerge with different personalities, leadership styles and types of influence. All these factors set the stage for groupthink, and they can be managed using CDE theory.

Self-organizing agile teams can manage groupthink themselves by specifically asking each member of the team to be a critical evaluator and find reasons why a decision is not a good idea, by appointing a “devil’s advocate”, or by discussing decisions with stakeholders outside the team. However, managers need a way to subtly influence agile team dynamics, and that tool can be CDE.

Glenda Eoyang developed the CDE theory from her research on organizational behavior. Container, Difference and Exchange are the factors that influence how a team self-organizes, thinks and acts as a group. The container creates the bounds within which the system forms; for the agile team, this is the physically collocated space. The difference is the way the team deals with the divergent backgrounds of its individual members, such as the various technical backgrounds and specializations of the developers. The exchange is how the group interacts within itself and with its stakeholders.

Managers can influence group dynamics by changing one or more of these factors. For example, a manager can change the difference factor by adding a team member with a different point of view or personality, or change the exchange factor by increasing or decreasing the budget for the sprint.

It’s easy for collaboration to become groupthink in close-knit agile teams. However, both team members and managers can recognize the symptoms and use team dynamics theory to make adjustments that guide the team back to high performance.

Testing the Internet of Things: The Human Experience of a Watch, a Chip and the Boston Marathon

Mobile and embedded devices, more than any other technology, are an integral part of our lives and have the potential to become a part of us. This generation of mobile and embedded devices interacts with us rather than just awaiting our keystrokes. They operate in response to our voice, our touch, and the motion of our bodies.

Since all of these devices actually function with us, testing how the human experiences these devices becomes imperative. If we do not test the human interaction, our assessments and judgments of quality will be lacking some of the most important information needed to determine whether or not the device is ready to ship.

“Human experience” testing, or the lack thereof, can lead to redesign of the software, and sometimes of the device itself. So what is testing the “human experience”? Although usability initially comes to mind, human experience testing goes much deeper. Usability testing focuses on the ways in which users accomplish tasks through the application under test.

Then the question becomes just how does “human experience” testing differ from usability testing? The answer lies in the scope, depth and approach.

“Human experience” testing focuses on the actual interaction. It involves not only the look and feel and ease of use, but also our emotional, physical and sensory reactions, our biases and our mindsets. It involves testing in the “real world” of the user; when, where and how the user and the device will function together.

Why is “human experience” testing so important to mobile and embedded devices? Because when a mobile device is physically attached to us and works with us and through us, the results of the interaction or collaboration matter to us emotionally and physically.

In conclusion, I’ll share a very personal example. It is a tale of two mobile devices attached to one woman, a marathon runner.

Join me on the starting line of the 115th running of the Boston Marathon, April 18th, 2011. I’m standing in my corral, excitedly anticipating the sound of the starting gun. Last year, I surprised myself by qualifying for Boston, something only 10% of runners do, and I’m hoping for another qualifying run.

I have pinned on my bib carefully, keeping it flat since it contains the chip that will record my race for the Boston Athletic Association. The chip will record my time as I run over mats at various miles in the race. My current time, my location on the course and my anticipated finish time will appear on the BAA website and will be texted to my friends’ and family’s smartphones so they can track my progress during the run.

I click on my Garmin watch and anxiously wait for it to catch the satellite to start the GPS. It’s ready, and the gun goes off. I’m careful to click the start button at the exact moment I step over the starting line to ensure a correct time. As I run along during the early miles, I check my watch for my pace, to validate that I’m running the speed I’ll need to qualify. As I push myself up Heartbreak Hill at mile 20, I check my heart rate monitor for feedback confirming that I can continue to run my current pace, or that I can continue at all. It reassures me that, as exhausted as I feel, I’m doing fine.

As I look at the elapsed time on my watch, I confirm that I’m on pace to reach my goal of another qualifying run. As I turn left on Boylston and the finish line comes into sight, I look at my watch to see that not only a qualifying run, but also a personal record, is within reach! I dig in and give it everything I have left. As I cross the finish line, physically spent but emotionally charged, I click my watch off and I see it… my qualifying time and my personal record! The feeling of accomplishment and elation is beyond description!

Now I’m in the car, riding home, just basking in my own glory. My cell phone rings and a friend gently asks me what happened. I hear concern in his voice and wonder why as I tell him about the best run of my entire life. And then he tells me, “Your run isn’t on the BAA website.” My elation immediately turns to grief. The chip, the timing device embedded in my bib, had failed to track my run. The only record of my qualifying run and my personal record is held within my watch. At that moment my watch becomes a part of me. As one runner once said, “the pain is temporary, but the time lasts forever”. And now my Garmin holds the only record of my accomplishment. What if it didn’t save?

Immediately upon arriving home, I go directly to my laptop and download the data from my watch. My heart is in my mouth as I wait for the time to come up on the screen, documenting my time forever. And there it is, 3:51:58! My qualifying run and personal record are mine forever. And I will be on the starting line in Hopkinton for the 116th running of the Boston Marathon next year, thanks to the collaboration among my body, my mind, my emotions and my watch.

The lesson is that devices that interact intimately with the user require a different type of testing than other types of embedded systems. The closer a device is to the human user, the more it requires human testing; it requires testing the interaction between the device and users’ actions, senses and emotions.

The Challenge of Change

Is your organization becoming Agile?  Is your organization merging or outsourcing?  Are you wondering how, where or even if you will fit?  Are you feeling a loss of control over your work life?  If there is only one guarantee in the world of information technology, or in any work environment, it is change.  Let’s face it; life itself is a series of changes.

So how do we deal with change?  We can take the “ostrich approach”, burying our heads in the sand, pretending that it isn’t happening, or we can face it and embrace it.  We all know and accept that change is hard.  But have we ever thought about why change is so hard?  A colleague of mine expressed it very well yesterday when he said it’s the fear of the unknown.  What you don’t know, you can’t control. 

So the question becomes: how do we deal with uncertainty?  We can start by examining our mindset, our attitudes and habits toward succeeding when there is uncertainty.  In her book “Mindset: The New Psychology of Success”, Carol Dweck, PhD, defines two mindsets, fixed and growth.  Those who have a fixed mindset feel that their success, or lack thereof, is the result of basic personality traits that cannot be changed.  Those who have a growth mindset believe that success is the result of hard work and see failure as an opportunity to learn.  The good news here is that mindsets can be changed.  Therefore, we can start to deal with change and uncertainty by evaluating our mindset and changing our approach toward it.

We can begin by taking control of what we can control.  For example, if your organization becomes Agile, why not learn everything you possibly can about Agile development?  In the process you have a great chance of discovering where and how you will fit.  If your organization is downsizing, merging or outsourcing, yes, you may get laid off, and yes, you have absolutely no control over whether or not that happens.  However, you can update your resume, start networking in your field, and test the waters by applying and interviewing for positions in your field.  And in doing those things, you will feel a sense of control.  I know; I did it.

In Spencer Johnson, M.D.’s 1998 book “Who Moved My Cheese?”, it was Haw who adopted the growth mindset.  He followed the example of the mice, Sniff and Scurry, who saw change coming and took early action.  He put on his running shoes and headed into the maze to find new sources of cheese.  Hem, by contrast, remained in the fixed mindset.

We can alleviate the fear of the unknown.  First, we must willingly embrace the growth mindset and approach change as an opportunity to learn.  Second, we must be willing to take the actions necessary to control what we can control within the change.  As “Humorista” Christine Cashen puts it in “The Good Stuff: Quips and Tips on Life, Love, Work and Happiness,” we need to “BOOGIE”: “Be Outstanding Or Get Involved Elsewhere.”

When you find yourself standing at the edge of a cliff looking into the water below, you may not have the choice to jump or not, but you sure can be ready, willing and able to swim!