Testing Wearables:  The Human Experience

I became interested in testing wearables in a rather unusual way.  I ran the Boston Marathon.  So, you ask, what does the Boston Marathon have to do with wearables and the testing of them?  Well, every runner was wearing at least one “wearable”.  Wearables are electronics that can be worn on the body as an accessory or a part of one’s clothing.  One of the major features of wearable technology is its ability to connect to the Internet, enabling data to be exchanged between a network and the device.  Often they contain monitoring and tracking functionality.

Wearables have become a part of most runners’ gear; they wear sports watches with GPS functionality and often carry smart phones.  Yet every runner in the 2011 Boston Marathon also had a wearable attached to their clothing: a bib with their name and registration number.  Today, the bib also contains an RFID chip.  The chip records the runner’s exact race time by connecting to a series of mats with RFID readers at the starting line, along the course and at the finish.  The first time it was tried, there was only one glitch: not all of the RFID chips registered with the readers.

Although this failure did not create a life-threatening situation, it created a great deal of consternation and disappointment among those runners whose races did not get recorded.  For runners who had run a qualifying time and/or a personal record, their elation and joy at the finish line turned to grief and anguish when they found out that their times did not register.  And yes, I was one of those runners.

As a tester, I began to question not only what had and had not been tested, but I also became keenly aware of the impact that the failure of this wearable had on the user.  I realized that what all wearables have in common is that they have a purpose or function, coupled with human interaction that provides value by enabling the user to achieve a goal.  Unless the runner ran the course and stepped on the mats, the chip in the runner’s bib would have no way of providing any value.

This analysis led me to realize that the human user must be an integral part of the testing.  Furthermore, the more closely a device integrates with a human, the more important the human’s role in testing becomes.  When a networked device is physically attached to us and works with us and through us, the results of that collaboration matter to us all the more, both physically and emotionally.  From this experience, I devised a framework for testing this collaboration, which I call Human Experience testing.

The Brave New World of COTS Testing

Testing a COTS system?  Why would we need to test a COTS package?  Often, project managers and other stakeholders mistakenly believe that one of the benefits of purchasing COTS software is that there is little, if any, testing needed.  This could not be further from the truth.

COTS, or Commercial Off-The-Shelf software, refers to applications that are sold or licensed by vendors to organizations that wish to use them.  This includes common enterprise applications such as Salesforce.com, Workday, and PeopleSoft.  The code delivered to each purchasing organization is identical; however, there is usually an administration module through which the application can be configured to more closely match the needs of the buyer.  The configurations will usually be done by the vendor or by an integrator hired by the purchasing organization.  Some COTS software vendors also make customizations, which involve changes to the base code, to accommodate purchasing organizations.  SaaS, Software as a Service, products are usually COTS software.

Testing COTS software requires a different focus from traditional testing approaches.  Although no COTS package will be delivered free of bugs, the focus of testing from the purchasing organization’s perspective is not on validating the base functionality.  Since the COTS software is not developed specifically to meet user-defined requirements, requirements-based testing is not straightforward.  In order to plan the testing effectively, test managers and testers need to focus on the areas where changes in the end-to-end workflow are made.  The major areas of focus for COTS software testing include customizations and configurations, integrations, data, and performance.

The focus of traditional functional testing when implementing a COTS package is on the customizations, if any, and the configurations.  Customizations, since they involve changes to the actual code, carry the highest risk; however, configurations are vitally important as they are the basis of the workflows.  Testers need to understand what parts of the workflow involve configurations versus base code or customized code.  Although the integrators sometimes provide this information, often the test team must obtain it from vendor documentation.  Often business workflows will need to change in order to achieve the same results through the COTS software, and testers must consider this as they develop their test cases.
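
To make the distinction concrete, a test suite can tag each workflow test by the layer it exercises, so coverage of configurations and customizations can be tracked separately from lower-priority base-code checks.  The sketch below is a minimal illustration in Python; the workflow, threshold value, and discount rule are hypothetical, not taken from any particular COTS package.

```python
# Hypothetical sketch: tagging COTS workflow tests by the layer they exercise,
# so configuration and customization coverage can be reported separately from
# base-code checks. All names and values are illustrative.
import unittest

CONFIGURATION = "configuration"   # set through the admin module
CUSTOMIZATION = "customization"   # vendor-modified base code
BASE = "base"                     # unmodified vendor code

def layer(kind):
    """Decorator that records which layer of the COTS package a test targets."""
    def wrap(func):
        func.cots_layer = kind
        return func
    return wrap

class OrderWorkflowTests(unittest.TestCase):
    @layer(CONFIGURATION)
    def test_approval_threshold_routes_to_manager(self):
        # The approval threshold is a configured value, not base code,
        # so this test must be revisited whenever the configuration changes.
        order_total = 15_000
        approval_threshold = 10_000          # assumed configured value
        self.assertGreater(order_total, approval_threshold)

    @layer(CUSTOMIZATION)
    def test_custom_loyalty_discount(self):
        # Customized code path: highest risk, always in scope.
        def loyalty_discount(years_as_customer):   # stand-in for the customized rule
            return 0.10 if years_as_customer >= 5 else 0.0
        self.assertEqual(loyalty_discount(7), 0.10)

if __name__ == "__main__":
    unittest.main()
```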

Integrations are a critical area of focus when testing a COTS package.  Often COTS software packages are large Customer Relationship Management or Enterprise Resource Planning systems and as such, they must be integrated with many legacy systems within the organization.  Often, the legacy systems have older architectures and different methods of sending and receiving data.  Adding to the complexity, new code is almost always needed to connect to the COTS package.  Understanding the types of architectures and testing through the APIs and other methods of data transmission is a new challenge for many testers.
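
As a simple illustration, an integration test might push a record shaped the way a legacy system exports it through the new API and check that the COTS package accepts it.  The sketch below assumes a REST-style endpoint and uses the Python requests library; the URL, payload fields, and response shape are hypothetical.

```python
# Minimal sketch of an integration check between a legacy system and a COTS
# package through a REST API. The endpoint, payload fields, and expected
# response shape are assumptions for illustration, not a real vendor API.
import requests

COTS_API = "https://cots.example.internal/api/v1/customers"   # hypothetical endpoint

def test_legacy_customer_record_is_accepted():
    # A record shaped the way the legacy system exports it.
    legacy_record = {
        "customer_id": "C-10042",
        "name": "Acme Corp",
        "country_code": "US",
    }
    response = requests.post(COTS_API, json=legacy_record, timeout=10)

    # The COTS package should accept the record and return its own key,
    # which downstream integrations will need for reconciliation.
    assert response.status_code == 201
    assert "id" in response.json()

if __name__ == "__main__":
    test_legacy_customer_record_is_accepted()
    print("integration check passed")
```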

Data testing is extremely important to the end-to-end testing of COTS software.  Testers must understand the data dictionary of the new application since data names and types may not match the existing software.  Often, as with configurations, the testers must work with the vendor or integrator to understand the data dictionary.  In addition, the tester must also understand the ETL, or extract, transform and load, mechanisms.  This can be especially complicated if there is a data warehouse involved.  Since a data migration will likely be needed, the data transformations will need to be thoroughly tested.
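
A minimal sketch of this kind of reconciliation is shown below: it compares row counts and a documented transformation rule between a “legacy” source and the migrated target.  SQLite stands in for both systems purely for illustration, and the table, columns, and upper-casing rule are assumptions.

```python
# Rough sketch of a data-migration reconciliation check: compare row counts
# and one transformation rule (name -> upper case) between a "legacy" source
# and the migrated COTS target. SQLite stands in for both systems; the schema
# and the transformation rule are illustrative assumptions.
import sqlite3

def build_sample_databases():
    legacy = sqlite3.connect(":memory:")
    target = sqlite3.connect(":memory:")
    legacy.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    target.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    legacy.executemany("INSERT INTO customers VALUES (?, ?)",
                       [(1, "acme"), (2, "globex")])
    # The migration is documented as upper-casing customer names.
    target.executemany("INSERT INTO customers VALUES (?, ?)",
                       [(1, "ACME"), (2, "GLOBEX")])
    return legacy, target

def test_migration_reconciliation():
    legacy, target = build_sample_databases()

    # 1. Row counts must match between source and target.
    legacy_count = legacy.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
    target_count = target.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
    assert legacy_count == target_count

    # 2. Each migrated value must follow the documented transformation rule.
    legacy_rows = dict(legacy.execute("SELECT id, name FROM customers"))
    target_rows = dict(target.execute("SELECT id, name FROM customers"))
    for key, name in legacy_rows.items():
        assert target_rows[key] == name.upper()

if __name__ == "__main__":
    test_migration_reconciliation()
    print("migration reconciliation passed")
```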

ETL testing requires a completely different skill set from that of the manual, front-end tester.  Often, the organization purchasing the COTS package will need to contract with resources that have the appropriate skills.  SQL knowledge and a thorough understanding of how to simulate data interactions using SOAP or XML are required for data testing.  An understanding of SOA, Service Oriented Architecture, and the tools used to test web messages is also quite helpful.
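
For example, a tester can simulate a SOAP data interaction by constructing the request envelope directly.  The sketch below uses only the Python standard library; the service namespace, operation, and field names are hypothetical, and a real test would post the envelope to the service under test.

```python
# Small sketch of simulating a SOAP data interaction with the standard
# library only. The service namespace, operation, and field names are
# hypothetical; a real test would send this envelope to the service under test.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SERVICE_NS = "http://example.internal/cots/customer"      # assumed service namespace

def build_get_customer_request(customer_id):
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    request = ET.SubElement(body, f"{{{SERVICE_NS}}}GetCustomer")
    ET.SubElement(request, f"{{{SERVICE_NS}}}CustomerId").text = customer_id
    return ET.tostring(envelope, encoding="unicode")

if __name__ == "__main__":
    # Print the envelope a tester would send; posting it (for example with
    # urllib.request or a SOAP tool) is the next step in a real test.
    print(build_get_customer_request("C-10042"))
```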

Performance testing is another area requiring a different approach.  Many systems, especially web applications, require a focus on load testing, or validating that the application can handle the required number of simultaneous users.  However, with large COTS applications that will be used internally within an organization, the focus is on the speed and number of transactions that can be processed as opposed to the number of users.  The test scenarios for this type of performance testing can be huge in number and complexity.  Furthermore, the more complex scenarios are also data intensive.  This testing not only requires testers with solid technical performance test skills, but also requires a detailed data coordination effort with the integrations.
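
A throughput-oriented performance check, in its simplest form, measures transactions completed per second rather than concurrent users.  The sketch below simulates the transaction with a short sleep; in a real test it would submit a batch record or message to the COTS package, and the worker and transaction counts here are arbitrary.

```python
# Minimal sketch of throughput-oriented performance testing: instead of
# counting concurrent users, measure how many transactions complete per
# second. The transaction itself is simulated with a short sleep; a real
# test would submit a batch record or message to the COTS package.
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_transaction(record_id):
    time.sleep(0.01)          # stand-in for a real transaction round trip
    return record_id

def measure_throughput(total_transactions=500, workers=20):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(simulated_transaction, range(total_transactions)))
    elapsed = time.perf_counter() - start
    return len(results) / elapsed

if __name__ == "__main__":
    tps = measure_throughput()
    print(f"processed {tps:.0f} transactions per second")
```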

From beginning to end, testing the brave new world of COTS software requires a completely different approach focusing on configurations, integrations, data and performance.  This new approach offers new challenges and provides opportunities for testers to develop new strategies and skill sets.