How We Tested Web Survey Software

We installed each product on a Windows 95 system, then used whatever tutorials, wizards, templates, and question libraries were available to construct our questionnaire. After posting the resulting questionnaires to an Apache Web server on a test intranet, we ran e-mail responses through Microsoft Exchange (which we also used to test the products' e-mail distribution) and through whatever facility each program used to incorporate responses into its database. From there we ran the numbers through the available analytical, reporting and presentation tools.


Installation and Configuration

For our installation and configuration score, we looked at how effortlessly and flexibly we could set up each program. To earn a satisfactory score, an application had to be up and running without extensive manual configuration. Extra points were awarded for the capacity to work with multiple e-mail systems and for configuration options that help manage survey traffic. Points were subtracted when we had to download additional components.

Questionnaire Design

Questionnaire design was scored on a three-part basis: ease of use, including content assistance; structural features; and HTML formatting options. To receive a score of satisfactory, the program had to provide tools to generate an acceptable Web form. An exceptionally useful interface, large question library, data-validation options, question branching capabilities, and the ability to add HTML features and external files to the Web pages all raised the scores.


Analysis and Reporting

We rated each program's analysis and reporting capabilities by examining the tools provided for tabulating survey responses and generating reports from them. To earn a satisfactory score, a program had to provide a quick way to view useful summaries of responses. Extra points were awarded for data-slicing capabilities and for more sophisticated analyses, especially automated ones. Additional presentation capabilities, such as slide shows, also earned extra points.


Documentation

At a minimum, documentation had to tell us how to install the program and use its features. Comprehensive, well-organized and well-written manuals received higher scores. We lowered the score if a manual was poorly organized, lacked a table of contents or index, omitted needed information or contained factual errors.

Technical Support

We based technical support scores on the quality of service we received during multiple anonymous support calls. Busy signals, voice mail-only service and excessive resolution times all resulted in lower scores.

Support Policies

To receive a score of satisfactory, a company had to provide telephone technical support. We awarded bonus points for unconditional money-back guarantees, extended support hours, bulletin board support (such as CompuServe) and a toll-free number. We subtracted points for no technical support or a limited support period.


Price

Programs that cost less scored higher in this category. We rated program prices for departmental-level project managers according to the following ranges:

$0 to $200: Excellent

$201 to $500: Very Good

$501 to $1,000: Good

$1,001 to $2,000: Satisfactory
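The tier boundaries above amount to a simple lookup. As a minimal sketch (the function name and the "Unrated" fallback for prices above the published scale are our own illustrative choices, not part of the review methodology):

```python
def price_rating(price):
    """Map a program's price in dollars to the review's rating tier."""
    if price <= 200:
        return "Excellent"
    elif price <= 500:
        return "Very Good"
    elif price <= 1000:
        return "Good"
    elif price <= 2000:
        return "Satisfactory"
    # Prices above $2,000 fall outside the published scale (assumption).
    return "Unrated"

print(price_rating(149))   # a $149 package rates Excellent
print(price_rating(1495))  # a $1,495 package rates Satisfactory
```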

