Software that is performance-centered is created from the outside in, from the perspective of the workers who use it, rather than from the inside out, as a reflection of data structures and system functionality. It's designed for ease of learning and use rather than just to offer features and perform functions.
How you implement PCD will depend on the structure and flexibility of your organization and the staff, skill sets, and other resources that you have available or are able to acquire. Unless you're working in a start-up or an organization undergoing complete reengineering, chances are that you'll need to build PCD activities into an existing development cycle. You'll need to promote and sell the concept, explain the particular activities involved, form strategic alliances with advocates of related agendas in your organization (such as usability engineering, software engineering, EPSS, ISO, or the Capability Maturity Model), and eventually win the support of senior management. Although it's possible to do PCD on your own, you would also be well served to hire someone or bring in a consultant who has implemented PCD before. This will increase both your credibility and your effectiveness.
In this article I've laid out the steps involved. I've presented them assuming optimal conditions and resources, and when there was a choice, placed activities earlier rather than later in relation to the development lifecycle. You may want or need to modify some of the placements to suit your particular circumstances.
Before reading about the activities, though, I think it would be useful for you to have a sense of where they come from. Understanding the sources of the techniques might also suggest to you related activities that are taking place in your organization, under different names, with which you might want to become more familiar and possibly partner.
While PCD as championed by Gloria Gery was developed with and from her work on electronic performance support systems (EPSS), PCD also rests on decades of research on what makes computer systems easy (and hard) to use and the scientific study of how using computers affects people. The former has been systematized into the field of usability engineering, and the latter, though pursued in several academic disciplines, falls most accessibly under the umbrella of Human-Computer Interaction (HCI). Implementing PCD employs techniques developed from this research.
Electronic Performance Support Systems are suites of tools designed to give workers just enough information, precisely when they need it, to enable them to do their work more easily and quickly. These tools are generally add-ons to a base interface, but they are sometimes integrated with the interface and appear to workers to be part of the interface. Consider the wizard that guides you through establishing an account in Quicken. You can't avoid this wizard; you need to establish an account before you can do anything else. So the wizard jumps out at you and forces you to create an account. It looks like part of the interface, but from a construction standpoint it's an overlay.
Electronic Performance Support elements are planned along with the interface. The Interface Design Team decides what belongs on the base interface and what should be developed as add-ons and then integrated with the interface.
There are a variety of ways to classify different types of EPSS elements, some of which involve artificial intelligence. A common one divides EPSS elements into three categories:
These elements are important and can enhance your products considerably. The best source on what these systems consist of and what's involved in creating them remains Gery's Electronic Performance Support Systems (1991); the clearest picture of how EPSSs are used is FutureWork by Winslow and Bramer (1994); and the most useful websites that focus on these systems are epss.com! and the epss infosite home page. In this article, however, the focus is on the optimization of the base interface. The conceptualization of EPSS elements and their relationship to the base interface is discussed, as is the placement of EPSS in the development cycle, but the creation of EPSS elements is a development concern beyond the scope of this article.
Like software engineering, usability engineering is a discipline that began by examining diverse activities and innovations in actual practice. It then discerned best practices, systematized and structured them, and made an evolving science of them. If usability techniques are already in place in your organization (regardless of whether or not they're performed under the distinguished title of "usability engineering"), building on them is an obvious way to advance your PCD efforts.
Where do you start to learn about usability engineering and its techniques? With the recognition, first of all, that making your products more usable involves far more than lab testing, and that many of the most important techniques precede lab testing. You can glimpse many of them quickly in the Usability Toolbox by J. Thom. If you'd like to go into greater depth, Jakob Nielsen's Usability Engineering (1993) is a readable and well-documented overview of the field. See especially Chapter 4, the Usability Engineering Lifecycle. Nielsen is a major figure in usability engineering. He gives his own recommendations for beginners in Guerilla HCI.
Also extremely informative is Usability in Practice edited by Michael E. Wiklund (1994), which is a panorama of the programs that 17 leading companies use to ensure usability. In providing the histories and structures of these programs together with detailed case studies of their impact on specific products, this volume illustrates the wide gamut of activities and techniques that contribute to usability.
For a panorama of web resources on usability, see my Usability Engineering page.
Human-computer interaction (or computer-human interaction: the two are used interchangeably) is an academic discipline that studies and evaluates the design of interactive computing. It builds on knowledge developed by cognitive psychology, which studies the mental processes behind human behavior, and human factors, which studies how the design of products affects people, and supplements these with systematic study of the practice and effects of design. Most of the academic work on UI design comes from HCI (or CHI). Because of the newness of this entire dimension of study, however, many practitioners have academic backgrounds in cognitive psychology, human factors, and industrial design.
If you're interested in cognitive psychology and human factors dimensions of HCI, I recommend the introduction in Coe's Human Factors for Technical Communicators (1996). The first half of the book presents the clearest explanation I've found of the subject matter and is an excellent introduction for anyone, not just technical communicators. If you're interested in the principles of good design, part two of Weinschenk, Jamar, & Yeo's GUI Design Essentials (1997) is a good place to start.
The largest organization in which professionals with backgrounds in all these areas participate is the Association for Computing Machinery's Special Interest Group on Computer/Human Interaction, or ACM SIGCHI. To get the flavor of the types of issues SIGCHI addresses, take a look at some of the papers from its 1997 conference.
To learn more about the field of HCI, see my Human-Computer Interaction page.
Many of the exciting ways to improve software in Windows environments draw on work done in the Mac world a number of years ago. That's both because usability has always been an important part of the Apple/Mac culture and because the Mac offered an interface environment similar to that of the versions of Windows most of us are using now, and did so considerably earlier than such an environment became viable in Windows itself.
Fortunately, much of what happened in the Apple/Mac world has been chronicled and published. For the purposes of this article, probably the most accessible volume is Penny Bauersfeld's Software by Design (1994). It presents an overview of the entire process mapped against a simpler software development cycle than most of us work with today. Still, a lot can be learned from her book, and while she deliberately keeps her examples simple, I find the lightness of her style and the clarity of her presentation especially useful in overviewing the procedures.
Bruce Tognazzini's Tog on Interface (1992) complements Bauersfeld. From his position as "human interface evangelist" (his official title at Apple), he chronicles a wide variety of episodes and addresses a large spectrum of user interface issues. Tog is widely respected in the field. His later volume, Tog on Software Design (1996), is much broader in focus, more visionary, and less valuable for understanding particular techniques.
Finally, The Art of Human-Computer Interface Design (Laurel, 1990), edited by Brenda Laurel, contains original papers, half by Apple employees and half by others with complementary viewpoints. It's a very influential volume, and the papers in it, which were prepared for it and reviewed in a three-day conference convened for that purpose, are frequently cited in the literature on software design.
Most of the activities you need to build into your software development cycle take place in the first "half" during the requirements, definition, and design phases before coding begins. These activities will and should elongate this half. Some of this time will be made up in the coding and testing phases where detailed design information will quicken coding and reduce bugs and the need for retesting. Additional measurable time and costs will be saved by your customers after the software is delivered.
Keep in mind this injunction by Nielsen (1993, p. 72):
The least expensive way for usability activities to influence a product is to do as much as possible before design is started, since it will then not be necessary to change the design to comply with usability recommendations. Also, usability work done before the system is designed may make it possible to avoid developing unnecessary features.
Here's a map of the activities that need to take place.
And here's what's involved:
At the very beginning of the project, the first task is to create the User Interface (UI) Team. The leader should be a user interface designer, and the team should include
In many cases a graphic or visual designer would also make a valuable contribution. If a usability engineer is available, certainly s/he would be a strong asset, as well.
If you are implementing knowledge support that involves artificial intelligence, the SME should also be a domain expert who can explicate the rules and relationships in analytical and decision-making processes, and a knowledge engineer should be on the team to extract knowledge and construct advisory systems.
And yes, it's permissible for some individuals to wear more than one hat.
Nielsen, in a discussion of participatory design, stresses the importance of having actual performers on the team. "Users often raise questions that the development team has not even dreamed of asking." (1993, p. 88.) He also cautions that on large projects, the pool of performers should be refreshed periodically, since over time they become less representative as they become more involved with the development team (1993, p. 89).
Ideally, the team should be independent of, and not under the authority of, the development organization. Alan Cooper explains why clearly:
There is a conflict of interest in the world of software development because the people who build it are also the people who design it. If carpenters designed houses, they would certainly be easier or more interesting to build, but not necessarily better to live in. The architect, besides being trained in the art of what works and what doesn't, is an advocate for the client, for the user. (Cooper 1995, p. 23)
Eventually, we will see a bifurcation in the industry: Designers will design the software and engineers will build it. This is currently considered a luxury by those development shops that haven't realized the fiscal and marketing advantages that come with professional software design. (1995, p. 2f.)
In initial meeting(s), team members should clarify and agree upon their roles. The UI Designer should explain the entire process to them and how it relates to the development process.
Once roles have been clarified, the first task of the team is for members to
This team will then go on to conduct most of the PCD activities for the project.
Traditional software requirement gathering is conducted by business analysts and development staff and focuses on features and functions required by user groups. Its task modeling focuses on identifying the tasks users need to complete, the functional requirements of these tasks, and how data needs to be organized to support the functional requirements.
PCD recognizes that this is insufficient. It devotes more time to understanding performer classes and the contexts surrounding tasks. It captures the performers' language, mental models, and goals as well as their activities, and it studies workflows to simplify and optimize them rather than essentially to replicate them.
There are currently numerous techniques to accomplish these PCD objectives. It's still too early in the development and use of these activities for consensus to have developed on best practices. Just about every paper in Carroll's Scenario-Based Design (1995) volume, for example, presents different models and names for a large group of similar activities. Still, some degree of commonality appears to be emerging from this apparent multiplicity of approaches, and the process underlying them and the deliverables they need to produce are becoming clear.
I find it useful to separate the design activities that need to take place in this stage into two groups: those that investigate work and those that envision work. In addition, as a consequence of envisioning future work, it's possible to begin to differentiate base interface and performance support elements, and also to at least begin to set performance goals.
Investigating work means snooping out not only what users actually do, but how they think about and what they call what they do. Preliminary time is spent on determining the most appropriate user classes, and a user profile is determined to document these characteristics. Then an appropriate sampling of users is observed and questioned so that their current workflows, mental models, and work goals are understood. Often this investigation provides insights into how workflows can be improved. All this is documented and shared among the UI Design Team members.
A well-established methodology for doing this, and one that is frequently cited by developers of alternative approaches, is Contextual Inquiry (CI). CI was developed at Digital Equipment Corporation about a decade ago under the leadership of Dr. Karen Holtzblatt. The principle behind it is that there's no substitute for watching people do what they do where they do it and talking to them about it. Through CI, teams visit workers on location and perceive and articulate environmental and motivational factors as well as the work itself. Performers are encouraged to discuss what really happens, including exceptions, not just what's supposed to happen. After each day of sessions, the team debriefs using "affinity diagrams" that cluster observations by theme. For an overview of the CI process, Holtzblatt recommends "Contextual Inquiry: A Participatory Technique for System Design" (Holtzblatt and Jones, 1993). Introductory information is also available at Holtzblatt's and Hugh Beyer's consultancy, incontext enterprises. See especially the Contextual Connection, a forum on contextual techniques.
I like CI because of its concern for maximizing the effectiveness of the always-too-little time available for these investigations, its emphasis on workers being the experts on how they do their work, its recognition of the importance of establishing a partnership between the observers and the observed, and the freedom it provides interviewers within the structure of the interview process. The openness of the technique enables the most important perceptions of those who actually do the work to surface of their own accord. In preparing groups for CI, it's also useful to present them with information on what will take place in the next stage of the design process, envisioning work, so that the group can have a mental picture of the types of information that need to be captured and use this picture to guide their investigations.
Envisioning work begins with formulating the information elicited through CI so that it can be shared among the UI Design Team and, in the next phase of the development cycle, discussed with developers. As a first step, current work practices are depicted. These representations can be done with words, in which case they are most commonly called scenarios, or with words and pictures, which are called storyboards. Sometimes scenario flowcharts are also useful. Then these scenarios, flowcharts, and storyboards are analyzed for how work might be improved and new sets of them are developed iteratively to create representations of how work will be done with the new software.
These activities are probably the most difficult in the entire PCD design process. "Pity the poor interaction designer," writes Tom Erickson. This person must describe "a set of tasks situated in an environment that is as much cultural and social as it is physical." (1995, p. 45)
There are many sources on scenario building. Carroll (1995) presents over a dozen. Bauersfeld (1994) presents a very clear picture of a simple version of the process in her Chapter 4, and Tognazzini (1992) focuses on the importance of creating vivid characters in scenarios, so that the design team can visualize them and build for them. In his later work (1996) he presents examples of elaborately developed scenarios.
Possibly the most useful information I found on scenarios, though, was by Dr. Karen McGraw. In general, I'm put off by her approach, which strikes me as rigid, mechanical, and too highly controlled. But in User-Centered Requirements (McGraw & Harbison, 1997), her painstaking thoroughness and her relentless attention to detail enable her to paint a picture of scenario building that captures the process extremely well.
McGraw explains, for example, that scenarios should identify
I also found her examples helpful. (McGraw & Harbison, 1997, p. 124 & 128ff).
Weinschenk, Jamar, & Yeo emphasize the importance of use case scenarios that describe precisely how workers will do their work when the new software is in place. They emphasize that these scenarios describe user tasks, not business processes. Frequency information is included, as are critical tasks and exceptions. The authors also suggest creating parallel scenarios: one to describe user actions and another to show precisely what the system does in response (1997, p. 44).
During the analysis portions of these visualizing activities, mental (or conceptual) models underlying work may need to be extrapolated. This is one of the more difficult aspects of analysis. McGraw devotes an entire chapter (Chapter 9, Eliciting and Analyzing Domain Concepts) to it (1997, p. 243ff.). She advocates
In large projects, she suggests creating a domain dictionary.
Nielsen also acknowledges the difficulty of clarifying the conceptual model, and in a section entitled "Mappings and Metaphors" (1993, p. 127), suggests a number of techniques that are not so complex as McGraw's:
On the one hand, the consensus on the importance of the difficult task of clarifying the conceptual model and employing the right metaphors is compelling. "The most important component to design properly is . . . the user's conceptual model. Everything else should be subordinated to making that model clear, obvious, and substantial. That is almost exactly the opposite of how most software is designed." (Liddle, p. 21)
On the other, my inclination is not to spend an inordinate amount of time clarifying mental models at this phase because the clarification may unfold more easily in the next phase as modeling proceeds. Holding a GUI Design Marathon, described below, is particularly useful for this. Again, the particulars of your project will suggest the right approach.
Alan Cooper, in an article entitled "Goal-Directed Design," points out that it's also important to clarify the actual goals of performers because development often makes assumptions or assumes metaphors that in fact stand in the way of creating useful software.
These activities all give rise to discussions on benefits that might be achieved through reengineering the way the work is done, and these improvements are documented in scenarios, flowcharts, and storyboards.
Information developed through these visualization activities should be shared with the development organization. Communicating performers' goals and mental models is especially important.
In all likelihood, the user interface won't be able to support desired performance by itself. It will need scaffolding to help workers use it.
This is typically done through the help system. While help is expected and important, support can often be provided more effectively through performance support providing that your development organization has the expertise and staff to develop these functions. If you want to provide modularized CBT, it's likely that your training staff will become involved as well.
One step in this direction is to develop assistants such as Microsoft's wizards to ask performers for specific data and parameters and then use this information to perform functions on the system "automatically." If you choose to do this, the UI Design team will lay out the wizard design, and the appropriate development staff will build it. To learn about the basics of wizards, see Microsoft's Guidelines for Designing Wizards.
Another step in this same direction is to envision guides (like Microsoft's cue cards or training cards) that work more closely with the application than Help. It would be sensible for a documentation person on the UI Team experienced in creating WinHelp to be involved in this process, and then to work with development in implementing them.
Microsoft has taken these activities a step further by creating IntelliSense, through which its software "'understands' the content of an end-user's actions, recognizes the user's intent, and automatically produces the correct result" (1996, p. 1). The principles involved are detailed in a Microsoft Office 97 Whitepaper.
The final scenarios and storyboards should indicate performance support elements that will be part of the product.
The time it takes to perform tasks using the current system should be measured. Then improvement possibilities can be discussed and new performance goals formulated, or at least drafted. The entire process is easier for new versions of existing products than for new products; for the latter, it may not be possible to complete it until the product is defined in the next phase.
Goals should be prioritized, and it is often appropriate to measure and establish different goals for novice, intermediate, and expert performers. Measurements serve as benchmarks against which to show improvements. Attaining performance goals objectifies the superiority of the new system. The goals also serve as limits in the otherwise endless loop of iterations and improvements. (Hix & Hartson, 1993, p. 222.)
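As a sketch of how such baseline measurements and goals might be recorded and compared, consider the following; the task names, times, and goal figures are hypothetical, and the structure is only one of many reasonable ones:

```python
# Baseline task times measured on the current system, in seconds,
# broken out by performer class (all figures hypothetical).
baseline = {
    "enter order":     {"novice": 210, "intermediate": 150, "expert": 95},
    "look up account": {"novice": 80,  "intermediate": 55,  "expert": 30},
}

# Prioritized performance goals for the new software, same structure.
goals = {
    "enter order":     {"novice": 150, "intermediate": 110, "expert": 80},
    "look up account": {"novice": 50,  "intermediate": 40,  "expert": 25},
}

def improvement_report(baseline, goals):
    """Yield (task, class, baseline secs, goal secs, % improvement required)."""
    for task, times in baseline.items():
        for cls, secs in times.items():
            goal = goals[task][cls]
            pct = 100.0 * (secs - goal) / secs
            yield task, cls, secs, goal, round(pct, 1)

for row in improvement_report(baseline, goals):
    print("%-16s %-12s baseline %4ds  goal %4ds  improvement %5.1f%%" % row)
```

Keeping the figures in one place like this makes it easy, after delivery, to rerun the same measurements and show improvement against the benchmarks.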
Platform issues must be taken into account as well. If platform(s) have not been determined, capabilities and constraints of various platforms must be factored into the performance goals of the new software. This normally entails consulting technical experts.
Capabilities and constraints of performers must be taken into account as well. Just because a new platform can speed up operations by, say, 600% doesn't mean that speeding them up to this degree will improve human performance. A classic example is a computer system used by a staff taking telephone orders. The old system took several seconds to process each order, and the operators used that time to prepare for the next call. The new system was so quick that there was no break between the completion of a transaction and the receipt of the next call. Supervisors thought this would increase performance. Instead, it decreased it. Operators had come to expect, and valued, those small breaks. When they were taken away, overall productivity went down. What was called for was determining the optimum break time between calls and engineering it in regardless of the capabilities of the system, or, better still, giving the operators control over the length of the break or when to take their next calls.
This is the phase in which the product's scope is determined. Platforms, data structures, and data flows are analyzed. Product objectives are established and a functional spec and high level technical spec delineating components and their interaction is created.
The scenarios and storyboards the UI design team has developed are given to the development team and guide the determination of product objectives, which of course take into account platform, time, and resource factors. Differences in visions of the product are discussed and worked out. The same dynamic takes place with performance support development. UI logic, which "orchestrates the relationship among and between the UI and underlying system code and extrinsic support resources." (Gery 1995, p. 75), is worked out by the appropriate developers.
When agreement is reached as to what the product will (and won't) do, task modeling begun in the previous phase is worked out in greater detail. For task clarification, I like the choice of methods presented by Weinschenk, Jamar, & Yeo (1997, p. 27 ff.). They support
The third option is particularly appealing. Picture-oriented techniques, while they have a stigma in some organizations of not looking "professional," have a lot to commend them. They're quick and easy to read because they don't force each reader to create his or her own mental picture from scratch.
The technique used, however, should be determined by the individual(s) performing the task analysis.
User interface mockups of the product are created as early as possible, and feedback is elicited from users. I use the term mockups to refer to sketches on paper (often using sticky notes) and reserve the term prototype for mockups created on the system. There is no industry consensus, however, on the use of these terms. Paper mockups are desirable at this point because users seeing them are not reluctant to change them and sometimes do so radically. Somehow, once the mockup is transferred to the system, users are more likely to regard it as fixed and less likely to be bold and creative in their feedback.
These mockups can illustrate such things as window types and menu bar designs. Sometimes it's useful to present performers with alternative designs and ask them for their preferences.
At this point, attention is given once again to the performers' mental models and how they mentally organize the work they need to perform. One way to do this is to give performers features and functions on sticky notes and have them sort them into meaningful clusters, name these clusters, and then sort the named groups into higher-level clusters.
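One common way to analyze the results of such a sorting exercise is to count how often each pair of features lands in the same cluster across performers; pairs grouped together by nearly everyone point to the organization their mental models share. A minimal sketch, with hypothetical features and sorts:

```python
from itertools import combinations
from collections import Counter

# Each performer's card sort: cluster name -> features placed in it
# (feature and cluster names here are hypothetical).
sorts = [
    {"money in":  ["deposit check", "transfer in"],
     "money out": ["pay bill", "transfer out"]},
    {"transfers": ["transfer in", "transfer out"],
     "payments":  ["pay bill", "deposit check"]},
]

def co_occurrence(sorts):
    """Count, over all performers, how often each feature pair was clustered together."""
    pairs = Counter()
    for sort in sorts:
        for cluster in sort.values():
            for a, b in combinations(sorted(cluster), 2):
                pairs[(a, b)] += 1
    return pairs

for (a, b), n in co_occurrence(sorts).most_common():
    print(f"{a!r} + {b!r}: grouped together by {n} of {len(sorts)} performers")
```

The resulting tallies can feed directly into discussions of menu structure: features most performers cluster together are candidates for the same menu or window.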
A method of creating a UI mockup that has been building popularity in the HCI community is presented in "Bridging User Needs to OO GUI Prototype Via Task Object Design" by Dayton, McFarland, and Kramer (1997). This method, led by the UI Design Team, brings a sampling of performers and developers together in a room for a marathon session that results in the creation of an actual UI model in two or three days. A brief capsule of this approach appears on the web as Participatory GUI Design from Task Models.
Following a UI design marathon, a series of UI walkthroughs takes place based on the developed task analyses and redesigned workflows (and input from the marathon, if you were able to conduct one). Bauersfeld recommends flip-books of drawings to walk users through sequences of screens, showing changes when actions are performed. (1994, p. 98 ff.) Deborah Mayhew recommends giving the subject a picture of a single screen and asking what they would do, then giving them the appropriate next screen in response. (1997, p. 47) In either case, errors, confusion, complexity, and uncertainty are noted and incorporated into improved iterations of the design.
Iteration is the key to this refinement process. Since this early modeling is done on paper, recommendations can be incorporated easily. Cumulative feedback refines the interface.
Beginning in this phase and through the design phase, a style guide is created to ensure the consistency of the interface and the efficiency of the coding effort. In many cases this style guide will already exist and only need to be modified. The style guide shows rules for menus, screens, dialog boxes, messages, etc. This document will save time and money in development and ensure consistency both within the application and among sets of products.
If you do not already have a style guide, a very useful template is given on the CD-ROM that accompanies Weinschenk, Jamar, & Yeo (1997). It presents a series of conventions on structure, interaction, presentation, and Internet/Intranet factors, and can be customized to address the design decisions of your organization.
This is the phase in which detailed product specifications are established. The UI is clarified in detail and communicated to the development team. Iteration and communication are key elements here.
Prototypes are like mockups and storyboards except that they look real and exhibit partial functionality. Progress in their development is made in stages. Vertical prototypes (a few features with full functionality) and horizontal prototypes (the full feature set with limited functionality) are developed, as appropriate, and tested formally in a usability lab. (The usability lab itself, however, does not need to be a formal, dedicated space.) Carefully choosing the areas to be tested can provide important performer feedback from small development efforts. (Nielsen 1993, p. 99 ff.)
Each prototype is then tested in a usability lab.
Microsoft has performed yet another service to students of improving software by putting a facsimile of its usability lab online. Especially if you're not already familiar with usability labs, take the virtual tour of Microsoft's usability lab.
Several of the authors already cited devote full chapters to usability testing. Nielsen (1993, p. 165 ff.) is particularly good. Bauersfeld (1994, p. 193 ff.) and Weinschenk, Jamar, & Yeo (1997, p. 105 ff.) are also useful. Many of the pieces in Wiklund (1994) also discuss and illustrate usability testing. Two standard and recognized volumes devoted entirely to usability testing are Rubin (1994) and Dumas & Redish (1993). Both are good handbooks for creating labs. Recognize, however, that a formal lab is not necessary for usability testing (Tognazzini 1992, p. 79), and that a lot can be accomplished with simple equipment.
It is often useful to scale the importance of suggested corrections to the interface. Nielsen suggests using five categories of severity ratings (1993, p. 103):
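A minimal sketch of how such ratings might be logged and used to prioritize fixes; the severity labels follow Nielsen's five-point scale, and the problem descriptions are hypothetical:

```python
# Nielsen's five severity categories, 0 (lowest) through 4 (highest).
SEVERITY = {
    0: "not a usability problem",
    1: "cosmetic problem only",
    2: "minor usability problem",
    3: "major usability problem",
    4: "usability catastrophe",
}

# Problems observed in testing, each with its agreed severity rating
# (descriptions hypothetical).
problems = [
    ("Save and Delete buttons are adjacent", 3),
    ("label font inconsistent on two dialogs", 1),
    ("wizard loses data when user backs up", 4),
]

def prioritize(problems):
    """Return problems sorted most severe first, with the scale's label attached."""
    ranked = sorted(problems, key=lambda p: p[1], reverse=True)
    return [(desc, sev, SEVERITY[sev]) for desc, sev in ranked]

for desc, sev, label in prioritize(problems):
    print(f"[{sev}] {label}: {desc}")
```

Sorting by severity keeps the team's limited iteration time focused on catastrophes and major problems rather than cosmetic polish.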
The UI is evaluated with each iteration, the conceptual model is further clarified, and a detailed UI design document is produced. This will specify the
On the basis of feedback from the prototyping/usability sessions, and as the final design of the interface is completed, the style guide can be tweaked, if necessary, to reflect final design decisions.
The final design specification is given to development (and performance support development), as well as to documentation and training. The clarity of the detailed plan can save these support organizations considerable time.
As coding and testing proceed, additional usability testing takes place on the increasingly functional design.
You may want to regard "performance" as the final phase in your development cycle, beyond delivery, and continue your usability studies at the customer site and establish benchmarks for future releases (Nielsen 1993, p. 71). You'll certainly want to capture user feedback, and you may also want to capture metrics to establish customer savings in training and implementation costs.
Bauersfeld, P., 1994. Software by Design: Creating People Friendly Software. NY: M&T Books.
Carroll, J., ed., 1995. Scenario-Based Design: Envisioning Work and Technology in System Development. New York: John Wiley & Sons.
Coe, M., 1996. Human Factors for Technical Communicators. NY: John Wiley & Sons.
Cooper, A., 1996. "Goal-Directed Design," Dr. Dobb's Journal, September 1996.
Cooper, A., 1995. About Face: The Essentials of User Interface Design. Foster City, CA: IDG Books.
Dayton, T., McFarland, A., Kramer, J. "Bridging User Needs to OO GUI Prototype Via Task Object Design," in Wood, L. & Zeno, R. 1997, User Interface Design: Bridging the Gap From Requirements to Design. Boca Raton, FL: CRC Press.
Dumas, J. and Redish, J., 1993. A Practical Guide to Usability Testing. Norwood, NJ: Ablex.
Erickson, T., 1995. "Notes on Design Practice: Stories and Prototypes as Catalysts for Communication" in Carroll, J., ed., 1995. Scenario-Based Design: Envisioning Work and Technology in System Development. New York: John Wiley & Sons.
Gery, G., 1995. Performance Support: Performance-Centered Design. Copyrighted by Gery Associates, Tolland, MA. (413-258-4693)
Gery, G., 1991. Electronic Performance Support Systems. Copyrighted by Gery Associates, Tolland, MA. (413-258-4693)
Hix, D. & Hartson, H., 1993. Developing User Interfaces: Ensuring Usability Through Product and Process. NY: John Wiley & Sons.
Holtzblatt, K. and Jones, S. "Contextual Inquiry: A Participatory Technique for System Design," in Namioka, A. and Schuler, D., 1993. Participatory Design: Principles and Practice. Hillsdale, NJ: Lawrence Erlbaum.
Laurel, B., 1990. The Art of Human-Computer Interface Design. Reading, MA: Addison-Wesley.
Liddle, D. 1996. "Design of the Conceptual Model" in Winograd, T., ed., 1996. Bringing Design to Software. NY: ACM Press.
Mayhew, D., 1997. Managing the Design of the User Interface. ACM SIGCHI 97 tutorial notes.
McGraw, K. & Harbison, K., 1997. User-Centered Requirements: The Scenario-Based Engineering Process. Mahwah, NJ: Lawrence Erlbaum.
Microsoft Office 97 Whitepaper, 1996. "IntelliSense in Microsoft Office 97"
Nielsen, J. "Guerilla HCI".
Nielsen, J. 1993. Usability Engineering. Cambridge, MA: AP PROFESSIONAL.
Rubin, J., 1994. Handbook of Usability Testing. NY: John Wiley & Sons.
Tognazzini, B., 1992. Tog on Interface. Reading, MA: Addison Wesley.
Tognazzini, B., 1996. Tog on Software Design. Reading, MA: Addison Wesley.
Weinschenk, S., Jamar, P., & Yeo, S., 1997. GUI Design Essentials. NY: John Wiley & Sons.
Wiklund, M., 1994. Usability in Practice: How Companies Develop User-Friendly Products. Cambridge, MA: AP PROFESSIONAL.
Winograd, T., ed., 1996. Bringing Design to Software. NY: ACM Press.
Winslow, C. & Bramer, W., 1994. FutureWork: Putting Knowledge to Work in the Knowledge Economy. NY: The Free Press.
This article is an expansion of a much briefer piece (1800 words, because of space limitations) that appeared in the May 1997 issue of News & Views, the publication of the Philadelphia Metro Chapter of the Society for Technical Communication.