Section 1 - Introduction and Abstract

The design of a user interface requires consideration of various psychological aspects of human behaviour. This report discusses the psychological effects of interface components such as colour and visual objects. Other aspects which must be considered are the different levels of proficiency users have. An interface must also be built to suit the system for which it was developed; it should not complicate the achievement of simple tasks, since this impacts negatively on users.

This report also covers the characteristics of a good interface. It addresses issues of consistency, appropriateness of design and transparency of different types of user interfaces. Design guides for specific types of interfaces, as well as general guides for elements of the interface (for example forms, reports and dialogues), are discussed. A discussion of some of the consequences of bad user interface design is also included, showing how a badly designed user interface can lead to productivity losses. The methods and procedures used in the actual design of a user interface are covered, in the context of general-purpose as well as custom-made software. Here the role of prototyping and the use of development tools for generating user interface screens are covered.

This report also covers information about user interface design in Trinidad and Tobago, obtained by interviewing professionals in the field of computer science. In Trinidad and Tobago, the majority of new software development is done for specialized software applications using fourth generation languages. The options for the user interface are formulated on the basis of the requirements obtained from the user at the analysis stage. As a result of the nature of the software industry in Trinidad and Tobago, most user interfaces are developed in-house using screen and menu builders. Newer methods of formulating the user interface requirements, such as prototyping, are not in general use.

User interfaces have evolved over time from the rudimentary command interpreter to the complex Graphical User Interfaces (GUIs) available on systems today. The developers of today's main User Interface Management Systems (UIMSs) have released new versions of their products which feature major changes in the user interfaces. With the improvements being made to audio and visual technologies, multimedia applications are becoming more common. Multimedia has had an effect on user interface development, since interfaces incorporating sound and animation as standard features are now being developed. This report discusses some of the design methodologies adopted by developers of UIMSs. The impact multimedia has had on user interface design is also discussed, as well as what is expected from future user interface designs.

Section 2 - Definition of the Topic

2.1 Definition of a User Interface

The user interface of a computer system is the component of the system which facilitates interaction between the user and the system. Thus, the user interface must enable two-way communication by providing feedback to the user, as well as functions for entering data needed by the system.

2.2 The Need for a User Interface

When the first computers were introduced in the 1950s, the only people who interacted with computers on a regular basis were highly-trained engineers and scientists in research facilities. The cost and size of these computers made their wide-spread use impractical. At this time, communicating with the computer was a very complex task which required a detailed knowledge of the computer's hardware.

Advances made in technology allowed computers to be made smaller and affordable. As a result of this, and the increase in productivity gained by computers, their use became more widespread. With various people from diverse backgrounds now using computers in everyday life, came the need for a user-friendly interface through which the average person could interact productively with a computer system. This led to the development of various types of user interfaces which catered for different types of users.

2.3 Types of User Interfaces

The major types of user interfaces are:

  1. Command-driven interfaces
  2. Menu-driven interfaces
  3. Direct manipulation interfaces (DMI)
  4. Special purpose interfaces.

i.) Command-driven interfaces usually require the user to enter an explicit command which is then interpreted and executed by the system. The command must conform to the syntax rules defined by the system.
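The interpret-and-execute cycle of a command-driven interface can be sketched as a small read-parse-dispatch loop. This is a minimal illustration, not any particular command language; the command names and their argument counts are hypothetical:

```python
# Minimal command-interpreter sketch: parse a line, check it against
# simple syntax rules (known command, correct argument count), then
# dispatch to a handler. All commands here are hypothetical examples.
def copy(src, dst):
    return f"copied {src} to {dst}"

def delete(name):
    return f"deleted {name}"

# Command table: name -> (handler, expected argument count)
COMMANDS = {"copy": (copy, 2), "delete": (delete, 1)}

def execute(line):
    parts = line.split()
    if not parts:
        return "error: empty command"
    name, args = parts[0], parts[1:]
    if name not in COMMANDS:
        return f"error: unknown command '{name}'"
    handler, arity = COMMANDS[name]
    if len(args) != arity:  # enforce the syntax rule on argument count
        return f"error: '{name}' expects {arity} argument(s)"
    return handler(*args)
```

The syntax check happens before any action is taken, which is why a command-driven interface can reject malformed input with a precise error message.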

ii.) Menu-driven interfaces provide the user with a list of options and a simple method of selecting between them. Such a method may involve entering a single letter or a number which represents the option. Examples of various types of menus include bar menus and pull-down menus.
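The single-letter or single-number selection method can be sketched as follows (the option labels are hypothetical; this is a generic illustration, not a specific product's menu system):

```python
# Menu-driven selection sketch: display numbered options and accept
# a single digit identifying one of them.
OPTIONS = ["Open file", "Save file", "Print", "Quit"]

def render_menu():
    # One numbered line per option, e.g. "1. Open file"
    return "\n".join(f"{i}. {label}" for i, label in enumerate(OPTIONS, start=1))

def select(choice):
    # Accept only a digit within range; anything else is rejected,
    # which is how menus eliminate most typing mistakes.
    if choice.isdigit() and 1 <= int(choice) <= len(OPTIONS):
        return OPTIONS[int(choice) - 1]
    return None  # invalid selection; the menu would simply be re-displayed
```

Because the user can only pick from the displayed list, no command syntax needs to be learned.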

iii.) Direct manipulation interfaces (DMIs) present users with a model of their information space, which users can manipulate by direct action. Since these interfaces manipulate information by direct action, it is not necessary to issue explicit commands to modify information. The Graphical User Interface (GUI) is the most popular implementation of a DMI. This type of interface makes use of visual objects to implement its model, and the user can manipulate these objects via a mouse or another pointing device. User Interface Management Systems (UIMSs) are implemented mainly as GUIs so that the interface governs the entire system and not just a single application. GUIs are further discussed in section 4.3.3 later in this report.

iv.) Special purpose interfaces are those which are used to control an embedded computer system (for example, an automatic bank machine). Such interfaces also control systems which combine the use of a general-purpose computer with special hardware and software for implementing the user interface.

Section 3 - Psychological Aspects of Interface Design

3.1 The Power of Visual Communication

What people see influences how they feel and what they understand. Visual information communicates non-verbally but very powerfully. This can be attributed to the emotional cues contained in visuals that motivate, direct or distract. This is shown by the way people tend to describe graphic information with adjectives like "fresh", "pretty", "boring", "conservative" and "wild". The advertising industry has taken advantage of this phenomenon for almost as long as publications have existed.

A study conducted in the early stages of the Macintosh development compared a set of tasks performed on both the Lisa and an MS-DOS based computer [11]. These tasks were actually more complicated on the Lisa, but the subjects in the test perceived them as being easier because the graphical interface made the tasks more fun. This is only one example of how visuals can motivate people.

3.2 Effects of Colour and Visual Objects

The retina of the human eye contains special cones which respond to stimulation by one of three primary colours, red, green, or blue-violet. Mixing of these colours and variations of their intensities can produce many other colours visible to the human eye. Modern colour photography and colour display technology use this same principle of mixing three basic colours to produce all other visible colours.

Colour can be described as having three physical properties: hue, saturation (or chroma) and brightness. Hue is the name of a colour, saturation is its purity or vividness, and brightness is where it falls on a scale from dark to light.
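The relationship between the three primaries and the hue/saturation/brightness description can be illustrated with Python's standard colorsys module, which converts between the two representations (colorsys calls brightness "value"):

```python
import colorsys

# Pure red at full intensity: hue 0.0, fully saturated, full brightness.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)  # 0.0 1.0 1.0

# Mixing the three primaries equally at half intensity gives a grey:
# no dominant hue, zero saturation, brightness equal to the intensity.
h, s, v = colorsys.rgb_to_hsv(0.5, 0.5, 0.5)
print(h, s, v)  # 0.0 0.0 0.5
```

This shows how the same colour can be described either as a mix of primaries or by its hue, saturation and brightness.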

Colour has emotional properties that help to mould a person's opinion of something visual. Colours can be arranged to produce a harmonious effect, or they can be arranged to produce an unpleasant one. There are many theories which explain both the pleasing and harmonious arrangements of colours and those combinations of colours which clash and are displeasing. These rules are, however, always subject to an individual's personal taste. Some of the most respected rules are described here. To make an element in a design stand out from its surroundings, a colour that is definitely lighter or darker than the surroundings should be chosen. Pleasing designs can be made of colours of the same hue, of definitely different but neighbouring hues, or of complementary hues. Designs involving two colours with opponent hues, for example red and green, should not be used because they appear to vibrate as the eye tries to focus on them. The use of bright colours on large areas also produces an unpleasant effect, since such use of colour tends to leave opponent after-images on the retina [2].

Proper use of colour improves learning; this has been demonstrated in various psychological tests [11]. If colours are well chosen and used in computer applications, they improve the marketability of products and give an impression of friendliness. They also help reduce the learning curve for these applications. If they are poorly chosen, they can severely affect usability and create a circus-like appearance that can confuse and irritate users.

A metaphor, or analogy, relates two otherwise unrelated things. Metaphors are used in applications to develop the user's conceptual image or model of an application. Using metaphors that are familiar and real-world based allows users to transfer previous knowledge of their work environment to a particular application interface. The best known metaphor is the desktop metaphor, where the screen represents a desktop and system entities are represented by folders on that desktop.

Visual objects can be used to implement a metaphor. A visual object is simply a representation (either verbally described or drawn) of a system entity or of an action that can be performed. For example, in the desktop metaphor, a folder is an object representing a particular file. Visual objects that are represented pictorially are called icons; a folder is an icon in the desktop metaphor. Icons allow users to easily identify different applications, files associated with different applications, or system components.

Visual objects that are three-dimensional and animated also have a very powerful effect on users. Objects that animate and perform a particular action after being acted on give the user a feeling of total control over that action. An example of such an object is the button. If a user clicks on a button via a mouse or another pointing device, he expects that button to be pushed in. If the button does not respond as the user expects, the user thinks that something is wrong. Animating the button being depressed reassures the user that the system is functioning properly. The user gains a feeling of control in knowing that pressing a single button or a combination of buttons results in a particular action being carried out by the computer.

Overall, the advantages of using metaphors and colour in user interfaces are a reduction in learning time for an application, motivation of users to use the application and increased user confidence.

3.3 Types of Users and User Preferences

Many people use computers for many different reasons. These people range from those with little or no computer skills to those with extensive computer knowledge. Some people who are new to computers are also afraid of using them for a variety of reasons. Different types of interfaces are therefore needed to cater for all types of computer users. A computer user should be able to accomplish any required task effectively without having to worry about interface issues, since this leads to a loss in productivity.

Finding an interface to cater for a particular user depends largely on that user's preferences. For example, an experienced user may prefer a command-driven interface, whereas a less experienced user may prefer a GUI. Command-driven interfaces allow faster interaction with the computer and simplify the input of complex requests; this is why most experienced users prefer them. An inexperienced user, however, prefers a GUI environment because it is easier to use and adapt to. Inexperienced users are also attracted to GUIs because their use of colours and visual objects tends to hold their attention. Inexperienced users may also be overwhelmed by the syntax of the commands they would have to learn before they could use a command-driven interface.

Section 4 - Practical Aspects of Interface Design

4.1 Methods Used in Formulating the User Interface Design

There have generally been two approaches to formulating the usability requirements, or the tasks which the user interface is expected to perform. These are:

  1. The Methodological approach
  2. The Training approach

4.1.1 Methodological approach

This approach emphasizes the use of good methods or tools in the design of the user interface. This includes the use of prototyping tools, iterative design techniques and empirical testing [1].

User Interface Prototyping
User interface prototyping involves the simulation of the proposed screen layouts and system responses before the actual implementation. It is usually performed early in the design process and is done regularly. This reduces uncertainty and risk regarding interface performance and ease-of-use. This technique is primarily used in the design of interfaces for custom-made applications. Initially, in the design phase of the system development life cycle, rudimentary prototypes produced on word processors can be used, whereas subsequent prototypes could become progressively closer to the final product. Final implementation of the user interface occurs late in the development cycle so that it can receive the full benefit of the prototyping process.

User interface prototypes should display the proposed screens in the standard sequence in which they will appear. This is particularly effective for menu-driven interfaces and GUIs. If the interface is operator-directed, the prototype must also accept input from the user and simulate the appropriate functions when they are selected by the user. Refinement of the prototype depends on the usability problems encountered by the end-users. Based on feedback from the user, the prototype may be modified, totally re-designed or accepted.

The tools which are used in user interface prototyping can either be specialized or general-purpose. The following are some of these tools:

General purpose tools:

  1. Fourth-generation languages - These languages are used in conjunction with Database Management Systems (DBMSs) which have screen and application generators. Hence, they can be used to build a small-scale version of the actual application with interactive screens and executable programs. Examples of fourth-generation languages include FoxPro and Paradox.
  2. Presentation tools - These can be used to demonstrate the successive stages in a dialog between the system and the user. An example of such a tool is Aldus Presentation.
  3. Standard applications - General purpose software such as word-processors and spreadsheets can also serve as prototyping tools. Word-processors with macro and hyper-textual capabilities can effectively simulate a user interface. Any package such as spreadsheets which can be programmed using macros, or some other type of pre-defined language, can produce interactive prototypes.

Specialized tools
There are specialized development aids which are designed specifically for user interface prototyping. Examples of such tools are ProtoScreens and ADEPT (Advanced Design Environment for Prototyping with Task Models). These tools usually generate the screen layouts, interface components and dialogues based on characteristics specified by the designer. For example, ADEPT uses abstract platform-independent models which provide the designer with a high-level specification of the interactions required between the user and the system, to perform the proposed tasks. The designer can then edit these models and translate them into 'concrete' models which contain detail-level descriptions of the interface objects, their behaviour and the screen layout. These models can then be implemented on any GUI platform [1].

Prototypes are usually discarded once they are no longer needed. However, if a specialized tool is used to create the prototype, it may be re-used in the actual implementation.

Iterative design
Iterative design techniques involve the production of components of the entire system in small increments on a regular basis. Small parts of the entire system may therefore be delivered every two to four weeks. This means that the user interface for each of these components must be repeatedly improved until the final iteration is achieved. Like prototyping, iterative design depends on feedback from the user when successive versions of the interface are formulated. When using an iterative design methodology, these are some of the issues to be considered:

  1. Recognize the usability problems of the interface based on feedback.
  2. Once a problem is identified, the interface designer must make changes to correct it. The rationale for the changes should be explicit and should be recorded to form an audit trail. This prevents future changes from sacrificing major usability principles of the interface for a relatively minor gain.
  3. The quality of the interfaces produced from successive iterations depends on the quality of the original. Thus iterative design may only improve problems in a limited range. The better the design to start with, the better the results after iterations.

Even though some of the tools used in prototyping are also used in iterative development, there is an essential difference between the two methods with respect to the user interface. A prototype of a user interface is developed with the intention of discarding it once changes are to be made. However, iterative development involves repeatedly improving the actual user interface until a satisfactory design is achieved.

Usability Testing
This type of testing is used in conjunction with user interface prototyping and iterative design techniques. Basically, it is the evaluation of the interface based on results of experiments involving feedback from the test-users. There are two basic forms of usability testing:

  1. Testing of a completed user interface to determine whether or not the usability requirements for the interface have been achieved. This involves quantitative measurement.
  2. The evaluation of an interface which is still in the process of being designed. The objective of this evaluation is to determine which parts of the interface work properly and which parts have not met usability requirements.

Usability testing may be conducted via several methods: for example, observation of users working on a set of standard tasks, the use of attitude questionnaires, and the use of automatic computer detection and logging of the users' actions. From these tests, a list of the usability problems encountered is produced. These problems are then categorized by their nature and frequency of occurrence, and their impact on user productivity and satisfaction is determined. Based on these attributes, the problems are prioritized, and those with the highest priority are solved in the next iteration or prototype. In some cases solving a problem for one user may actually create unforeseen problems for another; in such cases, a compromise is necessary.
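The categorization and prioritization step can be sketched as follows. The sample problems and the frequency-times-impact scoring rule are illustrative assumptions, not a standard formula:

```python
# Prioritize usability problems by frequency of occurrence and impact
# on user productivity/satisfaction (impact rated 1-5 here).
problems = [
    {"desc": "Save dialog hides filename field", "frequency": 12, "impact": 3},
    {"desc": "Menu label ambiguous",             "frequency": 30, "impact": 1},
    {"desc": "Undo loses clipboard contents",    "frequency": 4,  "impact": 5},
]

def priority(p):
    # Simple severity score: frequent, high-impact problems rank first.
    return p["frequency"] * p["impact"]

ranked = sorted(problems, key=priority, reverse=True)
for p in ranked:
    print(priority(p), p["desc"])
```

The highest-ranked problems would be addressed in the next iteration or prototype.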

4.1.2 Training Approach

This approach to user interface design relies on the training and knowledge possessed by the designer. Emphasis is placed on the formulation of a good interface design through the expertise of the person responsible for designing the interface. This training may include various fields such as educational psychology, instructional design, and ergonomics (the scientific study of human comfort). Traditionally, user interfaces have been designed by specialists in the area of computer science. However, the increasing use of professionals trained in human factors shows that a fairly diverse set of knowledge is necessary for a good interface design. This fact has been recognized by software industry leaders such as Microsoft, who now employ psychologists in the design of their user interfaces [1].

These methods of formulating the usability requirements have certain drawbacks associated with them. Prototyping and iterative design depend on the quality of the initial design; if the quality of the first iteration or prototype is poor, then a large number of successive iterations may be required to correct the design. Due to the lack of user feedback in the training approach, it is very difficult to produce a design which caters for the needs of a large number of users if this approach is used alone. As a result of these disadvantages, a combination of these approaches is sometimes used. Such a combination will result in fewer iterations and prototypes, since the initial design will contain fewer errors and will solve a larger number of usability problems.

4.2 General Design Guides for User Interface Elements

4.2.1 General Principles of Good User Interface Design

User interface design must take into account the needs, experience and capabilities of the user. User interfaces should be designed so that useful interaction can be developed between the user and the system. The interface must be user friendly and must support the user through every stage of interaction. This can be accomplished by allowing users to develop a conceptual model of how an application should work. The user interface should confirm the conceptual model by providing the outcome users expect for any action. This occurs only when the application model is the same as the users' conceptual model.

Since there are different types of user interfaces, different guidelines will apply specifically to each design. There are however some general principles which are applicable to all user interface designs and they are listed as follows:

  1. The interface should be user driven
  2. The interface should be consistent
  3. The interface should avoid modes
  4. The interface should be transparent
  5. The interface should include error recovery mechanisms
  6. The interface should incorporate some form of user guidance

The interface should be user-driven
Often the goal of an application is to automate what was a paper process. With more people beginning to use computers to do their work, an interface designer should try to make the transfer to the computer simple and natural. Applications should be designed to allow users to apply their previous real-world knowledge of the paper process to the application interface. The design can then support the users' work environment and goals. Potential users should therefore be involved in the design process of the user interface in an advisory capacity, and their feedback should be incorporated in the user interface at each stage of its design. [6]

The interface should be consistent
Interface consistency means that system commands and menus should have the same format, parameters should be passed to all commands in the same way, and command punctuation should be similar. Consistent interfaces reduce learning time, since knowledge gained in one command or application can be applied to other parts of the system. [6]

Consistency throughout an application can be supported by establishing the following:

  1. Common presentation
  2. Common interaction
  3. Common process sequence
  4. Common actions

Common presentation is concerned mainly with a common appearance of the interface. Users can become familiar with interface components when the visual appearance of these components is consistent and, in some cases, when the location of these components is consistent.

Common interaction deals with the interaction of the user with different interface components. After users can recognize interface components, they can interact with these components. Once interaction techniques associated with each component are consistently supported, users become familiar with these techniques.

A process sequence defines a series of steps to follow when a user wants to perform a particular type of action. A common process sequence will define steps for a particular action which must be supported by all applications in the system. When an application consistently supports a common process sequence, users become familiar with the way to interact with the application.

Common actions provide a language between users and the system so users can understand the meaning and result of actions. For example, when users select the OK action, they are telling the computer they have finished working with a particular entry, selection or window and want to continue with the application.

Interface consistency across applications is also important. Applications should be designed so that commands with similar meanings in different applications can be expressed in the same way.

The interface should avoid modes
Users are in a mode whenever they must cancel what they are doing before they can do something else, or when the same action has different results in different situations. Modes force users to focus on the way an application works instead of the task to be done. Modes, therefore, interfere with the user's ability to use his/her conceptual model of how the application should work. It is not always possible to design an application without modes; however, when used, they should be made an exception and limited to the smallest possible scope. Whenever a user is in a mode, it should be made obvious by providing good visual cues. The method for ending a mode should also be easy to learn and remember. Some types of modes are, however, acceptable in a user interface. They are:

  1. Modal dialogs
  2. Spring-loaded modes
  3. Tool-driven modes

Sometimes an application needs information to continue, such as the name of a file into which the user wants to save something. When an error occurs, users may be required to perform some action before they can continue their task. The dialogs associated with these events are modal dialogs.

Users are in a spring-loaded mode when they continually take some action that keeps them in the mode. For example, users are in a spring-loaded mode when they drag the mouse with a mouse button pressed to highlight a portion of text. Here, the visual cue for the mode is the highlighting.

If users are in a drawing application, they must be able to choose a tool, such as a pencil or paintbrush, for drawing. After users select the tool, the mouse pointer shape may change to match the selected tool or the tool selection may remain highlighted. This type of mode is called a tool-driven mode because the selection of an application tool puts the user in a mode. Users are in a mode but they are not likely to be confused because the changed mouse pointer or highlighted selection is a constant reminder they are in a mode. [6]
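A tool-driven mode can be modelled as a small piece of state plus a constant visual cue. This is a generic sketch; the tool names and cursor shapes are hypothetical, not taken from any particular drawing application:

```python
# Tool-driven mode sketch: selecting a tool changes both the meaning
# of a click and the pointer shape that reminds the user of the mode.
CURSORS = {"select": "arrow", "pencil": "crosshair", "paintbrush": "brush"}

class Canvas:
    def __init__(self):
        self.tool = "select"          # default, non-drawing mode

    def choose_tool(self, tool):
        self.tool = tool              # enter the tool-driven mode

    @property
    def cursor(self):
        # The changed pointer is the constant visual cue for the mode.
        return CURSORS[self.tool]

    def click(self, x, y):
        # The same action (a click) has different results in different modes.
        if self.tool == "select":
            return f"selected object at ({x}, {y})"
        return f"drew with {self.tool} at ({x}, {y})"
```

The cursor property makes the current mode visible at all times, which is what keeps this kind of mode from confusing users.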

The interface should be transparent
Users should not be made to focus on the mechanics of an application; a good user interface does not bother the user with mechanics. Users view the computer as a tool for completing tasks and should not have to know how an application works to get a task done.

A goal of user interface design is to make the user's interaction with the computer as simple as possible. A user interface should be so simple that users are not aware of the tools and mechanisms that make the application work. As applications become more complicated, users should still be provided with a simple interface so that they can learn new applications easily.

An application should reflect a real-world model of the user's goals and the tasks necessary to reach those goals. The user interface should therefore be so intuitive that users can anticipate what to do next by applying their previous knowledge of doing tasks without a computer. One way to provide an intuitive user interface is through the use of metaphors, discussed earlier in section 3 of this report.

The interface should include error recovery mechanisms
Users inevitably make mistakes when using a system. The interface design can minimize these mistakes (for example, using menus eliminates typing mistakes), but mistakes can never be completely eliminated. The interface should therefore provide facilities for recovering from them. These can be of two kinds:

  1. Confirmation of destructive actions. If a specified action is potentially destructive, the system should prompt the user for confirmation of that action before any information is destroyed.
  2. The inclusion of an undo facility which returns the system to a state before the action occurred. Many levels of undo are useful since users are not always immediately aware of mistakes. In practice, this is expensive to implement and most systems only allow the last command issued to be `undone'.
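The multi-level undo described in point 2 is commonly implemented as a stack of saved states or inverse actions. The following is a minimal sketch under that assumption, not any particular system's implementation:

```python
# Multi-level undo sketch: each edit pushes the previous state onto a
# stack, and undo pops the most recent snapshot to restore it.
class Document:
    def __init__(self):
        self.text = ""
        self._history = []            # stack of previous states

    def insert(self, s):
        self._history.append(self.text)
        self.text += s

    def undo(self):
        if self._history:             # ignore undo when nothing to revert
            self.text = self._history.pop()
```

Because every edit pushes a snapshot, the user can step back through several mistakes; a single-level undo would keep only the last snapshot.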

The interface should incorporate some form of user guidance
User interfaces should have built-in `help' facilities. These should be accessible from a terminal and should provide different levels of help and advice. Help facilities should be structured so that users are not overwhelmed with information when they ask for help.

Principles that should be followed when designing messages of any type are:

  1. Messages should be tailored to the user's context. The user guidance system should be aware of what the user is doing and should alter the output message appropriately if it is context-dependent.
  2. Messages should be tailored to the user's experience level. As users become more familiar with a particular system, they become irritated by long `meaningful' messages. Beginners, however, find it difficult to understand short, brief statements of a problem. The user guidance system should provide both types of messages and should allow the user to control the wordiness of the messages.
  3. Messages should be tailored to the user's skills. Terminology used in messages should be understood by the users of the system. For example, complex programming terms or computer `jargon' should not be used in a system designed for use by a secretarial staff.
  4. Messages should be positive rather than negative. They should use the active rather than the passive voice. Messages should never be insulting or funny.
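The verbosity control described in point 2 can be sketched as a lookup keyed by the user's chosen experience level. The message texts and level names here are illustrative assumptions:

```python
# Tailor an error message to the user's experience level. The user,
# not the system, chooses which level is in effect.
MESSAGES = {
    "file_not_found": {
        "beginner": ("The file could not be found. Check that the name "
                     "is spelled correctly and try again."),
        "expert": "file not found",
    },
}

def message(key, level="beginner"):
    # Same condition, different wording: long and explanatory for
    # beginners, short and brief for experienced users.
    return MESSAGES[key][level]
```

Keeping both wordings for every condition lets the same guidance system serve both beginners and experienced users.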

4.2.2 Dialogue Design and Help Styles

A dialogue is defined as communication between two or more entities. Since the user interface deals with human-computer communication, it would not be unreasonable to say that the conversation which occurs in the user interface is a dialogue. Thus, it is important to have a properly thought-out and designed dialogue. The designer must decide whether the dialogue is controlled mainly by the user or by the computer. The style of help available to the user must also be considered.

Types of Dialogues

There are two types of dialogues:

  1. Program-directed dialogues
  2. Operator-directed dialogues

Program-directed dialogues
The flow of the dialogue is controlled by the application. The user is directed to enter commands and data as they are required by the system. Each screen presented to the user is fixed in format. [3]

The most common forms of interaction which represent program-directed dialogues are:

Menu systems
The user is given a limited number of choices and is allowed to select one.
The form-filling metaphor
The application displays an on-screen form with text fields and captions indicating what data should be entered in each field. The user then fills in the form in a standard sequence, from left to right and top to bottom.
Question-and-answer dialogues
The program displays a question and asks the user to enter an answer.
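A form-filling dialogue can be sketched as a fixed list of captioned fields validated in the form's standard sequence. The field names and validation rules below are hypothetical:

```python
# Form-filling sketch: the application fixes the fields, their captions
# and their order; the user only supplies values.
FIELDS = [
    ("name",     lambda v: len(v) > 0),       # must not be empty
    ("age",      lambda v: v.isdigit()),      # must be a number
    ("postcode", lambda v: len(v) == 6),      # hypothetical 6-character code
]

def fill_form(entries):
    """Validate entries in the form's standard sequence."""
    form = {}
    for (caption, valid), value in zip(FIELDS, entries):
        if not valid(value):
            return f"error in field '{caption}'"
        form[caption] = value
    return form
```

The fixed field order and per-field validation illustrate why each screen in a program-directed dialogue is fixed in format.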

Operator-directed dialogues
The application is directed by the user to perform tasks in a sequence which is determined by the user. The occurrence and nature of the current task to be performed is totally determined by the user. The methods commonly used to implement operator-directed dialogues are:
Command languages
The user enters an explicit command to initiate an action.
Direct Manipulation Metaphors
Tasks to be performed are represented as objects. The user manipulates these objects to accomplish tasks.[3]
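To make the distinction concrete, a program-directed dialogue of the form-filling kind can be sketched in a few lines of Python. This is an illustrative sketch, not taken from the report's sources; the field names and injected read function are hypothetical:

```python
def form_filling_dialogue(captions, read=input):
    """Program-directed dialogue: the application fixes the order in which
    data is entered, presenting one captioned field at a time."""
    record = {}
    for caption in captions:            # fixed, system-imposed sequence
        record[caption] = read(caption + ": ")
    return record
```

Passing a function for read makes the sketch testable. An operator-directed dialogue, by contrast, would wait for the user to choose the next action rather than imposing a sequence.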
Standards for dialogue design
A number of standards have been put forward by the International Organization for Standardization (ISO) [8]. These may be grouped into five broad categories: feedback, suitability, user control, error handling, and learning.

Feedback provided by the dialogue should assist the user in gaining an understanding of the system so that their tasks are made easier. It should be limited both in scope and content to the action being carried out by the user. It should also minimize the user's need to consult user manuals or external sources of information, thereby avoiding frequent media switches and confusion of the user. Finally, feedback should be self-explanatory; that is, the user should be able to tell what is being done rather than having to guess.

Suitability covers many areas. It deals with the fact that the user should only receive information, and carry out tasks, that are applicable to what they are currently doing. To this end, consideration should be given to the context, type, and scope of information to be presented to the user. Terminology used should be consistent and context-based rather than dialogue-based. Also, input to and output from the dialogue should fit the task at hand.

User control deals with the degree to which the dialogue permits the user to control it. A properly designed dialogue should allow the user as much control as is feasible without burdening them with background activities not related to the user's task. Whenever input is requested, the dialogue should give the user information about the expected input. The user should be supported when carrying out repetitive actions. If data changes while a task is being carried out, the original data must remain accessible, should the user decide to restart for whatever reason.

The user should be able to control the speed at which they interact with the dialogue, since they should work at a pace they are comfortable with. If possible, the dialogue should allow the user to control how they proceed within it; that is, if the task can be accomplished in a number of different ways, the user should be able to choose among them rather than be forced to conform to an order imposed by the dialogue designer. If the amount of data displayed for a given task can be controlled, the user should be able to control it in order to avoid information overload. If alternative dialogues or representations are available for performing a task, the user should be able to select the one they are most comfortable with, since this can improve their productivity.

The application should try as far as possible to prevent the user from making errors. Should any errors be made, they should be explained so that the user may deal with them properly. If the error can be dealt with by the dialogue, the user must be so informed and then be given the option of letting the dialogue handle the error or of overriding it and correcting the error themselves. If the dialogue has been interrupted for whatever reason, the user should be able to determine the point of restart when the dialogue is resumed, should the task permit. Error correction, wherever possible, should not change the status of the dialogue.

No one can fully exploit an application's capabilities the first time they use it. There is a period of learning involved as the user gets accustomed to the application and what it can do. A properly designed dialogue should not only allow for learning but support it as well. Dialogues used for similar purposes should be similar in appearance to enable the user to develop common problem-solving procedures. Also, wherever possible, the user should be able to incorporate their own vocabulary in establishing an individual naming system for objects. If there are rules and underlying concepts which are useful for learning, these should be available to the user so that they can build up their own grouping strategies and rules for memorizing activities. If any relevant learning strategies exist, they should be supported.

There should also be provision for a user to relearn an application that they have not used in some time. In addition, there should be a number of alternative ways for users to familiarize themselves with the dialogues, so that each user can pick the one they are most comfortable with.

The design process
There are a number of guidelines which should be followed when designing dialogues. The dialogue should cater to the expected type of user, and the designer should avoid the 'perpetual beginner' scenario (designing with extensive guidance and explicit instructions at every step), since this usually annoys more users than it can possibly help.

The dialogue, where possible, must support and enhance the user's conceptual model of the system, since this often affects how they work. A consistent conceptual model will ensure that the user is not adversely affected by having to work in an environment to which they are not suited.

Also important is the degree of freedom given to the user by the dialogues. This depends on whether the dialogue is program or operator directed. For example, the dialogue depicted in figure is an example of a program-directed dialogue which uses the form-filling metaphor. This dialogue is appropriate for the purpose of information collection, since it limits the scope of errors which can be made by the user.

Example of a form filling dialogue.

Each type of dialogue has associated with it the particular interactions previously described. For program-directed dialogues, provision of help is easily accomplished, since the program is aware of where the user is and the type of help they are likely to need. For operator-directed dialogues, provision of help is not as easily accomplished, since the program does not know where the user is or what type of help they would likely need.

The type of dialogue is dictated by the kind of use to which it will be put. Nothing could be worse for a user than to find themselves working in an environment where the dialogue is inappropriate since this can make their work difficult or even impossible.

When designing a dialogue there are certain elements that must be considered as well. These are: clarity, consistency, performance, and respect for the user [3]. Clarity of dialogue design means that the user is given a clear idea of what they are doing and what is going on without having any uncertainties. For example, the dialogue shown in figure does not do a good job of communicating the nature and cause of the error to the user. In this case, the dialogue should explicitly tell the user what type of error occurred and what caused it.

Example of a badly designed dialogue.

Consistency means that similar commands look and act similar regardless of the circumstances under which they are used. Performance means that the user will not have to endure a drawn out series of actions in order to accomplish their tasks, that is, they should be able to do it in the least amount of time. Respect for the user implies that dialogue messages adopt a neutral tone and avoid any "cute" tricks which have a tendency to annoy people rather than help them.

Help Styles
Since help facilities are an important part of a dialogue, it is worthwhile to look at the kinds of help facilities available. There are three basic types of help: fixed help, context sensitive help, and data dependent help. [3]

Fixed help is invoked by pressing some sort of function key. It appears as page after page of text which the user navigates using page-up and page-down keys. When the help is terminated, the user returns to the point where they left off in the application.

Context sensitive help is more ambitious since it provides help based on what the user is doing thereby providing help that is more closely geared to the user. This type of help attempts to determine what the user was doing at the time of the request for help. As a result it would display a specific help screen for that action.

Data dependent help enables the user to position the cursor in a data entry field, press a help key and obtain information on what the acceptable forms of input are for that field.
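The three help styles differ mainly in what the lookup is keyed on, which can be sketched as follows (the help texts and keys are hypothetical, for illustration only):

```python
# Fixed help: the same pages regardless of what the user is doing.
FIXED_HELP_PAGES = ["Page 1: Overview of the system.", "Page 2: List of commands."]

# Context-sensitive help: keyed on the user's current action.
CONTEXT_HELP = {"editing": "Use the Save command to keep your changes.",
                "printing": "Select a printer before printing."}

# Data-dependent help: keyed on the data entry field under the cursor.
FIELD_HELP = {"date": "Enter a date as DD/MM/YYYY.",
              "amount": "Enter a positive number, for example 125.50."}

def help_for(style, key=None):
    """Return the help appropriate to the given style and key."""
    if style == "fixed":
        return FIXED_HELP_PAGES
    if style == "context":
        return CONTEXT_HELP.get(key, "No help is available for this task.")
    if style == "data":
        return FIELD_HELP.get(key, "No help is available for this field.")
    raise ValueError("unknown help style: " + style)
```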

4.2.3 Report and Form Design

Input forms and output reports form part of the user interface since they facilitate communication between the users and the computer system. Their importance is diminishing due to the increasing use of electronic files to communicate transaction data to information systems, and of on-line data entry to provide the required input to the system. However, their use is still required for communication with external entities (for example, invoices for customers) and for system audit purposes. Good design of these interface elements must consider the total costs and overall benefits of each document.

Input Form Design
The data required by an input form depends on the data requirements of the associated application. The layout of the form should allow users to enter data in a sequence that is natural to them rather than one imposed by the system. The following guidelines should be followed in the design of input forms:

Report Design
A report is a summary of numerical data which has a standard 'row and column' format. A report may also contain data in graphical format such as graphs and charts. The content of a report should satisfy the data needs of its recipient. The report should be complete and informative without containing redundant or irrelevant information. The following are some typical purposes which reports serve:
If the organization has formal standards for report design, the designer should conform to these. However, if there are no such standards, the designer should still make the report design consistent with documents from other sub-systems of the application. The general guidelines which should be followed are:
The report should be divided into three main parts:
The page numbers are usually placed at the top of the report and unused space at the bottom of each page may be used to provide instructions.

4.2.4 Heuristics in User Interfaces

A 'heuristic' is a problem-solving technique in which the most appropriate solution is chosen using rules [13]. Interfaces using heuristics may perform different actions on different data given the same command. A simple example is that the Microsoft Windows file manager moves a file when it is dragged from one location to another on the same drive. When a file is dragged onto a different drive, the file is copied.
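The file manager example amounts to a single rule, which can be sketched as:

```python
def drag_action(source_drive, target_drive):
    """Heuristic from the File Manager example: dragging within one
    drive means 'move'; dragging across drives means 'copy'."""
    return "move" if source_drive == target_drive else "copy"
```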

Each new version of a software product adds new features and new commands, and the product therefore becomes increasingly complex. The interface consequently becomes more complicated, even with the best direct manipulation methods. The result is that users are intimidated by the interface and also find it difficult to locate commands. This makes the use of intelligent interfaces to help guide the user and automate parts of the task increasingly necessary.

Successful use of a heuristic requires that:

Advantages of using heuristics are as follows:
The disadvantages of heuristics are:

4.2.5 Consequences of Bad User Interface Design

Bad user interface design can cause many problems, not the least of which is the unwillingness of people to use the application. Bad design can also affect productivity: a badly designed user interface forces the user to take longer to accomplish their tasks. Also to be considered is the strain that users may face by having to work with an interface that is poorly designed.

Users would be unwilling to use a badly designed interface because they will not feel comfortable with it. As a result they may tend to avoid the application. Another aspect of unwillingness is the fact that the average person tends to resist change. If the application makes it hard for them to make the transition, then they will not accept it. Dialogs with too much and/or complicated wording can scare away novice and intermediate users. Experienced users who simply want to get the job done can be frustrated by a dialog that is too "friendly" thereby impeding their efforts.

Should the user go ahead and use the application despite the poorly designed interface, they will be working under pressure for as long as they use it. This pressure brought on by the badly designed interface can lead to a loss in productivity, since the user is not performing to their maximum ability.

Bad design can scare users away from the application, or even from computers in general if it happens often enough (this applies mostly to novice and intermediate users). For example, a cluttered screen can intimidate people, as can menus with many options. Bad designs may also lead users into committing errors. These errors may be trivial or catastrophic. Examples of catastrophic errors are system crashes and loss of files. Examples of trivial errors are selecting the wrong or inappropriate option, and misinterpreting prompts or requests for input.

From a commercial standpoint a bad user interface design can mean revenue losses because people would refuse to buy a badly designed product. Not only could it cause revenue losses, but it may also give the associated software company a negative reputation which will hurt the company in the future.

4.3 Design Guides for Specific Types of User Interfaces

4.3.1 Command-Driven Interfaces

Command-driven interfaces require the user to type a text command to the system. The command may be a query, the initiation of some sub-system or it may call up a sequence of other commands. Some basic design issues for command line interpreters are discussed below. [14]

Command-driven interfaces do not require much effort in the area of screen design and screen management since the user types everything into the system. A basic black or white background screen with white or black text is normally used.

It is possible to allow a user to combine interface commands to create new command procedures. This is a powerful facility in the hands of experienced users, but it may prove unnecessary for a majority of users with some types of applications.

The designer of a command interface must try to develop meaningful mnemonics and retain brevity to minimize the amount of typing required by the user. As users gain experience, they usually prefer short commands.

A command-driven interface can also be designed to incorporate the redefinition of its commands. The advantage of redefinition is that a command can be brief for experienced users, and expanded for inexperienced users. Users can also redefine commands to suit their personal preferences which leads to better interaction between the users and the system. The disadvantage is that wide-scale redefinition means that users no longer share a common language for communicating with the system.
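Command redefinition can be sketched as a table of names pointing at shared actions, so that a brief form and an expanded synonym invoke the same routine. This is a hypothetical sketch; the command names are illustrative:

```python
class CommandTable:
    """Sketch of a redefinable command set for a command-driven interface."""
    def __init__(self):
        self._commands = {}

    def define(self, name, action):
        self._commands[name] = action

    def redefine(self, new_name, existing_name):
        # Both names now invoke the same underlying action.
        self._commands[new_name] = self._commands[existing_name]

    def run(self, name, *args):
        if name not in self._commands:
            return "Unknown command: " + name   # simple error handling
        return self._commands[name](*args)
```

An experienced user might type del while a beginner types delete; both reach the same action, at the cost of users no longer sharing one common command language.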

Command-driven interfaces must be equipped with error handling and message generation routines since users inevitably make mistakes in typing. Message generation routines are also needed to alert the user to a change in the system's status. A help facility must also be provided to allow users new to the system to learn its commands. The advantages of command interfaces are:

The disadvantages of command interfaces are:

4.3.2 Menu Driven Interfaces

Menu-driven interfaces present the user with a list of options from which to select. The user may make this selection via a keyboard or a pointing device such as a mouse. Selecting an option may initiate a command (such as 'save' or 'print') or may present the user with a sub-menu which has another list of options. These lower-level menus are said to be nested inside the menu that activates them. There are general guidelines which should be followed in the design of menu-driven interfaces [3]. These are as follows:

There are three major categories into which menus can be divided:

Full Screen Menus
These menus usually present the options to the user as a sequential list which occupies the entire screen. This is usually followed by a message prompting the user to select one of the options. In order to facilitate quick selection, the options may be numbered or a letter may be used to uniquely identify each option. In either case, the user selects an option by simply entering the corresponding number or letter.
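A full screen menu of this kind reduces to rendering a numbered list and validating the user's entry, sketched below (the option names are illustrative):

```python
def render_menu(title, options):
    """Build the text of a numbered full screen menu with its prompt."""
    lines = [title, ""]
    lines += ["%d. %s" % (i, opt) for i, opt in enumerate(options, start=1)]
    lines += ["", "Enter the number of your choice: "]
    return "\n".join(lines)

def select(options, entry):
    """Return the chosen option, or None if the entry is invalid."""
    if entry.isdigit() and 1 <= int(entry) <= len(options):
        return options[int(entry) - 1]
    return None
```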

Bar and Pull-down Menus
The main options available to the user are presented as pads on a horizontal bar across the screen. When the user selects one of the pads on this menu, the second-level options are displayed in a pull-down menu. This type of menu system is primarily used in conjunction with a pointing device. However, options may also be selected using 'short-cut' key combinations and arrow keys.

Pop-up Menus
These menus usually appear as a box with one of the options already selected. When the user points to the box with a mouse and presses the mouse button, the other options are displayed in a list. The user can then select the required option with the mouse. The menu options remain visible only while the mouse button is depressed. These menus are usually used in the task area of the application. If all the options cannot be displayed on the menu, the menu may scroll automatically or on command from the user.

The advantages of menus are:

Their disadvantages are:

4.3.3 Graphical User Interfaces (GUI's)

Direct Manipulation
Direct manipulation interfaces were defined previously in section 2.3. These interfaces use real world based metaphors in their implementation to build the user's conceptual model of the system.

An example of a metaphor is the electronic spreadsheet. Users of a spreadsheet application are under the impression that they are working with a two-dimensional sheet of paper on the computer, just as they would if they were using a tangible spreadsheet.

Most Graphical User Interfaces use the popular desktop metaphor. Due to their graphical nature, these interfaces can represent options available to the user by the use of visual objects such as icons and buttons. These types of interfaces are usually referred to as object oriented because of their use of visual objects.

Types of Information
In general, Graphical User Interface applications present users with two types of information: objects and actions. [6]

Objects are the focus of the users' attention. For example, in a word processing application, the users' focus is on the document which is the object they are manipulating. By focusing users' attention on objects, GUI's allow users to concentrate on their work rather than on how the application is performing the task.

Actions modify the properties of an object or manipulate it in some way. Properties are unique characteristics of an object that describe that object. 'Save' and 'Print' are examples of actions that manipulate objects.

Elements of a Graphical User Interface
The main elements of a Graphical User Interface are as follows:

  1. Windows
  2. Icons
  3. Menus
  4. Pointers
  5. Alerts and Warnings
  6. Dialog Boxes [3, 14, 6]
The first four elements were responsible for the term 'WIMP' being applied to these types of interfaces. An example of a Graphical User Interface is shown in figure

Microsoft Windows Graphical User Interface.

A window is an interface component through which objects and actions are presented to the users. It is an area of the screen and is dedicated to a specific purpose. All messages, programs, icons and dialogs are contained in windows.

Windows may be tiled or overlapping. Tiled windows occupy a fixed area of the screen and no window can use the space occupied by another window. If one window is enlarged, all other windows are shrunk to maintain the tiled arrangement. Overlapping windows do not occupy a fixed area of the screen. They can be moved around by the user at will and can be resized without affecting the surrounding windows. Overlapping windows can be obscured partially or wholly by other windows, as shown in figure. Overlapping windows are more flexible, especially when large screen areas are unavailable. Tiled windows, however, are more productive since their entire area can be viewed without obstruction. Figure shows an example of tiled windows.

Program Manager from Microsoft Windows.

Both tiled and overlapping windows may also be scrolling. When the physical size of a window does not allow all the elements within that window to be displayed, it becomes necessary to be able to move elements currently within the display area of the window out and move the undisplayed elements of the window into the display area. This process is known as scrolling and windows that have this capability are known as scrolling windows. Figure shows an example of a scrolling window (the window that contains the Microsoft Excel application).
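Scrolling reduces to maintaining the index of the first visible element and clamping it to the content, as in this sketch (a simplified, one-dimensional model of a scrolling window):

```python
def visible_slice(elements, top, height):
    """Return the elements currently inside the window's display area."""
    return elements[top:top + height]

def scroll(top, delta, total, height):
    """Move the display area by delta, clamped to the content bounds."""
    return max(0, min(top + delta, max(0, total - height)))
```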

The major design issues surrounding windows are whether to make them tiled or overlapping, and whether or not to make them scrolling. This depends mainly on the application being developed. The common approach is to model the windows to be consistent with the rest of the interface.

An icon is a pictorial representation of an object or action. Icons can represent objects that users want to work on or actions that users want to perform. A unique icon also represents an application when it is minimized.

Care must be taken when designing icons. The pictures must be carefully drawn so that they are understandable by users. The purpose of each icon must also be clear to users, hence great care must be taken in the choice of picture used in the icon.

Figure shows some icons representing actions in the Microsoft Word application. Both figures show icons representing minimized applications.

Microsoft Word Tool Bar.

Menus have been previously defined and discussed. The Graphical User Interface can support all types of menus. The more common types found in this interface are pull-down menus, menu bars, scrolling menus and pop-up menus. The pull-down menus of a Graphical User Interface can also be of a hierarchical or walking type. The same issues discussed in the design of menus in section 4.3.2 apply here as well.

A pointer is a symbol displayed on the screen that is controlled by a pointing device, such as a mouse. It is used to point at objects and actions users want to select; the pointer is the tool used to drive the GUI. Pointers are usually designed in the shape of an arrow to point to different selections. Pointers can also change shape to provide feedback to the user. For example, when a long operation is being performed, the pointer changes to an hour-glass or stopwatch to indicate to the user that the application is still functional but the operation will take some time to complete. When designing an interface to incorporate shape-changing pointers, care must be taken that the shape of the pointer suits the operation or mode the user enters when the pointer changes shape. Figure shows an example of a mouse pointer.

Mouse Pointer.
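The hour-glass behaviour described above is commonly implemented as a scoped shape change: set the busy shape on entry to a long operation and restore the arrow on exit. A minimal sketch (the class and shape names are hypothetical):

```python
class Pointer:
    """Sketch of pointer-shape feedback during long operations."""
    def __init__(self):
        self.shape = "arrow"

    def busy(self):
        return _BusyPointer(self)

class _BusyPointer:
    """Context manager: hour-glass while the enclosed operation runs."""
    def __init__(self, pointer):
        self.pointer = pointer

    def __enter__(self):
        self.pointer.shape = "hourglass"
        return self.pointer

    def __exit__(self, exc_type, exc, tb):
        self.pointer.shape = "arrow"    # always restore the arrow
        return False
```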

Alerts and Warnings
Alerts and warnings are both forms of modal dialogue. They appear in windows that open automatically to alert the user to any change that the user should be notified of. Since they are modal, they require the user to take some form of action before the application can continue. The user can choose between overriding the alert or making a correction so that the condition disappears. The user can also suspend the current transaction and start another in a new window.

Dialog Boxes
A dialog box is a fixed-size, movable window in which users provide information that is required by an application so that it can perform a user request. Figure illustrates an example of a dialog box.

Example of a Dialog Box.

Processing Capabilities of Graphical User Interfaces
The graphical nature of graphical user interfaces gives them the ability to perform a variety of operations, including the combination of text and graphics, that were previously not possible with a single type of interface. Some of these operations are listed below.
  1. What You See Is What You Get (WYSIWYG) editing
  2. Image Scanning
  3. Processable Graphics
  4. Animation and Support for Multimedia
  5. Porting of documents or files across different applications
What You See Is What You Get (WYSIWYG) Editing
WYSIWYG editing refers to the representation of an image on screen as an exact image of the end result (for example, output on paper). This was previously possible, but only with text-based documents produced on word processors. Graphical user interfaces take WYSIWYG editing further since the interface screen can now provide reliable images of both text and graphic outputs.

Scanned Images
Scanned images refer to the capture, storage and display on a computer of documents created either manually or on some incompatible technology. Scanning of images is becoming more important to businesses for archival purposes and ease of duplication. It is also becoming more important to be able to combine scanned images with active documents for presentation and audit purposes. Graphical user interfaces give users the ability to perform such a combination and in some cases even allow users to modify the scanned image.

Processable Graphics
Processable graphics refer to drawings whose components can be recognized and processed as such by a program. Some examples of applications with processable graphics are Computer Aided Software Engineering (CASE) tools, Computer Aided Design (CAD) tools and Computer Aided Engineering (CAE) tools. These applications can recognize different shapes and interconnections of shapes as valid designs according to the semantics of the application. These applications also allow users to test their designs where applicable.

Animation and Support for Multimedia
The manipulation of graphics to produce moving images is called animation. Animation was originally found in game applications. However many businesses now use animation for presentation and marketing purposes. Animation is also necessary for video conferencing which is used in some companies. A graphical user interface is the predominant interface for applications that use animation for the above mentioned purposes.

Multimedia is a means of combining realistic images, full motion video, sound, computer graphics, and text based presentation facilities to provide information in a multisensory format which can be quickly and easily understood. Multimedia is mainly used in applications for education and entertainment. An example of a multimedia application for education is the electronic encyclopedia. When an item is referenced in the encyclopedia, an image of the item is shown along with an animation clip and text.

Porting of documents or files across different applications
Sometimes it is necessary to use a file created in one application in another. In the past, this required explicit code to perform file conversions. This type of operation can be easily accomplished in a UIMS with a GUI interface. Since a GUI treats all files as objects, it makes it possible to embed one object within another. When this is done, a link to the application that created the embedded object is made. Therefore, whenever the embedded object is to be manipulated, the application that created it is executed as a process separate from the application creating the whole document.
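The embed-with-link arrangement described above can be sketched as an object that carries its data together with the name of its creating application; editing the object launches that application. All names here are illustrative, not an actual UIMS API:

```python
class EmbeddedObject:
    """Sketch of object embedding with a link back to the creator."""
    def __init__(self, data, creator_app):
        self.data = data
        self.creator_app = creator_app   # link to the creating application

    def edit(self, launch):
        # Manipulating the object runs its creating application as a
        # separate process (modelled here as calling a launcher function).
        return launch(self.creator_app, self.data)
```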

Advantages and Disadvantages of Graphical User Interfaces
Overall, the advantages of GUI's are:

The disadvantages of GUI's are:

In Trinidad and Tobago, there are a number of factors affecting the local development of software and hence the local design of user interfaces. These are:
  1. The status of the local software industry
  2. The training and experience levels of local programmers
  3. The absence of appropriate laws governing the software industry
Most commercial enterprises in Trinidad and Tobago that use computers usually prefer to use standard software packages as opposed to developing custom-made software. The primary reason for this preference is the lower cost involved in purchasing a general purpose package. If any development has to be done, it is usually done in-house, and that company then claims the rights to that piece of software. As a result there is no significant interest by local software professionals in developing general purpose applications for the market. In general, the local demand for software is not large enough to make local software development viable. Local software development would be economical only if access to the international market can be obtained. [12]

In Trinidad and Tobago, the majority of software development is done in-house. The applications developed are intended for use by a relatively small number of people who are subsequently trained to use these applications. The design of the user interface is done by programmers in an ad-hoc manner and focuses on functionality rather than usability. Generally, structured design methods, such as prototyping, which have been previously discussed are not used. One reason for this is attributable to the lack of experience and training of programmers in the field of user interface design. A common practice adopted by local programmers is to model their user interfaces after those found in general purpose software packages. Development tools such as screen and menu builders are usually used in the actual generation of the source code for the user interface. [9]

There are currently no laws in Trinidad and Tobago to guard against software piracy. Presently it is only illegal to copy locally made software for the purpose of re-sale. However, it is not illegal to copy software for the purpose of non-profit distribution. This lack of protection discourages any serious effort on the part of local software developers. [9]

6.1 Early User Interfaces

The command line interface was the first interactive user interface. It is derived from the teletypewriters (TTY's) that were used to communicate with mainframes. TTY's were notoriously prone to bottlenecks since commands were sent to the computer over relatively slow serial communication links and, once there, had to be decoded. Thus, to minimize the bottleneck, commands had to be short, which led to very cryptic commands since every keystroke counted. Output was also limited since it was generated typewriter style, one character at a time. [5]

This changed when the TTY gave way to the video display terminal (VDT). VDTs allowed a cursor to be located anywhere on the screen, which in turn allowed information to be printed anywhere on the screen. With this capability, a user could, with a few special keystrokes, enter information anywhere on the screen (within limits), and could go back and update the information or correct mistakes. The electronic VDT gave tremendous advantages over the paper-based TTY.

With the increasing power of computers, along with improved video display technology, the bottleneck associated with TTY's was eliminated, which helped make graphics not only possible but practical. Practical graphics allowed for an evolutionary change in user interfaces: the graphical user interface (GUI). GUIs became popular on microcomputers, whereas mainframes, which had previously used TTY's, retained the command line interface (CLI). GUIs come in many different styles, but they have several features in common, namely: movable, scaleable windows; icons representing various things; menus; and the use of the mouse as a pointing device.

6.2 The Introduction of Graphical User Interfaces

The first popular GUI was the Apple Macintosh user interface, which also served as the Mac's operating system (OS). It evolved from an earlier GUI found on the Lisa computer (also made by Apple), which did not survive. Some advanced users did not like the new interface because they preferred the CLI; however, it appealed to casual and inexperienced users because they could interact more easily with the computer.

After the debut of the Mac OS, a number of other GUIs appeared on the scene. Commodore Business Machines followed Apple's example with the Amiga, whose GUI was likewise both user interface and operating system. Since these systems were designed from the start to be graphical, they were relatively easy to implement. GUIs on other platforms, primarily UNIX machines and IBM Personal Computers (PCs) and compatibles, did not have it so easy: these platforms had a long tradition of command line interfaces (CLIs), and the switch to a graphical system caused some problems.

The fundamental problem in implementing a PC GUI was that the operating system, DOS, did not possess many of the necessary building blocks. These resources had to be created and then piled on top of DOS, so PC GUIs tended to be slow and memory intensive. Despite these problems there were a number of successful implementations, chief among them Microsoft's Windows, Quarterdeck's DESQview, and Digital Research Inc.'s GEM.

These GUIs soon ran afoul of legal trouble from Apple over the so-called "look and feel" of their GUIs. As a result, many cosmetic changes affecting how the GUIs looked and how users interacted with them had to be made. Microsoft solved its problems by signing a licensing agreement with Apple which gave it access to some of Apple's technology. A further problem with GUIs is that applications written for them may not be consistent, i.e. they do not all operate in a similar manner. Apple dealt with this by providing strict guidelines which had, in practice, the force of law. Makers of PC GUIs did not have such stringent guidelines, and only Microsoft had anything close to Apple's rules. This lack of strictly applied guidelines not only contributed to the PC's tradition of incompatibilities, but also made it possible for some programmers to substitute a windowing environment for good, user-friendly program design.

UNIX machines, like DOS machines, were character oriented and so were lacking in the necessary GUI services. They also had a problem which brought back the concerns of the TTY era: UNIX systems allowed displays to be located at any distance from the CPU, which raised the problem of communications bottlenecks. Despite this, GUIs became available for UNIX machines. The most widely used and accepted solution was the X Window System developed at MIT. X Window is not itself a GUI but rather a means by which GUIs can be developed: it provides for the representation of graphical displays as well as for sending information about displays, keypresses, mouse commands, etc. between X Window systems.
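
The separation X Window introduces can be sketched as a toy client/server exchange (all names are made up for illustration; this is not the real X protocol): the client sends drawing requests as messages to the display server, and the server queues up input events such as keypresses for the client to collect.

```python
# Toy sketch of the X Window idea (NOT the real X protocol):
# the display server owns the screen and input devices; clients
# talk to it purely through messages, so they can run anywhere.

from collections import deque

class DisplayServer:
    def __init__(self):
        self.windows = {}
        self.events = deque()

    def handle(self, request):
        kind = request["type"]
        if kind == "create_window":
            self.windows[request["id"]] = {"title": request["title"]}
            return {"ok": True}
        if kind == "draw_text":
            self.windows[request["id"]]["text"] = request["text"]
            return {"ok": True}
        return {"ok": False}

    def key_pressed(self, key):               # hardware side
        self.events.append({"type": "key", "key": key})

class Client:
    def __init__(self, server):
        self.server = server                  # stands in for a network link

    def request(self, **req):
        return self.server.handle(req)

    def next_event(self):
        return self.server.events.popleft()

server = DisplayServer()
app = Client(server)
app.request(type="create_window", id=1, title="xterm")
app.request(type="draw_text", id=1, text="hello")
server.key_pressed("q")
print(app.next_event())   # {'type': 'key', 'key': 'q'}
```

Because the client never touches the screen directly, the "link" object could just as well be a slow serial line to a distant machine, which is both the strength of the design and the source of the bottleneck concern mentioned above.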

The major contenders in X Window based GUIs are Open Look by UNIX International and Motif by the Open Software Foundation (OSF). Open Look was originally designed by Sun, working closely with AT&T, to be the GUI for UNIX System V Release 4 (SVR4). It has some problems, however, in that it is not fully X Window compatible since it relies heavily on the Sun architecture. While Open Look has some similarities to other GUIs, there are a number of differences, such as being able to hold a menu on screen with a pushpin and the way in which the mouse (a three-button model) is used.

6.3 Reconceptualising the Graphical User Interface

Users today are clamoring for more functionality from their GUIs. They are calling for GUIs that offer such features as Plug and Play (being able to plug in a peripheral device and have it work, without the suffering usually associated with such an undertaking), increased responsiveness, stability, multitasking, and multimedia.

Satisfying these needs must not be merely a matter of adding on features; it will require a reconceptualisation of the GUI. Despite their flashy dressing, most GUIs are basically copies of the original Mac user interface. Because of machine constraints, the early Mac designers were forced to make a number of compromises, and in copying the Mac interface, other companies also copied the restrictions that were placed on the early Mac (a 128K machine with a single small disk drive). On that machine it was impossible to have both the program disk and the data disk in the computer at the same time, so the user had to make explicit saves rather than let the computer do it automatically. Now, despite computers with massive hard drives and plenty of memory, windowing systems still require the user to make explicit saves. [1]
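
The point about explicit saves can be shown with a toy document model (hypothetical, for contrast only): once disk space is no longer the constraint, nothing prevents the system from persisting every change automatically.

```python
# Toy contrast between explicit-save and autosave document models
# (illustrative only; no real windowing system is modelled here).

class ExplicitSaveDoc:
    def __init__(self):
        self.buffer, self.disk = "", ""

    def edit(self, text):
        self.buffer += text        # change lives only in memory

    def save(self):
        self.disk = self.buffer    # user must remember to do this

class AutoSaveDoc:
    def __init__(self):
        self.buffer, self.disk = "", ""

    def edit(self, text):
        self.buffer += text
        self.disk = self.buffer    # every edit persisted immediately

a, b = ExplicitSaveDoc(), AutoSaveDoc()
a.edit("draft"); b.edit("draft")
print(repr(a.disk))   # ''      -- lost if the machine crashes now
print(repr(b.disk))   # 'draft'
```

The explicit-save model pushes a bookkeeping chore onto the user that only ever existed because of the single-drive Mac; the autosave model removes it entirely.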

Yet another reason for reconceptualising the GUI is that GUIs have not significantly advanced the state of computing since they were first introduced. The concepts upon which they are based originated in the seventies at Xerox and culminated in the Xerox Star, the first machine with a graphically oriented interface and the inspiration for Apple's Lisa and, later, the Macintosh. The original designers came up with the concept of icons and the desktop metaphor, but they saw this as merely a starting point for advances in user interfaces. [1]

With the advances that are being made today, it would seem that their original goal is finally being pursued. The harbinger of change seems to be the development of multimedia.

Multimedia is practical only in the environment provided by a GUI. Consumer demand for multimedia has therefore driven advances in hardware and software so that GUIs can provide multimedia of high quality. Beyond spurring these developments, there is also the possibility of creating a multimedia user interface. For example, a person may walk up to an ATM which, using visual recognition technologies, recognizes the customer and asks what transaction they would like to perform. The customer then says: "I would like to withdraw $200.00 from my account". The ATM, using speech recognition technology, understands the customer's request and carries it out. This example is only the beginning of what could be done with this type of technology.
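
Only the last step of that ATM scenario, turning an already-recognized utterance into a transaction, is simple enough to sketch; real speech and visual recognition are far harder problems. The parser below is a hypothetical illustration, not part of any actual ATM system.

```python
# Toy sketch of the final step of the ATM example: mapping an
# already-recognized utterance to a transaction. (Real speech and
# visual recognition are far harder; this only parses text.)

import re

def parse_request(utterance):
    m = re.search(r"(withdraw|deposit)\s+\$?(\d+(?:\.\d{2})?)",
                  utterance.lower())
    if not m:
        return None
    return {"action": m.group(1), "amount": float(m.group(2))}

req = parse_request("I would like to withdraw $200.00 from my account")
print(req)   # {'action': 'withdraw', 'amount': 200.0}
```

A request the parser cannot understand yields `None`, at which point a real system would have to ask the customer to repeat or rephrase.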

Another step in the development of the user interface is the object oriented interface. This is currently exemplified by an application called Magic Cap by General Magic. At present it is available only on the Sony Magic Link Personal Digital Assistant (PDA), but it will soon be available for desktop systems. [4]

Magic Cap is an object oriented, multitasking environment which takes the desktop metaphor beyond its traditional boundaries. At the highest conceptual level are streets with buildings representing applications and the like; to invoke an application you go to the appropriate building. At the next level down is a corridor with rooms branching off. In a room one might see a desktop with objects such as a calculator or calendar resting on top. These objects are themselves programs and can be launched by clicking on them. Other rooms have different layouts and allow different actions. New software may appear as a new object in a room, a new room off the corridor, or even a new building on the streets.
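
The spatial hierarchy described above amounts to nested containers that the user navigates one level at a time. A minimal sketch (names invented; the real product is not being reimplemented here):

```python
# Toy model of Magic Cap's spatial hierarchy: streets contain
# buildings, buildings contain rooms, rooms contain objects.
# All names here are made up for illustration.

class Place:
    def __init__(self, name, contents=None):
        self.name = name
        self.contents = {p.name: p for p in (contents or [])}

    def enter(self, name):
        return self.contents[name]    # navigate one level down

calculator = Place("calculator")
office = Place("office", [calculator])
hallway = Place("hallway", [office])
street = Place("street", [Place("post office"), hallway])

# Walking from the street into a room to reach an object:
obj = street.enter("hallway").enter("office").enter("calculator")
print(obj.name)   # calculator
```

Adding new software is then just inserting a new `Place` at the appropriate level, which mirrors the text's point that new programs can appear as a new object, room, or building.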

The object oriented nature of Magic Cap allows a high degree of customization through the use of moveable objects called "stamps". For example, one could have a stamp of a large pair of lips which could be stuck anywhere and, when clicked on, plays back a pre-recorded voice message. Another use of stamps is in E-Mail: stamps could be created with the names and locations of people and stored in a rolodex. When an E-Mail message is ready to be sent, the stamp bearing the appropriate name(s) and location(s) is copied onto it, thereby mailing it to the people concerned. The stamp metaphor provides an easy way to make changes to Magic Cap's virtual world: all it takes to change things to suit oneself is to apply the proper stamps in the right places. [4]
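
A stamp, in this description, is just a small moveable object carrying a behaviour that fires when it is clicked. A hypothetical sketch of the idea (none of these class or method names come from the actual product):

```python
# Toy sketch of the "stamp" idea: small moveable objects carrying
# a behaviour that runs when clicked. Names are illustrative.

class Stamp:
    def __init__(self, label, on_click):
        self.label = label
        self.on_click = on_click

class Surface:
    def __init__(self):
        self.stamps = []

    def apply(self, stamp):
        self.stamps.append(stamp)      # stick the stamp anywhere

    def click(self, label):
        for s in self.stamps:
            if s.label == label:
                return s.on_click()

# A "lips" stamp that plays back a recorded message when clicked:
message = Surface()
message.apply(Stamp("lips", lambda: "playing: 'call me back!'"))

# An address stamp on an e-mail: clicking it yields the recipient.
message.apply(Stamp("to:Maria", lambda: "maria@example.com"))

print(message.click("lips"))       # playing: 'call me back!'
print(message.click("to:Maria"))   # maria@example.com
```

Customization then reduces to applying the right stamps to the right surfaces, exactly as the text describes.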

6.4 Future Enhancements to the User Interface

6.4.1 Standardization Issues

The software market is currently dominated by a large number of competing operating systems and applications from different companies. Generally, these products have dissimilar user interfaces. For example, the same function key may perform different functions in applications from different companies. This can even occur in different applications from the same company. The frustration which many users experience from using inconsistent and incompatible applications has led to increasing activity in the area of user interface standards. A design standard is a series of generally stated recommendations for user interfaces, with examples [13]. Standardization occurs when standards are adopted and put into effect. Employer interest in increasing worker productivity has also given impetus to the argument for user interface standardization. The major organizations involved in producing and revising standards for user interfaces are the International Standards Organization (ISO) and the American National Standards Institute (ANSI). Among the benefits which can be expected from standardization are more consistent and compatible applications, reduced learning time for users moving between applications, and improved productivity.

The major drawback of standardization is its potential to limit innovation in user interface design. Software developers may feel restricted to the specifications in the standard, or may come to rely on it so heavily that innovative ideas are precluded.

6.4.2 Agent Software

Another advance that will have significant impact is the development of agent technology. Agents are intelligent programs able to perform many tasks that conventional programs simply could not handle, or that were very complicated and time consuming [16]. Despite this great potential, users will initially see agents simply as making life easier. Agents will allow users to communicate better with their computer system as well as with other external entities. Agents are generally proactive rather than reactive: they can be event driven rather than sequential like other programs.
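
The event-driven style attributed to agents can be sketched minimally (an illustrative toy, not any real agent framework): the agent registers interest in events and reacts when they occur, instead of waiting to be invoked step by step.

```python
# Minimal sketch of the event-driven (proactive) style attributed
# to agents, versus a sequential program that only acts when asked.

class Agent:
    def __init__(self):
        self.handlers = {}
        self.log = []

    def on(self, event, action):
        self.handlers[event] = action      # subscribe to an event

    def notify(self, event, payload):
        if event in self.handlers:         # react without being asked
            self.log.append(self.handlers[event](payload))

agent = Agent()
agent.on("mail_arrived", lambda m: f"filed '{m}' into inbox")
agent.on("disk_low", lambda pct: f"warned user: disk {pct}% full")

agent.notify("mail_arrived", "meeting notes")
agent.notify("disk_low", 95)
print(agent.log[0])   # filed 'meeting notes' into inbox
print(agent.log[1])   # warned user: disk 95% full
```

The user never issues a command here; the agent acts as events arrive, which is the sense in which the text calls agents proactive rather than reactive.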

There are three main types of agents: advisory, assistant, and Internet. Advisory agents do not carry out tasks but rather aid users in whatever they may be doing. They learn how you work and can provide advice either on request or when they think you may need it. In time it may even become possible to automate a task, since the agent will have learned to carry it out the way the user did previously.
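
One way such learning could work, sketched here purely as an illustration (no real advisory agent is this simple), is to count which action the user takes after each situation and offer advice only once a habit is clear:

```python
# Toy advisory agent: it watches which action the user takes after
# each situation, and once a habit is established it can offer
# advice (or, eventually, automate the step). Purely illustrative.

from collections import Counter, defaultdict

class Advisor:
    def __init__(self):
        self.history = defaultdict(Counter)

    def observe(self, situation, action):
        self.history[situation][action] += 1   # learn how you work

    def advise(self, situation, min_count=3):
        if not self.history[situation]:
            return None
        action, count = self.history[situation].most_common(1)[0]
        return action if count >= min_count else None

a = Advisor()
for _ in range(3):
    a.observe("opened spreadsheet", "loaded budget template")
print(a.advise("opened spreadsheet"))   # loaded budget template
print(a.advise("opened mail"))          # None
```

The same record of habitual actions is what would let the agent later automate the step outright, as the paragraph suggests.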

Assistant agents are more ambitious in their tasks than advisory agents. They are often required to work with minimal knowledge and user feedback, and are capable of actively going out to retrieve information or perform other similar tasks. Unlike advisory agents, these agents must be more sophisticated because of the increased responsibility they are required to shoulder.

Internet agents are similar to assistant agents, differing in that they are used primarily to seek out and retrieve information on the Internet. As a result they are even more intrusive than assistant agents and need to be somewhat more sophisticated.

For all their usefulness in aiding human-computer interaction, agents have some problems, chief of which are security and stifled creativity. Most agents are intrusive, which raises the question: 'Is that agent really an agent, and if it is, do I want it roaming around in my system?'. This is important since viruses can act in a manner similar to agents, and no one wants a virus entering their system if they can help it. Some people may simply not want an agent entering their system against their wishes. Agents may also stifle creativity, because they could take on more and more of the user's responsibilities, leaving the user with less and less to do. Should all the bugs be worked out of agents, people may be able to say that the computer is doing what they want, rather than merely what they say, as is often the case now.

6.4.3 Componentware

Another step forward for user interface design is the concept of componentware. Instead of buying memory- and resource-hungry applications full of features they will probably never use, users could purchase smaller modules, or components, and build applications which satisfy their needs without overloading them with things they do not need.
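
The assembly idea can be sketched as components sharing one small interface and being composed into exactly the application the user wants (all names below are invented for illustration):

```python
# Toy sketch of the componentware idea: small single-purpose
# components sharing one interface, assembled into just the
# application the user needs. All names are illustrative.

class Component:
    name = "component"
    def run(self, text):
        return text

class SpellCheck(Component):
    name = "spellcheck"
    def run(self, text):
        return text.replace("teh", "the")   # toy "correction"

class WordCount(Component):
    name = "wordcount"
    def run(self, text):
        print(f"{len(text.split())} words")
        return text

def assemble(*components):            # buy only what you need
    def application(text):
        for c in components:
            text = c.run(text)
        return text
    return application

editor = assemble(SpellCheck(), WordCount())
print(editor("teh quick brown fox"))
```

A user who never counts words simply leaves `WordCount` out of the assembly; nothing unused is loaded, which is the resource argument the paragraph makes.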

Several approaches to componentware have been made by various companies, chief of these being Object Linking and Embedding (OLE) by Microsoft, and OpenDoc and Taligent by a consortium of companies. Despite their shared objective, OLE and OpenDoc have a number of differences. OLE does not handle multitasking and is less functional: it is restricted to rectangular areas of the screen and is limited to one level of nesting. OpenDoc, on the other hand, supports multitasking and event handling, and its functionality is greater than OLE's. OLE, however, is available right now, whereas OpenDoc and its companion Taligent are still in the design stages. [4]
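
The nesting difference can be modelled with a toy compound document (this sketches only the restriction itself, not either real technology): a one-level limit in the OLE style versus arbitrary nesting in the OpenDoc style.

```python
# Toy compound document illustrating the nesting difference the
# text describes: a one-level limit (OLE-style) versus arbitrary
# nesting (OpenDoc-style). Only the restriction is modelled.

class Part:
    def __init__(self, kind):
        self.kind = kind
        self.children = []

    def embed(self, child):
        self.children.append(child)
        return child

    def depth(self):
        return 1 + max((c.depth() for c in self.children), default=0)

def within_limit(document, limit):
    # How many levels of embedded parts does the document use?
    return document.depth() - 1 <= limit

doc = Part("text document")
chart = doc.embed(Part("chart"))
chart.embed(Part("spreadsheet range"))     # a part inside a part

print(within_limit(doc, limit=1))   # False -- exceeds a one-level limit
print(within_limit(doc, limit=99))  # True  -- fine with deep nesting
```

A chart embedded in a document can itself contain an embedded part only under the deeper-nesting model, which is one concrete sense in which the text calls OpenDoc more functional.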

With all the advances being made nowadays, the future of user interfaces seems bright. Regardless of which group's approach wins out, the face of user interfaces (as well as the whole computer industry) will never be the same again.

The design of a user interface must focus on the needs of the user rather than the needs of the application. This can only be achieved by incorporating feedback from the user in the design process. Apart from simply carrying out its basic functions, a user interface must interact with the user in a manner which is as user-friendly as possible. Users should also feel that they are in control of the system.

No particular type of user interface is optimal for performing all possible tasks. Each type of interface is best suited to a particular sub-set of tasks; for example, command-driven interfaces are more appropriate for system maintenance tasks than other types of interfaces. The designer must determine which type of interface is most appropriate for the type of application.

The user interface consists of various elements, and design of the user interface must take into consideration proper design of each of them. The user interface designer must find a balance between the sometimes conflicting attributes of efficiency, usability and learnability, which must all be present in the interface.

Glossary of Terms

Application Generator:
A development tool which generates an application from a design constructed by the user.

Bottleneck:
A delay caused by an attempt to transmit a large number of messages via a single communication line.

Database Management Systems:
A system that allows a systematic approach to the retrieval and storage of data in a computer system, often coordinating data from a number of database files.

End Users:
People who interact with a particular system on a regular basis.

Fourth Generation Language (4GL):
A language designed for a particular class of application, usually database management. The objective of these languages is rapid development of applications through relatively easy programming.

Hypertext:
The use of links within a document which serve as reference points to other documents of the same type.

Macro:
A sequence of instructions that can be executed by one command or a special keystroke.

Mnemonic:
The name given to an abbreviated form of a machine instruction.

Multitasking:
The simulation of simultaneous execution of multiple tasks by the computer.

Operating System:
A program or suite of programs that control the entire operation of the computer.

Screen Generator:
A utility that generates the code required to display a screen.

User Interface Management System:
These provide a consistent user interface for any number of different applications within the same system.


References

  1. ACM SIGCHI (1993). INTER-CHI '93 - Building bridges between worlds. ACM Special Interest Group Computer-Human Interaction conference on human-computer interfaces in Amsterdam, The Netherlands.
  2. Collier's Encyclopedia. (1980). Vol.7 p6-7. USA: MacMillan Education Co.
  3. Flaatten, P.O., McCubbrey, D.J., O'Riordan, P.D., Burgess, K. (1992). Foundations of Business Systems. USA: The Dryden Press.
  4. Halfhill, T. (1994). Apple's high-tech gamble. Byte Magazine. USA: McGraw-Hill. Dec. p50-70.
  5. Hayes, F. (1990). From TTY to VUI. Byte Magazine. USA: McGraw-Hill. April. p205-160.
  6. IBM (1989). Common user access advanced interface design guide. Systems application architecture. SC26-4582-0.
  7. Indermaur, K. (1995). Baby steps. Byte Magazine. USA: McGraw-Hill. Feb. p97-104.
  8. International Standards Organization. (1994). ISO document 9241- Ergonomic requirements for office work with visual display terminals.
  9. Kalicharan, N. Interview with computer science lecturer. St. Augustine, Trinidad, 7th March, 1995.
  10. Miastkowski, S. (1994). IBM engages warp drive for OS/2. Byte Magazine. USA: McGraw-Hill. Nov. p138-139.
  11. Microsoft Corp. (1992). Visual design guides. VisualBasic 2.0 Design Manual. USA: Microsoft Press.
  12. Mitchell, L. Interview with software developer. Port-of-Spain, Trinidad, 1st March, 1995.
  13. Reed, P. (1994). ANSI/HFES Software user interface standardization: Critical Issues. SIGCHI Bulletin 26 p12-14.
  14. Sommerville, I. (1992). Software Engineering. USA: Addison-Wesley Publishing Co.
  15. Udell, T. (1994). Exploring Chicago and Daytona. Byte Magazine. USA: McGraw-Hill. Nov. p132-146.
  16. Wagner, P. (1995). Agent Enhanced Communication. Byte Magazine. USA: McGraw-Hill. Feb. p103-104.
  17. Yager, T. (1991). Information's human dimension. Byte Magazine. USA: McGraw-Hill. Dec. p153-160.

Written by:
Marlon Daniel, Maurice Phillip, Marlon Thomas

1995. Trademarks and products are registered to their respective owners.


Last updated on: Friday, June 4, 1999