The design of the user interface requires consideration of various psychological aspects of human
behaviour. This report discusses the psychological effects of interface components such as colour and
visual objects. Other aspects which must be considered are the different levels of proficiency users
have. An interface must also be built to suit the system for which it was developed; it should not
complicate the achievement of simple tasks, since this affects users negatively.
This report also covers the characteristics of a good interface. It addresses issues of consistency, appropriateness of design and transparency of different types of user interfaces. Design guides for specific types of interfaces, as well as general guides for elements of the interface (for example forms, reports and dialogues), are discussed. A discussion of some of the consequences of bad user interface design is also included; it shows how a badly designed user interface can lead to productivity losses. The methods and procedures used in the actual design of a user interface are covered, in the context of both general-purpose and custom-made software. Here the role of prototyping and the use of development tools for generating user interface screens are covered.
This report also covers information about user interface design in Trinidad and Tobago, obtained by interviewing professionals in the field of computer science. In Trinidad and Tobago, the majority of new software development is done for specialized software applications using fourth-generation languages. The options for the user interface are formulated on the basis of the requirements obtained from the user at the analysis stage. As a result of the nature of the software industry in Trinidad and Tobago, most user interfaces are developed in-house using screen and menu builders. Newer methods of formulating the user interface requirements, such as prototyping, are not in general use.
User interfaces have evolved over time from the rudimentary command interpreter to the complex Graphical User Interfaces (GUIs) available on systems today. The developers of today's main User Interface Management Systems (UIMSs) have recently released new versions of their products which feature major changes in the user interfaces. With the improvements being made to audio and visual technologies, multimedia applications are becoming more common. Multimedia has had an effect on user interface development, since interfaces incorporating sound and animation as standard features are now being developed. This report discusses some of the design methodologies adopted by developers of UIMSs. The impact multimedia has had on user interface design is also discussed, as well as what is expected from future user interface designs.
2.1 Definition of a User Interface
The user interface of a computer system is the component of the system which facilitates interaction between the user and the system. Thus, the user interface must enable two-way communication by providing feedback to the user, as well as functions for entering data needed by the system.
2.2 The Need for a User Interface
When the first computers were introduced in the 1950s, the only people who interacted with computers on a regular basis were highly-trained engineers and scientists in research facilities. The cost and size of these computers made their wide-spread use impractical. At this time, communicating with the computer was a very complex task which required a detailed knowledge of the computer's hardware.
Advances made in technology allowed computers to be made smaller and affordable. As a result of this, and the increase in productivity gained by computers, their use became more widespread. With various people from diverse backgrounds now using computers in everyday life, came the need for a user-friendly interface through which the average person could interact productively with a computer system. This led to the development of various types of user interfaces which catered for different types of users.
2.3 Types of User Interfaces
The major types of user interfaces are:
i.) Command-driven interfaces require the user to type commands in a prescribed syntax. They allow fast interaction with the computer and simplify the input of complex requests, but the user must first learn the command language.
ii.) Menu-driven interfaces provide the user with a list of options and a simple method of selecting between them. Such a method may involve entering a single letter or a number which represents the option. Examples of various types of menus include bar menus and pull-down menus.
iii.) Direct manipulation interfaces (DMIs) present users with a model of their information space which users can manipulate by direct action. Because information is manipulated directly, it is not necessary to issue explicit commands to modify it. The Graphical User Interface (GUI) is the most popular implementation of a DMI. This type of interface uses visual objects to implement its model, and the user manipulates these objects via a mouse or another pointing device. User Interface Management Systems (UIMSs) are implemented mainly as GUIs so that the interface governs the entire system and not just a single application. GUIs are further discussed in section 4.3.3 later in this report.
iv.) Special purpose interfaces are those which are used to control an embedded computer system (for example, an automatic bank machine). Such interfaces also control systems which combine the use of a general-purpose computer with special hardware and software for implementing the user interface.
3.1 The Power of Visual Communication
What people see influences how they feel and what they understand. Visual information communicates non-verbally but very powerfully. This can be attributed to the emotional cues contained in visuals that motivate, direct or distract. This is shown by the way people tend to describe graphic information with adjectives like "fresh", "pretty", "boring", "conservative" and "wild". The advertising industry has taken advantage of this phenomenon for almost as long as publications have existed.
A study conducted in the early stages of the Macintosh development compared a set of tasks performed on both the Lisa and an MS-DOS based computer [11]. These tasks were actually more complicated on the Lisa, but the subjects in the test perceived them as easier because the graphical interface made the tasks more fun. This is only one example of how visuals can motivate people.
3.2 Effects of Colour and Visual Objects
The retina of the human eye contains special cones which respond to stimulation by one of three primary colours, red, green, or blue-violet. Mixing of these colours and variations of their intensities can produce many other colours visible to the human eye. Modern colour photography and colour display technology use this same principle of mixing three basic colours to produce all other visible colours.
Colour can be described as having three physical properties which are hue, saturation (or chroma) and brightness. Hue is the name of a colour, saturation is its intensity and brightness is where it would fall on a scale of dark to light.
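The three properties above correspond to the HSV colour model, which can be sketched with Python's standard colorsys module (colorsys calls brightness "value"). The describe helper and its rounding are invented for illustration.

```python
import colorsys

def describe(r, g, b):
    """Report the hue, saturation and brightness of an RGB colour.

    RGB components are on a 0.0-1.0 scale; hue is reported in degrees.
    """
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return {"hue": round(h * 360), "saturation": round(s, 2), "brightness": round(v, 2)}

# Pure red: hue 0 degrees, fully saturated, fully bright.
print(describe(1.0, 0.0, 0.0))
# Mid grey: no saturation (no identifiable hue), medium brightness.
print(describe(0.5, 0.5, 0.5))
```

Note how grey has zero saturation: brightness alone places it on the dark-to-light scale, which is why lightness contrast (rather than hue) is what makes an element stand out, as discussed below.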
Colour has emotional properties that help to mould a person's opinion of something visual. Colours can be arranged to produce a harmonious effect, or they can be arranged to produce an unpleasant one. There are many theories explaining both the pleasing, harmonious arrangements of colours and those combinations which clash, although such rules are always subject to an individual's personal taste. Some of the most respected rules are described here. To make an element in a design stand out from its surroundings, a colour that is definitely lighter or darker than the surroundings should be chosen. Pleasing designs can be made of colours of the same hue, of definitely different but neighbouring hues, or of complementary hues. Designs involving two colours with opponent hues, for example red and green, should be avoided because they appear to vibrate as the eye tries to focus on them. The use of bright colours over large areas also produces an unpleasant effect, since it tends to leave opponent after-images on the retina [2].
Proper use of colour improves learning; this has been shown in various psychological tests [11]. If colours are well chosen and used in computer applications, they improve the marketability of products and give an impression of friendliness. They also help reduce the learning curve for these applications. If they are poorly chosen, they can severely affect usability and create a circus-like appearance that can confuse and irritate users.
A metaphor, or analogy, relates two otherwise unrelated things. Metaphors are used in applications to develop the user's conceptual image or model of an application. Using metaphors that are familiar and real-world based allows users to transfer previous knowledge of their work environment to a particular application interface. The best-known metaphor is the desktop metaphor, where the screen represents a desktop and system entities are represented by folders on that desktop.
Visual objects can be used to implement a metaphor. A visual object is simply a representation (either verbally described or drawn) of a system entity or an action that can be performed. For example, in the desktop metaphor a folder is an object representing a particular file. Visual objects that are represented pictorially are called icons; the folder in the desktop metaphor is one example. Icons allow users to easily identify different applications, the files associated with them, or system components.
Visual objects that are three-dimensional and animated also have a very powerful effect on users. Objects that animate and perform a particular action after being acted on give the user a feeling of total control over that action. An example of such an object is the button. If a user clicks on a button via a mouse or another pointing device, he expects that button to be pushed in; if the button does not respond as expected, the user thinks that something is wrong. The animation of a button being depressed reassures the user that the system is functioning properly. The user gains a feeling of control in knowing that pressing a single button or a combination of buttons results in a particular action being carried out by the computer.
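The press-and-release feedback described above can be sketched as a small state machine. This is a minimal illustration, not any particular toolkit's API; the Button class, its state names and the log are invented for the example.

```python
class Button:
    """A sketch of a push button that gives visual feedback when clicked."""

    def __init__(self, label, action):
        self.label = label
        self.action = action      # the function the button carries out
        self.state = "up"         # visual state shown to the user
        self.log = []             # record of feedback events, for illustration

    def click(self):
        # Visual cue: the button appears pushed in, reassuring the user
        # that the system registered the click.
        self.state = "down"
        self.log.append(f"{self.label} pressed")
        result = self.action()    # carry out the associated action
        self.state = "up"         # the button pops back up afterwards
        return result

ok = Button("OK", lambda: "form accepted")
print(ok.click())   # the action's result
print(ok.state)     # back in the "up" state once the action completes
```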
Overall, the advantages of using metaphors and colour in user interfaces are a reduction in learning time for an application, motivation of users to use the application and increased user confidence.
3.3 Types of Users and User Preferences
Many people use computers for many different reasons. These people range from those with little or no computer skills to those with extensive computer knowledge. Some people who are new to computers are also afraid of using them, for a variety of reasons. Different types of interfaces are therefore needed to cater for all types of computer users. A computer user should be able to accomplish any required task effectively without having to worry about interface issues, since such distractions lead to a loss in productivity.
Finding an interface to cater for a particular user depends largely on that user's preferences. For example, an experienced user may prefer a command-driven interface whereas a less experienced user may prefer a GUI. Command-driven interfaces allow faster interaction with the computer and simplify the input of complex requests, which is why most experienced users prefer them. An inexperienced user, however, tends to prefer a GUI environment because it is easier to use and adapt to. Inexperienced users are also attracted to GUIs because of their use of colours and visual objects, which tend to hold their attention, and they may be overwhelmed by the syntax of the commands they would have to learn before they could use a command-driven interface.
4.1 Methods Used in Formulating the User Interface Design
There have generally been two approaches to formulating the usability requirements, or the tasks
which the user interface is expected to perform. These are:
4.1.1 Methodological approach
This approach emphasizes the use of good methods or tools in the design of the user interface. This includes the use of prototyping tools, iterative design techniques and empirical testing [1].
User Interface Prototyping
User interface prototyping involves the simulation of the proposed screen layouts and system
responses before the actual implementation. It is usually performed early in the design process and is
done regularly. This reduces uncertainty and risk regarding interface performance and ease-of-use.
This technique is primarily used in the design of interfaces for custom-made applications. Initially, in
the design phase of the system development life cycle, rudimentary prototypes produced on word
processors can be used, whereas subsequent prototypes could become progressively closer to the
final product. Final implementation of the user interface occurs late in the development cycle so that it
can receive the full benefit of the prototyping process.
User interface prototypes should display the proposed screens in the standard sequence in which they will appear. This is particularly effective for menu-driven interfaces and GUIs. If the interface is operator-driven, the prototype must also accept input from the user and simulate the appropriate functions when they are selected by the user. Refinement of the prototype depends on the usability problems encountered by the end-users. Based on feedback from the user, the prototype may be modified, totally re-designed or accepted.
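A throw-away prototype of the kind described above can be as simple as a table of proposed screens with canned responses. The sketch below is invented for illustration (the menu options and messages are not from any real system); it only simulates what each selection would do, with no real processing behind it.

```python
# Proposed screens for the prototype: each option maps to a label and
# the (simulated) screen it would lead to. None marks an exit option.
SCREENS = {
    "main": {
        "1": ("Enter invoice", "invoice"),
        "2": ("Print report", "report"),
        "3": ("Exit", None),
    },
}

def simulate(choice, screen="main"):
    """Return the simulated system response for a menu choice."""
    options = SCREENS[screen]
    if choice not in options:
        # The prototype must also show how invalid input is handled.
        return "Invalid option - please choose 1, 2 or 3"
    label, next_screen = options[choice]
    if next_screen is None:
        return "Exiting prototype"
    return f"[simulated] {label} screen would appear here"

print(simulate("1"))
print(simulate("9"))
```

Feedback from users walking through such a prototype drives the refinement, redesign or acceptance decision described above.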
The tools which are used in user interface prototyping can either be specialized or general-purpose. The following are some of these tools:
General purpose tools:
Specialized tools:
There are specialized development aids which are designed specifically for user interface prototyping.
Examples of such tools are ProtoScreens and ADEPT (Advanced Design Environment for Prototyping
with Task Models). These tools usually generate the screen layouts, interface components and
dialogues based on characteristics specified by the designer. For example, ADEPT uses abstract
platform-independent models which provide the designer with a high-level specification of the
interactions required between the user and the system, to perform the proposed tasks. The designer
can then edit these models and translate them into 'concrete' models which contain detail-level
descriptions of the interface objects, their behaviour and the screen layout. These models can then be
implemented on any GUI platform [1].
Prototypes are usually discarded once they are no longer needed. However, if a specialized tool is used to create the prototype, it may be re-used in the actual implementation.
Iterative design
Iterative design techniques involve the production of components of the entire system in small
increments on a regular basis; small parts of the entire system may be delivered every
two to four weeks. This means that the user interface for each of these components must be
repeatedly improved until the final iteration is achieved. Like prototyping, iterative design depends on
feedback from the user when successive versions of the interface are formulated. When using an
iterative design methodology, these are some of the issues to be considered:
Usability Testing
This type of testing is used in conjunction with user interface prototyping and iterative design
techniques. Basically, it is the evaluation of the interface based on results of experiments involving
feedback from the test-users. There are two basic forms of usability testing:
4.1.2 Training Approach
This approach to user interface design relies on the training and knowledge possessed by the designer. Emphasis is placed on the formulation of a good interface design through the expertise of the person responsible for designing the interface. This training may include various fields such as educational psychology, instructional design, and ergonomics (the scientific study of human comfort). Traditionally, user interfaces have been designed by specialists in the area of computer science. At present, however, the increasing use of professionals trained in human factors shows that a fairly diverse sub-set of knowledge is necessary for a good interface design. This has been recognized by software industry leaders such as Microsoft, who now employ psychologists in the design of their user interfaces. [1]
These methods of formulating the usability requirements have certain drawbacks. Prototyping and iterative design depend on the quality of the initial design; if the first iteration or prototype is poor, a large number of successive iterations may be required to correct the design. Due to the lack of user feedback in the training approach, it is very difficult to produce a design which caters for the needs of a large number of users if this approach is used alone. As a result of these disadvantages, a combination of these approaches is sometimes used. Such a combination results in fewer iterations and prototypes, since the initial design will contain fewer errors and will solve a larger number of usability problems.
4.2 General Design Guides for User Interface Elements
4.2.1 General Principles of Good User Interface Design
User interface design must take into account the needs, experience and capabilities of the user. User interfaces should be designed so that useful interaction can be developed between the user and the system. The interface must be user friendly and must support the user through every stage of interaction. This can be accomplished by allowing users to develop a conceptual model of how an application should work. The user interface should confirm the conceptual model by providing the outcome users expect for any action. This occurs only when the application model is the same as the users' conceptual model.
Since there are different types of user interfaces, different guidelines will apply specifically to each
design. There are however some general principles which are applicable to all user interface designs
and they are listed as follows:
The interface should be user-driven
Often the goal of an application is to automate what was a paper process. With more people
beginning to use computers to do their work, an interface designer should try to make the transfer to
the computer simple and natural. Applications should be designed to allow users to apply their
previous real-world knowledge of the paper process to the application interface. The design can then
support the users' work environment and goals. Potential users should therefore be involved in the design
process of the user interface in an advisory capacity and their feedback should be incorporated in the user
interface at each stage of its design. [6]
The interface should be consistent
Interface consistency means that system commands and menus should have the same format,
parameters should be passed to all commands in the same way and command punctuation should be
similar. Consistent interfaces reduce learning time since knowledge gained in one command or
application can be applied to other parts of the system. [6]
Consistency throughout an application can be supported by establishing
the following:
Common presentation is concerned mainly with a common appearance of the interface. Users can become familiar with interface components when the visual appearance of these components is consistent and, in some cases, when the location of these components is consistent.
Common interaction deals with the interaction of the user with different interface components. After users can recognize interface components, they can interact with these components. Once interaction techniques associated with each component are consistently supported, users become familiar with these techniques.
A process sequence defines a series of steps to follow when a user wants to perform a particular type of action. A common process sequence will define steps for a particular action which must be supported by all applications in the system. When an application consistently supports a common process sequence, users become familiar with the way to interact with the application.
Common actions provide a language between users and the system so users can understand the meaning and result of actions. For example, when users select the OK action, they are telling the computer they have finished working with a particular entry, selection or window and want to continue with the application.
Interface consistency across applications is also important. Applications should be designed so that commands with similar meanings in different applications can be expressed in the same way.
The interface should avoid modes
Users are in a mode whenever they must cancel what they are doing before they can do something
else, or when the same action has different results in different situations. Modes force users to focus
on the way an application works instead of the task to be done. Modes therefore interfere with the
user's ability to use his or her conceptual model of how the application should work. It is not always
possible to design an application without modes; when used, however, they should be the exception
and limited to the smallest possible scope. Whenever a user is in a mode, it should be made obvious by
providing good visual cues. The method for ending modes should also be easy to learn and remember.
Some types of modes are, however, acceptable in a user interface. They are:
Sometimes an application needs information to continue, such as the name of a file into which the user wants to save something. When an error occurs, users may be required to perform some action before they can continue their task. The dialogues associated with these events are modal dialogues.
Users are in a spring-loaded mode when they continually take some action that keeps them in the mode. For example, users are in a spring-loaded mode when they drag the mouse with a mouse button pressed to highlight a portion of text. Here, the visual cue for the mode is the highlighting.
If users are in a drawing application, they must be able to choose a tool, such as a pencil or paintbrush, for drawing. After users select the tool, the mouse pointer shape may change to match the selected tool or the tool selection may remain highlighted. This type of mode is called a tool-driven mode because the selection of an application tool puts the user in a mode. Users are in a mode but they are not likely to be confused because the changed mouse pointer or highlighted selection is a constant reminder they are in a mode. [6]
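The tool-driven mode above can be sketched as a small state holder whose pointer shape is the visual cue for the active mode. The class, tool names and pointer shapes are invented for illustration and do not come from any real toolkit.

```python
# Pointer shape shown for each selected tool; None means no tool is
# selected, i.e. the interface is not in a mode.
POINTERS = {"pencil": "pencil-pointer", "paintbrush": "brush-pointer", None: "arrow"}

class DrawingSurface:
    """A sketch of a drawing area with a tool-driven mode."""

    def __init__(self):
        self.tool = None  # no tool selected: not in a mode

    def select_tool(self, tool):
        # Selecting a tool puts the user in a mode.
        self.tool = tool

    @property
    def pointer(self):
        # The changed pointer is the constant visual cue that a mode
        # is active, as the text above requires.
        return POINTERS[self.tool]

surface = DrawingSurface()
print(surface.pointer)          # arrow: no mode active
surface.select_tool("pencil")
print(surface.pointer)          # pencil-pointer: cue for the pencil mode
```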
The interface should be transparent
Users should not be made to focus on the mechanics of an application. A good user interface should
not bother the user with mechanics. Users view the computer as a tool for completing tasks and
should not have to know how an application works to get a task done.
A goal of user interface design is to make the user interaction with the computer as simple as possible.
A user interface should be so simple that users are not aware of the tools and mechanisms that make the application work. As applications become more complicated, users should be provided with a simple interface so that they can learn new applications easily.
An application should reflect a real-world model of the user's goals and the tasks necessary to reach those goals. The user interface should therefore be so intuitive that users can anticipate what to do next by applying their previous knowledge of doing tasks without a computer. One way to provide an intuitive user interface is through the metaphors discussed earlier in the section on the psychological aspects of interface design.
The interface should include error recovery mechanisms
Users inevitably make mistakes when using a system. The interface design can minimize these
mistakes (for example using menus eliminates typing mistakes) but mistakes can never be completely
eliminated. The interface should therefore provide facilities for recovering from these mistakes. These
can be of two kinds:
Principles that should be followed when designing messages of any type are:
4.2.2 Dialogue Design and Help Styles
A dialogue is defined as communication between two or more entities. Since the user interface deals with human-computer communication, the conversation which occurs in the user interface can reasonably be called a dialogue. It is therefore important to have a properly thought-out and designed dialogue. The designer must decide whether the dialogue is controlled mainly by the user or by the computer. The style of help available to the user must also be considered.
Types of Dialogues
There are two types of dialogues:
The most common forms of interaction which represent program-directed dialogues are:
Feedback provided by the dialogue should assist the user in gaining an understanding of the system so that their tasks are made easier. It should be limited in both scope and content to the action being carried out by the user. It should also minimize the user's need to consult user manuals or external sources of information, thereby avoiding frequent media switches and confusion. Finally, feedback should be self-explanatory; that is, the user should be able to tell what is being done rather than having to guess.
Suitability covers many areas. It deals with the fact that the user should only be able to receive information, and carry out tasks that are applicable to what they are currently doing. To this end, consideration should be given to the context, type, and scope of information to be presented to the user. Terminology used should be consistent and context based rather than dialogue based. Also, input to and output from the dialogue should fit the task at hand.
User control deals with the degree to which the dialogue permits the user to have control over itself. A properly designed dialogue should allow the user to have as much control as is feasible without burdening them with background activities not related to the user task. Whenever input is requested, the dialogue should give the user information about the expected input. The user should be supported when carrying out repetitive actions. If data changes while a task is being carried out, the original must remain accessible, should the user decide to restart for whatever reason.
The user should be able to control the speed at which they interact with the dialogue since they should work at a pace that they are comfortable with. If possible the dialogue should allow the user to control how they proceed within the dialogue, that is, if the task can be accomplished by a number of different ways, the user should be able to do so rather than be forced to conform to an order imposed on them by the dialogue designer. If the amount of data displayed can be controlled for a given task, the user should be able to do so in order to avoid information overload. If alternative dialogues or representations are available for performing a task, the user should be able to select the one that they are most comfortable with since this can improve their productivity.
The application should try as far as possible to prevent the user from making errors. Should any errors be made, they should be explained so that the user may deal with them properly. If the error can be dealt with by the dialogue, then the user must be so informed and then be given the option of letting the dialogue handle the error or override it and take care of the error themselves. If the dialogue has been interrupted for whatever reason, the user should be able to determine the point of restart when the dialogue is resumed, should the task permit. Error correction, wherever possible, should not change the status of the dialogue.
No one is capable of exploiting an application's capabilities the first time that they use it. There is a period of learning involved as the user becomes accustomed to the application and what it can do. A properly designed dialogue should not only allow for learning but support it as well. Dialogues used for similar purposes should be similar in appearance, to enable the user to develop common problem-solving procedures. Wherever possible, the user should be able to incorporate their own vocabulary in establishing an individual naming system for objects. If there are rules and underlying concepts which are useful for learning, these should be available to the user so that they can build up their own grouping strategies and rules for memorizing activities. Any relevant existing learning strategies should also be supported.
There should also be the provision for a user to relearn an application that they have not used in some time. In addition, there should be a number of alternatives to help the user familiarize himself with the dialogues so that the user can pick the one that they are most comfortable with.
The design process
There are a number of guidelines which should be followed when designing dialogues. The dialogue
system should try to cater to the expected type of user and the programmer should avoid the
"perpetual beginner" scenario when designing (with extensive guidelines and explicit instructions)
since this usually annoys more users than it can possibly help.
The dialogue, where possible, must support and enhance the user's perceptual model of the system since this often affects how they work. A consistent conceptual model will ensure that the user is not adversely affected by having to work in an environment to which they are not suited.
Also important is the degree of freedom given to the user by the dialogues. This depends on whether the dialogue is program or operator directed. For example, the dialogue depicted in figure 4.2.2.1 is an example of a program-directed dialogue which uses the form-filling metaphor. This dialogue is appropriate for the purpose of information collection, since it limits the scope of errors which can be made by the user.
The type of dialogue is dictated by the kind of use to which it will be put. Nothing could be worse for a user than to find themselves working in an environment where the dialogue is inappropriate since this can make their work difficult or even impossible.
When designing a dialogue there are certain elements that must be considered as well. These are: clarity, consistency, performance, and respect for the user [3]. Clarity of dialogue design means that the user is given a clear idea of what they are doing and what is going on without having any uncertainties. For example, the dialogue shown in figure 4.2.2.2 does not do a good job of communicating the nature and cause of the error to the user. In this case, the dialogue should explicitly tell the user what type of error occurred and what caused it.
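The clarity principle can be illustrated by contrasting a vague error message with one that names the offending input and the reason it was rejected. The functions, field name and wording below are invented for the example.

```python
def vague_error():
    """The kind of message the clarity principle rules out."""
    return "Error in input"

def clear_error(field, value, reason):
    """An error message that tells the user what went wrong and why."""
    return f"Cannot accept '{value}' for {field}: {reason}"

print(vague_error())
print(clear_error("Quantity", "-3",
                  "the quantity ordered must be a positive whole number"))
```

The second form leaves the user with no uncertainty about the nature or cause of the error, and so also supports the error-recovery principles discussed earlier.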
Help Styles
Since help facilities are an important part of a dialogue, it is worthwhile to look at the kinds of help
facilities available. There are three basic types of help: fixed help, context sensitive help, and data
dependent help. [3]
Fixed help is help that is invoked by pressing some sort of function key. This help appears as page after page of text and the user has to navigate using keys to page up and page down. After this help is terminated the user can return to the same point where they left off in the application.
Context sensitive help is more ambitious since it provides help based on what the user is doing thereby providing help that is more closely geared to the user. This type of help attempts to determine what the user was doing at the time of the request for help. As a result it would display a specific help screen for that action.
Data dependent help enables the user to position the cursor in a data entry field, press a help key and obtain information on what the acceptable forms of input are for that field.
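The dispatch behind data dependent help can be sketched in a few lines (a minimal Python illustration; the field names and help strings are invented). The help system simply looks up the field that currently holds the cursor:

```python
# Hypothetical help text for each data entry field on a form.
FIELD_HELP = {
    "date": "Enter the date as DD/MM/YY.",
    "amount": "Enter a positive amount in dollars, e.g. 125.50.",
}

def data_dependent_help(current_field):
    # Return the help text for the field the cursor is in,
    # falling back to a generic message for unknown fields.
    return FIELD_HELP.get(current_field, "No help is available for this field.")

print(data_dependent_help("date"))
```

Context sensitive help works the same way in principle, except that the lookup key is the user's current action rather than a data entry field.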
4.2.3 Report and Form Design
Input forms and output reports form part of the user interface since they facilitate communication between the users and the computer system. Their importance is diminishing due to the increasing use of electronic files to communicate transaction data to information systems, and on-line data entry to provide the required input to the system. However their use is still required for communication with external entities (for example invoices for customers) and for system audit purposes. Good design of these interface elements must consider the total costs and overall benefits of each document.
Input Form Design
The data required by an input form depends on the data requirements of the associated application.
The layout of the form should allow users to enter data in a sequence that is natural to them rather
than one which is required by the system. The following guidelines should be followed in the
design of input forms:
4.2.4 Heuristics in User Interfaces
Heuristics are problem-solving techniques in which the most appropriate solution is chosen using rules [13]. Interfaces using heuristics may perform different actions on different data given the same command. A simple example is the Microsoft Windows file manager, which moves a file when it is dragged from one location to another on the same drive. When a file is dragged onto a different drive, the file is copied instead.
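The drag-and-drop heuristic described above can be sketched as follows (a Python illustration; the function name is invented, and `ntpath.splitdrive` is used so the drive-letter logic works regardless of platform). The same command, a drag, yields a different action depending on the data:

```python
import ntpath  # DOS/Windows-style path handling, usable on any platform

def drag_action(source_path, destination_path):
    # Rule: same drive means move; different drive means copy.
    src_drive = ntpath.splitdrive(source_path)[0].lower()
    dst_drive = ntpath.splitdrive(destination_path)[0].lower()
    return "move" if src_drive == dst_drive else "copy"

print(drag_action("C:\\docs\\a.txt", "C:\\backup\\a.txt"))  # move
print(drag_action("C:\\docs\\a.txt", "D:\\backup\\a.txt"))  # copy
```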
Each new version of a software product adds new features and new commands, and the product therefore becomes increasingly complex. Its interface consequently becomes more complicated, even with the best direct manipulation methods. The result is that users are intimidated by the interface and find it difficult to locate commands. This makes the use of intelligent interfaces, which help guide the user and automate parts of the task, increasingly necessary.
Successful use of a heuristic requires that:
4.2.5 Consequences of Bad User Interface Design
Bad user interface design can cause many problems, not the least of which is the unwillingness of people to use the application. Bad design can also affect productivity if it takes users a long time to accomplish their tasks with a badly designed user interface. Also to be considered is the strain users may face from having to work with a poorly designed interface.
Users would be unwilling to use a badly designed interface because they will not feel comfortable with it. As a result they may tend to avoid the application. Another aspect of unwillingness is the fact that the average person tends to resist change. If the application makes it hard for them to make the transition, then they will not accept it. Dialogs with too much and/or complicated wording can scare away novice and intermediate users. Experienced users who simply want to get the job done can be frustrated by a dialog that is too "friendly" thereby impeding their efforts.
Should the user go ahead and use the application despite the poorly designed interface, they will be working under pressure for as long as they use it. This pressure brought on by the badly designed interface can lead to a loss in productivity since the user is not performing to their maximum ability.
Bad design can scare users away from the application, or even from computers in general if it happens often enough (mostly novice and intermediate users). For example, a cluttered screen can intimidate people, as can menus with many options. Bad designs may also lead users into committing errors, which may be trivial or catastrophic. Examples of catastrophic errors are system crashes and loss of files. Examples of trivial errors are selecting the wrong or inappropriate option, and misinterpreting prompts or requests for input.
From a commercial standpoint a bad user interface design can mean revenue losses because people would refuse to buy a badly designed product. Not only could it cause revenue losses, but it may also give the associated software company a negative reputation which will hurt the company in the future.
4.3 Design Guides for Specific Types of User Interfaces
4.3.1 Command-Driven Interfaces
Command-driven interfaces require the user to type a text command to the system. The command may be a query, the initiation of some sub-system or it may call up a sequence of other commands. Some basic design issues for command line interpreters are discussed below. [14]
Command-driven interfaces do not require much effort in the area of screen design and screen management since the user types everything into the system. A basic black or white background screen with white or black text is normally used.
It is possible to allow a user to combine interface commands to create new command procedures. This is a powerful facility in the hands of experienced users, but it may prove unnecessary for a majority of users with some types of applications.
The designer of a command interface must try to develop meaningful mnemonics and retain brevity to minimize the amount of typing required by the user. As users gain experience, they usually prefer short commands.
A command-driven interface can also be designed to incorporate the redefinition of its commands. The advantage of redefinition is that a command can be brief for experienced users, and expanded for inexperienced users. Users can also redefine commands to suit their personal preferences which leads to better interaction between the users and the system. The disadvantage is that wide-scale redefinition means that users no longer share a common language for communicating with the system.
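Command redefinition is typically implemented with an alias table consulted before the command is dispatched. The sketch below (in Python, with invented command names; a real interpreter would also parse arguments properly and guard against alias cycles) shows the idea, including the error handling for mistyped commands discussed next:

```python
# Built-in commands, each a function of its argument string.
COMMANDS = {
    "delete": lambda arg: f"deleted {arg}",
    "copy": lambda arg: f"copied {arg}",
}
ALIASES = {}  # user-defined redefinitions

def define_alias(alias, command):
    ALIASES[alias] = command

def run(line):
    name, _, arg = line.partition(" ")
    name = ALIASES.get(name, name)  # resolve a user-defined alias first
    if name not in COMMANDS:
        return f"unknown command: {name}"  # message for typing mistakes
    return COMMANDS[name](arg)

define_alias("del", "delete")  # brief form for an experienced user
print(run("del report.txt"))
```

Because the alias table is per-user, the disadvantage noted above follows directly: two users' tables may resolve the same word to different commands, so they no longer share a common language.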
Command-driven interfaces must be equipped with error handling and message generation routines
since users inevitably make mistakes in typing. Message generation routines are also needed to alert
the user to a change in the system's status. A help facility must also be provided to allow new users to
the system to learn the commands of the system. The advantages of command interfaces are:
4.3.2 Menu Driven Interfaces
Menu-driven interfaces present the user with a list of options from which to select. The user may make
this selection via a keyboard or a pointing device such as a mouse. Selecting an option may initiate a
command (such as 'save' or 'print') or may present the user with a sub-menu which has another list of
options. These lower-level menus are said to be nested inside the menu that activates them. There
are general guidelines which should be followed in the design of menu-driven interfaces [3]. These are
as follows:
Full Screen Menus
These menus usually present the options to the user as a sequential list which occupies the entire
screen. This is usually followed by a message prompting the user to select one of the options. In order
to facilitate quick selection, the options may be numbered or a letter may be used to uniquely identify
each option. In either case, the user selects an option by simply entering the corresponding number or
letter.
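The selection logic of a full screen menu is simple enough to sketch directly (a Python illustration; the option names are invented). The options are numbered sequentially and the entry is validated before it is acted on:

```python
def render_menu(options):
    # Present the options as a numbered list followed by a prompt.
    lines = [f"{i}. {opt}" for i, opt in enumerate(options, start=1)]
    lines.append(f"Please select an option (1-{len(options)}):")
    return "\n".join(lines)

def select(options, entry):
    # Validate the user's entry and return the chosen option, or None.
    if entry.isdigit() and 1 <= int(entry) <= len(options):
        return options[int(entry) - 1]
    return None

opts = ["Add record", "Print report", "Exit"]
print(render_menu(opts))
print(select(opts, "2"))  # Print report
```

Rejecting out-of-range entries rather than acting on them is one small way such a menu limits the scope of user error.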
Bar and Pull-down Menus
The main options available to the user are presented as pads on a horizontal bar across the screen.
When the user selects one of the pads on this menu, the second-level options are displayed in a pull-
down menu. This type of menu system is primarily used in conjunction with a pointing device.
However, options may also be selected using 'short-cut' key combinations and arrow keys.
Pop-up Menus
These menus usually appear as a box with one of the options already selected. When the user points
to the box with a mouse and presses the mouse button, the other options are displayed in a list. The
user can then select the required option with the mouse. The menu options remain visible only while
the mouse button is depressed. These menus are usually used in the task area of the application. If all
the options cannot be displayed on the menu, the menu may scroll automatically or on command from
the user.
The advantages of menus are:
4.3.3 Graphical User Interfaces (GUI's)
Direct Manipulation
Direct manipulation interfaces were defined previously in section 2.3. These interfaces use real world
based metaphors in their implementation to build the user's conceptual model of the system.
An example of a metaphor is the electronic spreadsheet. Users of a spreadsheet application are under the impression that they are working with a two-dimensional sheet of paper on the computer, just as they would if they were using a paper spreadsheet.
Most Graphical User Interfaces use the popular desktop metaphor. Due to their graphical nature, these interfaces can represent options available to the user by the use of visual objects such as icons and buttons. These types of interfaces are usually referred to as object oriented because of their use of visual objects.
Types of Information
In general, Graphical User Interface applications present users with two types of information:
objects and actions. [6]
Objects are the focus of the users' attention. For example, in a word processing application, the users' focus is on the document which is the object they are manipulating. By focusing users' attention on objects, GUI's allow users to concentrate on their work rather than on how the application is performing the task.
Actions modify the properties of an object or manipulate it in some way. Properties are unique characteristics of an object that describe that object. 'Save' and 'Print' are examples of actions that manipulate objects.
Elements of a Graphical User Interface
The main elements of a Graphical User Interface are as follows:
Windows may be tiled or overlapping. Tiled windows occupy a fixed area of the screen and no window can use the space occupied by another window. If one window is enlarged, all other windows shrink to maintain the tiled arrangement. Overlapping windows do not occupy a fixed area of the screen. These can be moved around by the user at will, and can also be resized without affecting the surrounding windows. Overlapping windows can be obscured partially or wholly by other windows as shown in figure 4.3.3.1. Overlapping windows are more flexible, especially when large screen areas are unavailable. Tiled windows, however, are more productive since the entire area of each window can be viewed without obstruction. Figure 4.3.3.2 shows an example of tiled windows.
The major design issues surrounding windows are whether to make them tiled or overlapping, and whether or not to make them scrolling. This depends mainly on the application being developed. The common approach is to model the windows to be consistent with the rest of the interface.
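The tiled constraint described above can be sketched numerically (a Python illustration under simplifying assumptions: a single row of windows, a fixed total width of 80 columns, and no minimum window sizes; a real window manager must also handle heights and minimum sizes). When one window is enlarged, the remaining space is redistributed among the others:

```python
def resize_tiled(widths, index, new_width, total=80):
    # Give window `index` its new width and share the remaining
    # space evenly among the other windows, so the total is fixed.
    others = len(widths) - 1
    remaining = total - new_width
    result = [remaining // others] * len(widths)
    result[index] = new_width
    # Put any rounding leftover on the first non-resized window.
    leftover = total - sum(result)
    for i in range(len(result)):
        if i != index:
            result[i] += leftover
            break
    return result

print(resize_tiled([40, 20, 20], 0, 50))  # the other two windows shrink
```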
Icons
An icon is a pictorial representation of an object or action. Icons can represent objects that users want
to work on or actions that users want to perform. A unique icon also represents an application when it
is minimized.
Care must be taken when designing icons. The pictures must be carefully drawn so that they are understandable by users. The purpose of the icon must also be clear to users, hence great emphasis must be placed on the choice of picture used in the icon.
Figure 4.3.3.3 shows some icons representing actions in the Microsoft Word application. Both Figures 4.3.3.1 and 4.3.3.2 show icons representing minimized applications.
Pointers
A pointer is a symbol displayed on the screen that is controlled by a pointing device, such as a mouse.
It is used to point at objects and actions users want to select. The pointer is the tool used to drive the
GUI. Pointers are usually designed in the shape of an arrow to point to different selections. Pointers
can also change shape to provide feedback to the user. For example, when a long operation is being
performed, the pointer changes to an hour-glass or stopwatch to indicate to the user that the
application is still functional but the current operation will take some time to complete. When
designing an interface that incorporates shape-changing pointers, care must be taken that the shape
of the pointer suits the operation or mode the user enters when the pointer changes shape.
Figure 4.3.3.4 shows an example of a mouse pointer.
Dialog Boxes
A dialog box is a fixed-size, moveable window in which users provide information that is required by
an application so that it can perform a user request. Figure 4.3.3.5 illustrates an example of a dialog
box.
Scanned Images
Scanned images refer to the capture, storage and display on a computer of documents created either
manually or on some incompatible technology. Scanning of images is becoming more important to
businesses for archival purposes and ease of duplication. It is also becoming more important to be
able to combine scanned images with active documents for presentation and audit purposes.
Graphical user interfaces give users the ability to perform such a combination and in some cases
even allow users to modify the scanned image.
Processable Graphics
Processable graphics refer to drawings whose components can be recognized and processed as
such by a program. Some examples of applications with processable graphics are Computer Aided
Software Engineering (CASE) tools, Computer Aided Design (CAD) tools and Computer Aided
Engineering (CAE) tools. These applications can recognize different shapes and interconnections of
shapes as valid designs according to the semantics of the application. These applications also allow
users to test their designs where applicable.
Animation and Support for Multimedia
The manipulation of graphics to produce moving images is called animation. Animation was originally
found in game applications. However many businesses now use animation for presentation and
marketing purposes. Animation is also necessary for video conferencing which is used in some
companies. A graphical user interface is the predominant interface for applications that use animation
for the above mentioned purposes.
Multimedia is a means of combining realistic images, full motion video, sound, computer graphics, and text based presentation facilities to provide information in a multisensory format which can be quickly and easily understood. Multimedia is mainly used in applications for education and entertainment. An example of a multimedia application for education is the electronic encyclopedia. When an item is referenced in the encyclopedia, an image of the item is shown along with an animation clip and text.
Porting of documents or files across different applications
Sometimes it is necessary to use a file created in one application in another. In the past this required
explicit code to perform file conversions. This type of operation can be easily accomplished in a
UIMS with a GUI interface. Since a GUI treats all files as objects, it is possible to embed one object
within another. When this is done, a link to the application that created the embedded object is made.
Therefore, whenever the embedded object is to be manipulated, the application that created that
object is executed as a process separate from the application creating the whole document.
Advantages and Disadvantages of Graphical User Interfaces
Overall, the advantages of GUI's are:
In Trinidad and Tobago, the majority of software development is done in-house. The applications developed are intended for use by a relatively small number of people who are subsequently trained to use them. The design of the user interface is done by programmers in an ad-hoc manner and focuses on functionality rather than usability. Generally, structured design methods which have been previously discussed, such as prototyping, are not used. One reason for this is the lack of experience and training of programmers in the field of user interface design. A common practice adopted by local programmers is to model their user interfaces after those found in general purpose software packages. Development tools such as screen and menu builders are usually used in the actual generation of the source code for the user interface. [9]
There are currently no laws in Trinidad and Tobago to guard against software piracy. Presently it is only illegal to copy locally made software for the purpose of re-sale; it is not illegal to copy software for the purpose of non-profit distribution. This lack of protection discourages any serious effort on the part of local software developers. [9]
6.1 Early User Interfaces
The command line interface was the first interactive user interface. It is derived from the teletypewriters (TTY's) that were used to communicate with mainframes. TTY's were notoriously prone to bottlenecks since commands were sent to the computer over relatively slow serial communication links and once there they had to be decoded. Thus, to minimize the bottleneck, commands had to be short, which led to very cryptic commands since every keystroke counted. Output was also limited since it was generated typewriter style, one character at a time. [5]
This changed when the TTY gave way to the video display terminals (VDTs). These allowed a cursor to be located anywhere on the screen. This in turn allowed information to be printed anywhere on screen. With this capability of the VDT, a user with a few special keystrokes could enter information anywhere on the screen (within limits), and could go back and update the information or correct mistakes. The electronic VDT gave tremendous advantages over the paper based TTY.
With the increasing power of computers along with improved video display technology, the bottleneck associated with TTY's was eliminated, which helped make graphics not only possible but practical. Practical graphics allowed for an evolutionary change in user interfaces: the graphical user interface (GUI). These GUIs were popular on microcomputers, whereas mainframes, which had previously used TTY's, retained the command line interface (CLI). GUIs come in many different styles, but they have several features in common, namely: movable, scaleable windows; icons representing various things; menus; and the use of the mouse as a pointing device.
6.2 The Introduction of Graphical User Interfaces
The first popular GUI was the Apple Macintosh user interface which also served as the Mac's operating system (OS). It evolved from an earlier GUI found on the LISA computer (also made by Apple), which did not survive. Some advanced users did not like this new interface because they preferred the CLI, however it did appeal to casual and inexperienced users because they could interact more easily with the computer.
After the debut of the Mac Operating System (OS), a number of other GUIs appeared on the scene. Commodore Business Machines followed Apple's example with the Amiga and came up with a GUI that was both user interface and operating system. Since these systems were designed from the start to be graphical, they were relatively easy to implement. GUIs on other platforms did not have it so easy. These platforms, primarily UNIX machines and IBM Personal Computers (PCs) and compatibles, had a long tradition of command line interfaces (CLIs), and the switch to a graphical system caused some problems.
The fundamental problem involved with implementing a PC GUI was that the operating system, DOS, did not possess many of the necessary building blocks. As a result these resources had to be created and then piled on top of DOS. Because of this, PC GUIs tended to be slow and memory intensive. Despite these problems, there were a number of successful implementations, chief of these being Microsoft's Windows, Quarterdeck's DESQview, and Digital Research Inc.'s GEM.
These GUIs soon ran afoul of legal trouble from Apple over the so-called "look and feel" of their interfaces. As a result, many cosmetic changes which affected how the GUI looked and how users interacted with it had to be made. Microsoft solved its problems by signing a licensing agreement with Apple which allowed it access to some of Apple's technology. A problem with GUIs is that applications written for them may not be consistent, i.e. they do not all operate in a similar manner. Apple dealt with this by providing strict guidelines which had the force of law. Makers of PC GUIs did not have such stringent guidelines, and only Microsoft had anything close to Apple's rules. This lack of strictly applied guidelines not only contributed to the PC's tradition of incompatibilities, but also made it possible for some programmers to substitute a windowing environment for good, user-friendly program design.
UNIX machines, like DOS machines, were character oriented and were thus lacking in the necessary GUI services. In addition, UNIX machines had a problem which brought back the concerns of the TTY era: UNIX systems allowed displays to be located at any distance from the CPU, which raised problems of communications bottlenecks. Despite this additional problem, GUIs became available for UNIX machines. The most widely used and accepted solution was the X Window system developed at MIT. X Window is not a GUI but rather provides a means by which GUIs can be developed. It provides for the representation of graphical displays as well as for the sending of information about displays, keypresses, mouse commands, etc. between X Window systems.
The major contenders in X Window based GUIs are Open Look by UNIX International and Motif by the Open Software Foundation (OSF). Open Look was originally designed by Sun, working closely with AT&T, to be the GUI for UNIX System V Release 4 (SVR4). It has some problems, however, in that it is not fully X Window compatible since it relies heavily on the Sun architecture. While Open Look has some similarities to other GUIs, there are a number of differences, such as being able to hold a menu on screen with a pushpin and the way in which the mouse (a three-button model) is used.
6.3 Reconceptualising the Graphical User Interface
Users today are clamoring for more functionality from their GUIs. They are calling for GUIs that offer such features as: Plug and Play (like being able to plug in a peripheral device and get it to work, without having to endure the suffering usually associated with such an undertaking), increased responsiveness, stability, multitasking, and multimedia.
The process of satisfying these needs must not merely be an adding on of features, but will require a reconceptualisation of the GUI. Admittedly, despite their flashy dressing, most GUIs are basically copies of the original Mac user interface. Because of machine constraints, the early Mac designers were forced to make a number of compromises. In copying the Mac interface, the other companies also copied the restrictions that were placed on the early Mac (a 128K machine with a single small disk drive). For example, it was impossible to have both the program disk and data disk in the computer at the same time, and the user had to make explicit saves rather than let the computer do it automatically. Now, despite computers having massive hard drives and lots of memory, windowing systems still require the user to make explicit saves. [1]
Yet another reason for reconceptualising the GUI is that they have not significantly advanced the state of computing since they were first introduced. The concepts upon which GUIs are based originated in the seventies with the Xerox Star, the first machine with a graphically oriented interface, which was the inspiration for Apple's Lisa and later on the Macintosh. The original designers came up with the concept of icons and the desktop metaphor, but they saw this as merely being a starting point for advances in user interfaces. [1]
With the advances that are being made today, it would seem that their original goal is finally being pursued. The harbinger of change seems to be the development of multimedia.
Multimedia is only practical in an environment provided by a GUI. Thus the desire for multimedia by the consumer has led to advances in hardware and software so that GUIs can provide multimedia of high quality. In addition to spurring these developments in hardware and software, there is also the possibility of creating a multimedia user interface. For example, a person may walk up to an ATM and, using visual recognition technologies, the ATM recognizes the customer and asks what transaction they would like to perform. The customer then says: "I would like to withdraw $200.00 from my account". The ATM, using speech recognition technology understands the customer's request and carries it out. This example is only the beginning of what could be done with this type of technology.
Another step in the development of the user interface is the object oriented interface. This is currently exemplified by an application called Magic Cap by General Magic. Presently it is only available on the Sony Magic Link Personal Digital Assistant (PDA), but it will soon be available for desktop systems. [4]
Magic Cap is an object oriented, multitasking environment. It takes the desktop metaphor beyond its traditional boundaries. At the highest conceptual level are streets with buildings representing applications. To invoke an application you must go to the appropriate building. At the next lower level is the corridor with rooms branching off. In a room one might see a desktop with objects, such as a calculator or calendar, resting on top. These objects are themselves programs and can be launched by clicking on them. Other rooms have a different layout and allow different actions. Any new software added may appear as a new object in a room, a new room in the hallway, or even a new building on the streets.
The object oriented nature of Magic Cap allows for a high degree of customization through the use of moveable objects called "stamps", e.g. one could have stamps of a large pair of lips which could be stuck anywhere and when clicked on can play back a pre-recorded voice message. Another example of the use of stamps is in E-Mail. Here stamps could be created with the names and locations of people and stored in a rolodex. When the E-Mail message is ready to be sent, the stamp bearing the appropriate name(s) and location(s) could be copied on to it thereby mailing it to the people concerned. The stamp metaphor provides an easy way to make changes to Magic Cap's virtual world. All it takes to change things to suit oneself is to apply the proper stamps in the right places. [4]
6.4 Future Enhancements to the User Interface
6.4.1 Standardization Issues
The software market is currently dominated by a large number of competing operating systems and applications from different companies. Generally, these products have dissimilar user interfaces. For example, the same function key may perform different functions in applications from different companies. This can even occur in different applications from the same company. The frustration which many users experience from using inconsistent and incompatible applications has led to increasing activity in the area of user interface standards. A design standard is a series of generally stated recommendations for user interfaces, with examples [13]. Standardization occurs when standards are adopted and put into effect. Employer interest in increasing worker productivity has also given impetus to the argument for user interface standardization. The major organizations which are involved in producing and revising standards for user interfaces are: the International Standards Organization (ISO) and the American National Standards Institute (ANSI). Some of the benefits which can be expected from standardization are:
6.4.2 Agent Software
Another advance that will have significant impact is the development of agent technology. Agents are intelligent programs that are able to perform many tasks that conventional programs simply could not handle, or that were very complicated and time consuming [16]. Although the technology has great potential, users will initially see it simply as making life easier. Agents will allow users to better communicate with their computer system as well as with other external entities. Agents are generally proactive rather than reactive, meaning that they can be event driven rather than sequential like other programs.
There are three main types of agents: advisory, assistant, and internet. Advisory agents do not carry out tasks but rather aid users in whatever they may be doing. They learn how the user works and are capable of providing advice either on request or when they think the user may need it. In time it may even be possible to automate a task, since the agent will be able to carry it out the way the user previously did.
Assistant agents are more ambitious in their tasks than advisory agents. They are often required to work with minimal knowledge and user feedback. They are capable of actively going out and retrieving information or performing other similar tasks. Unlike advisory agents, these agents have to be more sophisticated because of the increased responsibility they are required to shoulder.
Internet agents are similar to assistant agents. They differ in that they are primarily used to seek out and retrieve information on the Internet. As a result they are even more intrusive than assistant agents and need to be a bit more sophisticated.
Agents, for all their usefulness in aiding human-computer interaction, have some problems, chief of which are security and stifled creativity. Most agents are generally intrusive, which raises the question: 'Is that agent really an agent, and if it is, do I want it roaming around in my system?' This is important since viruses can act in a manner similar to agents, and no one wants a virus entering their system if they can help it. Also, some people may not want an agent entering their system against their wishes. Agents may also stifle creativity because they could take on more and more of the user's responsibilities, leaving the user with less and less to do. Should all the bugs be worked out of agents, people may be able to say that the computer is doing what they want rather than merely what they say, which is often the case now.
6.4.3 Componentware
Another step forward for user interface design is the concept of componentware. Instead of having to buy memory- and resource-demanding applications with features they will probably never use, users could purchase smaller modules or components and build applications which satisfy their needs, without being overloaded with things they do not need.
Several approaches to componentware have been made by various companies, chief of these being Object Linking and Embedding (OLE) by Microsoft, and OpenDoc and Taligent by a consortium of companies. Despite their common objective, OLE and OpenDoc have a number of differences. OLE does not handle multitasking and is less functional (it is restricted to rectangular areas of the screen, and is limited to one level of nesting). OpenDoc, on the other hand, supports multitasking and event handling, and its functionality is greater than OLE's. OLE, however, is available right now, whereas OpenDoc and its companion Taligent are still in the design stages. [4]
With all the advances being made nowadays, the future of user interfaces seems bright. Regardless of which group's approaches win out, the face of user interfaces (as well as the whole computer industry) will never be the same again.
The design of a user interface must focus on the needs of the user rather than the needs of the application. This can only be achieved by incorporating feedback from the user in the design process. Apart from simply carrying out its basic functions, a user interface must interact with the user in a manner which is as user-friendly as possible. Users should also feel that they are in control of the system.
No particular type of user interface is optimal for performing all possible tasks. Each type of interface is best suited for a particular sub-set of tasks. For example command-driven interfaces are more appropriate for system maintenance tasks than other types of interfaces. The designer must determine which type of interface is most appropriate for the type of application.
The user interface consists of various elements. Design of the user interface must take into consideration proper design of each of these elements. The user interface designer must find a balance between the sometimes conflicting attributes of efficiency, usability and learnability, which must all be present in the interface.
Written by:
Marlon Daniel, Maurice Phillip, Marlon Thomas
© 1995. Trademarks and products are registered to their respective owners.
Last updated on: Friday, June 4, 1999