DOOS starts with searching for objects and then describing the responsibilities of these objects. In this view the world can be seen and modeled as a system of collaborating objects. The software is seen as a living thing (an anthropomorphic view). When the objects are chosen and defined carefully they can be used again, so reuse of objects is one of the benefits of the Object Oriented approach.
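As an illustration of this view, two collaborating objects can be sketched as follows; the Customer and Account classes here are hypothetical examples chosen for illustration, not part of DOOS itself:

```java
// A minimal sketch of two collaborating objects: a Customer fulfills its
// "pay an invoice" responsibility by collaborating with its Account.
class Account {
    private double balance;

    Account(double opening) {
        balance = opening;
    }

    void withdraw(double amount) {
        balance -= amount;
    }

    double balance() {
        return balance;
    }
}

class Customer {
    private final Account account;   // the collaborator

    Customer(Account account) {
        this.account = account;
    }

    // Responsibility: pay an invoice (the bookkeeping is delegated to Account).
    void payInvoice(double amount) {
        account.withdraw(amount);
    }
}
```
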
Compared with the traditional software lifecycle, the Object Oriented lifecycle dedicates more time to design (analysis and design) and less time to implementation and testing (when an Object Oriented programming language is used) according to Wirfs-Brock.
The method is not a sequential one like the waterfall model: the development of the design is evolutionary, and it is an incremental and iterative method. This is called 'Round-Trip Gestalt Design': incremental and iterative development through refinement of logical and physical views of the system as a whole.
DOOS is divided into two parts:
Objects and classes are treated as the same throughout the DOOS method.
DOOS provides only one kind of relationship between Classes that is treated as a real relationship:
The other mentioned kinds of relationships are only examined to find additional responsibilities and collaborations, and are not treated as real relationships in the DOOS method.
1. Composite classes and objects that compose them
2. Container classes and the elements they contain
For the first type, collaborations between Whole and Part classes will be frequent; the second type may or may not require collaborations between Whole and Parts.
The techniques that are used in the DOOS method are:
There are a few remarks about the DOOS method: the results of applying it are never final, so reiteration is an important part of the method; the guidelines that the method provides are not rigid; and it is suggested that validation can be achieved at certain points of the design by 'Walk-throughs'. A 'Walk-through' means that several valid and invalid execution paths from the real world are walked through in the design by the analyst/designer to see whether the design meets the requirements.
Identify classes (Activity 1)
Identifying Classes starts with reading and understanding the requirements specification (activity 1.1). A first list of candidate classes is made by extracting the nouns, and possibly hidden nouns, from the specification (activity 1.2). The following guidelines have to be applied to the candidate classes that are found (activity 1.3):
After identifying Classes, candidate Abstract Superclasses can be identified and named (activity 1.4) by grouping classes with common attributes. To name these superclasses the following guidelines can be applied:
If this still yields no name, the group of Classes has to be discarded. Next, try to look for potentially missing Classes (activity 1.5) by expanding categories of already identified Classes. It is stated that this is not an easy job, and that only experience makes it easier. After these activities are performed, every class has to be recorded on a CRC card, including a description of the Class (activity 1.6).
It is important to keep in mind that this first step is very preliminary and will have to be reiterated several times; the analyst shouldn't decide to throw a candidate class away too soon. It is better to keep it and perhaps decide to discard it later.
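The CRC (Class, Responsibility, Collaboration) cards on which the classes are recorded can be sketched as a simple data structure; this is only an illustrative sketch, and the field names are assumed for the example:

```java
import java.util.ArrayList;
import java.util.List;

// An illustrative sketch of a CRC card: a class name and description
// (activity 1.6), plus room for the responsibilities and collaborators
// that the later activities add to the card.
class CrcCard {
    final String className;
    final String description;
    final List<String> responsibilities = new ArrayList<>();
    final List<String> collaborators = new ArrayList<>();

    CrcCard(String className, String description) {
        this.className = className;
        this.description = description;
    }
}
```
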
Identify Responsibilities (Activity 2)
The process of finding responsibilities starts with looking at the requirements specification (activity 2.1).
Candidate Responsibilities are:
Next, the candidate Responsibilities have to be assigned to the Classes they belong to (activity 2.2). This can be done by applying the following guidelines:
Evenly distribute system intelligence
If it is hard to decide to which Class a Responsibility has to be added, the different possibilities have to be taken into account and examined with a Walk-through; then the most natural or most efficient option is chosen. It is also possible to let a problem domain expert do a Walk-through. For finding additional responsibilities (activity 2.3) three kinds of relationships are especially important to examine:
After this is done the identified Responsibilities can be added to the CRC cards (activity 2.4).
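The idea of assigning a Responsibility to the Class that holds the related information can be illustrated with a small example; the BankAccount class and its interest rule are assumed purely for illustration:

```java
// Hypothetical illustration of assigning a Responsibility to the Class
// that owns the related information: the BankAccount computes its own
// interest, instead of a separate Report class pulling the balance out
// and doing the computation elsewhere.
class BankAccount {
    private final double balance;
    private final double rate;   // assumed yearly interest rate

    BankAccount(double balance, double rate) {
        this.balance = balance;
        this.rate = rate;
    }

    // The Responsibility "know the yearly interest" lives here because
    // BankAccount already holds the data the computation needs.
    double yearlyInterest() {
        return balance * rate;
    }
}
```
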
Identify Collaborations (Activity 3)
Classes can fulfill Responsibilities either by performing the necessary computations themselves or by collaborating with other classes. To find Collaborations (activity 3.1), ask the following questions for each Class:
For finding additional Collaborations (activity 3.2) the following relationships have to be examined:
The classes that do not collaborate with other classes and are not collaborated with have to be discarded (activity 3.3). Next, the identified Collaborations can be added to the CRC cards (activity 3.4). To check whether the chosen Collaborations are right, a 'Walk-through' is suggested (activity 3.5).
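A Collaboration of this kind can be sketched as follows; the Order and LineItem classes are hypothetical, and only illustrate a class fulfilling a Responsibility by asking its collaborators for help:

```java
import java.util.List;

// Hypothetical sketch of a Collaboration: an Order cannot know its total
// price by itself, so it collaborates with its LineItems.
class LineItem {
    private final double unitPrice;
    private final int quantity;

    LineItem(double unitPrice, int quantity) {
        this.unitPrice = unitPrice;
        this.quantity = quantity;
    }

    // LineItem's own Responsibility: know its price.
    double price() {
        return unitPrice * quantity;
    }
}

class Order {
    private final List<LineItem> items;

    Order(List<LineItem> items) {
        this.items = items;
    }

    // Order fulfills "know the total" by asking each collaborator.
    double total() {
        double sum = 0.0;
        for (LineItem item : items) {
            sum += item.price();
        }
        return sum;
    }
}
```
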
With this step the initial exploratory phase is ended, and the result is a preliminary design which has to be turned into a solid and reliable design in the following detailed analysis phase.
Identify Hierarchies (Activity 4)
The first step in identifying Hierarchies is drawing Hierarchy Graphs (activity 4.1) to illustrate the inheritance relationships among Classes. At this stage it has to be identified whether classes are abstract or concrete (activity 4.2). Then Venn Diagrams can be drawn (activity 4.3) to show the responsibilities shared between Superclasses and the Subclasses that inherit from them. These Venn Diagrams show whether the Hierarchy is well chosen or not. To construct a good Class Hierarchy (activity 4.4) the following guideline has to be applied: model a 'kind-of' hierarchy, in which Subclasses are a 'kind-of' their Superclass and inherit all Responsibilities from their Superclasses.
After the Class Hierarchy is built Contracts can be identified (activity 4.5) using the following guidelines:
A way to apply these guidelines is to start by defining Contracts for Classes at the top of the Class Hierarchy and then to work down the Hierarchy to the Subclasses. The Contracts have to be noted on the CRC cards, where every Contract gets a unique number, and Contract cards have to be written (activity 4.6).
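The 'kind-of' guideline can be illustrated with a small hypothetical hierarchy, in which every Subclass inherits the Responsibility defined at the top:

```java
// Hypothetical 'kind-of' hierarchy: every Square and Circle is a kind-of
// Shape, and both inherit the Responsibility "know your area".
abstract class Shape {
    // The Responsibility is defined once, at the top of the Hierarchy.
    abstract double area();
}

class Square extends Shape {
    private final double side;

    Square(double side) {
        this.side = side;
    }

    @Override
    double area() {
        return side * side;
    }
}

class Circle extends Shape {
    private final double radius;

    Circle(double radius) {
        this.radius = radius;
    }

    @Override
    double area() {
        return Math.PI * radius * radius;
    }
}
```
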
Identify Subsystems (Activity 5)
Identifying Subsystems starts with drawing a complete Collaborations Graph (activity 5.1). Subsystems can be identified by looking for frequent and complex collaborations between strongly coupled classes (activity 5.2). The following guidelines have to be applied for identifying Subsystems:
After identifying the Subsystems, the patterns of Collaborations have to be simplified (activity 5.3) by applying the following guidelines:
The Subsystems are denoted on Subsystem cards. The CRC cards have to be modified, because Collaborations between Classes have become Collaborations between Classes and Subsystems.
The changes in Classes, Collaborations and Contracts have to be recorded both in the Hierarchy Graphs and on the CRC cards. At this stage it is again suggested to perform a 'Walk-through' (activity 5.4) to check the design.
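Simplifying the pattern of Collaborations can be sketched as follows; the billing classes are hypothetical, and the point is only that outside classes collaborate with one Subsystem class instead of with every class inside it:

```java
// Hypothetical sketch of a simplified Collaboration pattern: classes
// outside the billing Subsystem talk only to BillingSubsystem, which
// delegates to the classes hidden inside it.
class TaxCalculator {
    // Assumed flat 25% tax rate, chosen only for the example.
    double taxOn(double amount) {
        return amount * 0.25;
    }
}

class InvoicePrinter {
    String print(double total) {
        return "Invoice total: " + total;
    }
}

class BillingSubsystem {
    private final TaxCalculator tax = new TaxCalculator();
    private final InvoicePrinter printer = new InvoicePrinter();

    // The single Collaboration offered to classes outside the Subsystem.
    String bill(double amount) {
        return printer.print(amount + tax.taxOn(amount));
    }
}
```
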
Identify Protocols (Activity 6)
The last step of the DOOS method starts with designing the protocols for each class (activity 6.1). Constructing Protocols is done by refining the Responsibilities into sets of Signatures that maximize the usefulness of the classes. The following guidelines have to be used for constructing the protocols:
When this is done, a design specification has to be written for each Class (activity 6.2), each Subsystem (activity 6.3) and each Contract (activity 6.4).
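Refining a Responsibility into a set of Signatures can be sketched as overloaded methods, where the most general signature carries every parameter and the others supply defaults; the Report class and its parameters are assumed only for illustration:

```java
// Hypothetical sketch of a Protocol: one Responsibility ("print yourself")
// refined into several Signatures, with the most general one taking every
// parameter and the convenience ones supplying defaults.
class Report {
    // Most general Signature: print a page range on a named printer.
    String print(int fromPage, int toPage, String printerName) {
        return "pages " + fromPage + "-" + toPage + " on " + printerName;
    }

    // Defaulted Signatures refine the same Responsibility for common cases.
    String print(int fromPage, int toPage) {
        return print(fromPage, toPage, "default");
    }

    String print() {
        return print(1, 1);
    }
}
```
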
Possible Application Areas Using OOP
Ever wondered if object orientation could live up to its promise of enhanced reusability, simplified maintenance and increased speed of development? When defining an application, you will want to spend maximum time on functionality and minimum time on the technical aspects of the application. DataFlex 3.0, the first 4GL to provide object oriented features, has the ability to create virtually any type of application, simply because of the reusable modules that are so inherent in OOP.
OOP can be implemented in database applications with an RDBMS, air traffic control applications with C++ or Visual Basic, web page creation with Java or C++, expert systems utilizing Smalltalk, and a host of other applications. Why is OOP so popular? Simply because modules created in one application can be reused and modified to fit into another project, saving time and effort while reducing errors, since less source code is written anew.
Comparisons with Traditional Methodologies
An excerpt from:
Briand, L., C. Bunse, et al. 'An experimental comparison of the maintainability of object-oriented and structured design documents', in Proc. Intl. Conf. on Software Maintenance. IEEE Computer Society Press, 1997.
Abstract: Emp, Maint
Notes: Reports on a student experiment to assess the impact of OO versus structured design (the traditional methodologies in the study guide) and "good" versus "bad" design. (It is not, however, clear that this is a 2x2 factorial design, since design quality might have different meanings depending upon the design paradigm.) The authors report that "good" OO designs were easier to understand and modify than "bad" designs. Also, "bad" structured design was easier to understand than "bad" OO design. No significant differences were detected between "good" OO and "good" structured design, or between "good" and "bad" structured design. The authors conclude that OO designs may be more vulnerable to poor quality than traditional methods.
Some Advantages and Disadvantages
How did OOP become the principal paradigm of developers in the 90s? Over time, OOP has dramatically replaced classical development methodologies. Why is this? Though the answer might seem quite subjective, it could very well lie in the fact that OOP is reusable. Reuse of program modules leads to faster development and higher quality software. As mentioned in the textbook, OO software is easier to maintain because its structure is inherently decoupled. Side effects are thus further minimized, and this means less frustration on the software engineer's part. OO software is also easier to adapt and easier to scale.
Though all the attributes mentioned earlier are very much sought after by application programmers, OOP has its drawbacks. For one, it potentially leads programmers away from doing a full analysis, because they can simply take a module from the vast libraries that exist for OOP. Sometimes a module might seem suitable for the current project because it performed marvelously on the last project, yet turn out otherwise on the current one. But because of the earlier success, programmers tend to overlook this and take it for granted that the module will work. This is a very dangerous affair and could lead to disastrous ventures. Also, OOP can result in less documentation, as analysis tends to be forsaken to some degree (this depends on the individual), and documentation usually contains the analysis findings and the more intricate parts of the software application.
Comparison with Other Life Cycles
According to Fichman and Kemerer, structured analysis takes a distinct input-process-output view of requirements. Data are considered separately from the processes that transform the data. Although it is important, system behavior will always play second fiddle in structured analysis. The structured analysis approach makes heavy use of functional decomposition, that is, partitioning of the data flow diagram. It is actually very hard to make comparisons, as there exist many variations of OOP and of the classical methodologies.
Object-oriented programming offers a new and powerful model for writing computer software. Objects are "black boxes" which send and receive messages. This approach speeds the development of new programs, and, if properly used, improves the maintenance, reusability, and modifiability of software.
O-o programming requires a major shift in thinking by programmers, however. The C++ language offers an easier transition via C, but it still requires an OO design approach in order to make proper use of this technology. Smalltalk offers a pure OO environment, with more rapid development time and greater flexibility and power. Java promises much for Web-enabling OO programs.
Part B:
I certainly back up that statement fully. As stated in SSADM, the last phase of the methodology is maintenance and review. Maintenance means that even though the software has been delivered to the customer, perhaps for quite some time, if there are any bugs or dissatisfactions on the part of the customer, the maintenance crew will have to come to the rescue. This is very important, although software companies often overlook it. You may ask why I am so adamant about this. The answer is pretty straightforward if we think logically.
Without reviews and maintenance, the software company's reputation could easily be tarnished. When a certain piece of software fails, customers expect the software company to debug it, because they are not that familiar with the development of the software. Also, the amount of money that the client has paid would encompass every stage of the development of the software, inclusive of review and maintenance. If maintenance is not delivered, no other company will dare to employ this software company again to develop software.
Secondly, even if maintenance is not stated in the written contract, it is unethical not to continue the development through to the maintenance stage. The logic behind this argument is that the team that developed the software knows best the ways and techniques that were employed in building it. If a team from another company is employed to debug the program, they might have a hard time just getting acquainted with the problematic software. This might mean that more time and money is spent on maintenance than if the program were maintained by its native creators.
Question 2
Rapid prototyping methodologies
1.0 General view on the design process
Various models of the engineering design process have been discussed in many publications. Generally, this process is understood as a kind of cycle of activities, executed in sequence or in parallel, usually starting with goal generation and ending with implementation of the working solution.
In the current project we have used the following definition of the subsequent phases of the design cycle:
Goal generation - specification of requirements,
Model building and validation,
Synthesis of appropriate technical solutions,
Computer based evaluation of the resulting system,
Prototype (experimental) implementation of tested solutions,
Final practical implementation.
It is important to note that the design process has a highly iterative nature, so the subsequent phases are usually repeated as long as the actual requirements (sub-goals) are not met. In case of failures it may prove necessary to return to the preceding phases and to revise the partial solutions already obtained.
In prototyping, requirements are identified together with the users, after which a prototype is quickly assembled. The users then try the working model and voice any corrections and amendments needed. The necessary alterations are then made. The users keep on testing the refined model until it is acceptable.
Prototyping can thus be seen as a much-improved form of system investigation and analysis, as well as an aid to design. Users get to try out a working model rather than an abstract description of what the system will be. As for the analysts, they can better encourage the users' participation as well as improve their communication with the users.
A successful system development effort must be accompanied by a set of user requirements. However, this is not always the case. Sometimes information gathering can be a laborious and tedious job, not to mention the number of errors that can arise in this phase. Also, users are often not able to visualize the complete software.
Prototyping can help the user visualize how the proposed system will work. A prototype is actually a working, stripped-down version of the system that includes sufficient functionality. It has the ability to provide a "live" setting. Prototyping can also involve the addition of new capabilities at a later stage.
The Prototyping Process
Prototyping can be seen as an iterative process, as shown in the diagram overleaf. It has its beginnings in an initial prototype, based solely on the requirements/deterministic stage. The prototype is built and updated fairly rapidly, usually in a matter of weeks or months depending on the size of the prototype. The users work with the prototype, and refinements are made gradually as the users request them. This process continues and is repeated a few times until the user is satisfied. Sometimes the prototype can actually be placed into production.
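The iterative loop described above can be sketched in code; the build and refine steps are reduced to a counter, and the user verdicts are assumed inputs for the example:

```java
import java.util.Iterator;
import java.util.List;

// A hypothetical sketch of the prototyping cycle: build an initial
// prototype, then refine it once per rejected demonstration until the
// user finally accepts it.
class PrototypingLoop {
    static int iterationsUntilAccepted(List<Boolean> userVerdicts) {
        int iterations = 1;   // the initial prototype counts as one build
        Iterator<Boolean> verdicts = userVerdicts.iterator();
        while (verdicts.hasNext() && !verdicts.next()) {
            iterations++;     // a rejection triggers one more refinement
        }
        return iterations;
    }
}
```
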
CASE STUDY Intel Corporation.
One Monday in April 1970, Intel Corp., the largest chipmaker for personal computers, decided to further computerize its manufacturing operations. The option was a natural one, as the Chief Executive Officer had noticed the rapid rise of silicon as the new tool over steel. He therefore instructed the MIS Manager to look at outsourcing as an alternative for the required software to integrate into the manufacturing process. The MIS Manager then put out a tender for a few renowned vendors to bid on. At long last, after many selection criteria had been taken into consideration, the MIS Manager decided to award the project to CSA for the hardware setup and to Arthur Anderson to develop the software.
During the talks to formulate an implementation plan and to synchronize both companies' standards of implementation, many matters were brought up. These included which software would be used by CSA, so that Arthur Anderson would be able to purchase the right hardware. Other matters that arose were the projected cost and the methods of development of the system. Of all the methodologies considered, only prototyping sounded familiar to all parties. Prototyping would also be able to show results much earlier, and thus exhibit the progress of the development much earlier to Intel Corp. Intel Corporation's MIS Manager, Mr. Howard Matthews, also approved of this.
There were a few prototype models open for consideration. After much work, three were short-listed for final consideration. They were:
Mapie Works
Congenial Software
Skunk Prototype Model
These three are all actually very accomplished prototypes, and each has its own strengths and merits. Unfortunately, all good products have some drawbacks. Mapie Works is excellent for developing live models, and designing models in it is simple, as it has the most user-friendly GUI (graphical user interface) of all three reviewed prototypes. Unfortunately, the end product is very hungry for resources and will not run on anything slower than a Pentium II 300MHz with 128 MB of RAM. It also requires a specialized 3D graphics card with an OpenGL chipset. This really set the model back.
Congenial Software, on the other hand, is not as hardware-demanding as the Mapie Works model. Unfortunately, its GUI is really a big letdown and its processing speed is really down the drain. It is not a big company's idea of enhancement.
The Skunk Prototype Model is by far the most appealing model. It does not require as many resources to operate as Mapie, and it is fairly easy for Intel's employees to learn. Development cost was also kept to a bare minimum, because the technology implemented in the system is fairly popular and so readily available in the market. Skunk was finally the winner.
Although the development phase generally went by without a hitch, some employees, especially those who had worked long with Intel, refused to co-operate with the third-party companies in the fact-finding phase, for fear of their jobs being replaced by the software. This contributed to some delays. Only after being reprimanded by their superiors did they agree to co-operate. There were also some power failures during the final day of testing, which led to a few critical startup files being corrupted.
As prototyping is about fulfilling the users' needs as fully as possible, further refining was done to the software to comply with Intel's specific requirements. The system was finally implemented on 31st December 1972, the start of a new financial year. The project took more than 1½ years to complete, because Intel is a very fussy client: Intel only wanted the best, as they were preparing to set new standards for PCs around the world. As Intel paid dearly for the software and all the development costs, they demanded the best maintenance and support. CSA and Arthur Anderson have been under an Agreement of Maintenance (AoM) for 10 years. Besides that, further refinements were made as the business environment changed over time; this was also stated in the AoM.
*Note: As there were no apparent case studies available, the scenario stated above is purely the imagination of the author. It was created merely to illustrate how a system might be developed via prototyping.