Question 1:

Part A:

1 Background/Introduction To Object Oriented Methodology

  • According to DOOS (Designing Object-Oriented Software) the main objective of the Object Oriented approach is to manage the complexity of the real world by using abstraction. Knowledge of the real world is abstracted and encapsulated in objects. The difference between the traditional and the OO approach is that OO talks about the "what" of the system, whereas the traditional approach talks about the "how" of the system.

    DOOS starts with searching for objects, and then describing the responsibilities of these objects. In this view the world can be seen and modeled as a system of collaborating objects, and the software is seen as a living thing (an anthropomorphic view). When the objects are chosen and defined carefully they can be used again, so reuse of objects is one of the possibilities of the Object Oriented approach.

    Compared with the traditional software lifecycle, the Object Oriented lifecycle dedicates more time to design (analysis and design) and less time to implementation and testing (when an Object Oriented programming language is used) according to Wirfs-Brock.

    The method is not a sequential one like the waterfall model: the development of the design is evolutionary, and it is an incremental and iterative method. This is called 'Round-Trip Gestalt Design': incremental and iterative development through refinement of logical and physical views of the system as a whole.

    2 General Approach

    DOOS is divided into two parts:

    1. An initial exploratory phase (Analysis in the software lifecycle): The main topic of the initial exploratory phase is determining the objects, responsibilities and collaborations of objects that play a role in the real world.

    2. A detailed analysis phase (First part of Design in the software lifecycle): In the detailed analysis phase the results from the first part are refined and streamlined. At this point a full specification can be made.

     

    3 Concepts and Constructs

    Concepts

    Objects and classes are treated as the same throughout the DOOS method.

    Relationships between Objects

    Relationships between Classes

    DOOS provides only one kind of relationship between Classes that is treated as a real relationship:

  • The "is-kind-of" relationship describes a Super/Subclass relationship between Classes. DOOS provides both Single Inheritance (a subclass inherits from only one superclass) and Multiple Inheritance (a subclass inherits from more than one superclass). This relationship is illustrated in the sketch at the end of this subsection.
  • The other mentioned kinds of relationships are only examined to find additional responsibilities and collaborations, and are not treated as real relationships in the DOOS method.

  • The "is-analogous-to" relationship is examined to find missing superclasses; when two classes are analogous they might share common responsibilities.
  • The "is-part-of" relationship is examined to determine where responsibilities should be placed, either at the Part or at the Whole Class. This relationship is also examined to find collaborations; a 'whole' class may require collaboration(s) with its 'part' classes. This causes a distinction between:

    1. Composite classes and objects that compose them

    2. Container classes and the elements they contain

    For the first type, collaborations between Whole and Part classes will be frequent; the second type may or may not require collaborations between Whole and Parts.

  • The "has-knowledge-of" relationship is examined to determine additional collaborations between classes. Classes that depend upon other classes require either a "has-knowledge-of" relationship with another class, or a third class is needed to form the connection. For example: in a drawing system with elements, some elements can be behind or in front of others. It is clear that these elements depend upon the other elements. The choice can be made to give the elements knowledge of other elements, or to create an ElementInfo class.
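
    The two relationships above can be made concrete in code. Below is a minimal Java sketch reusing the drawing-system example; the class and method names (Element, Circle, ElementInfo, and so on) are assumptions made here for illustration, not part of the DOOS method itself.

    // "is-kind-of": Circle and Rectangle are kinds of Element,
    // so they inherit from the Element superclass.
    abstract class Element {
        abstract void draw();
    }

    class Circle extends Element {
        void draw() { /* render a circle */ }
    }

    class Rectangle extends Element {
        void draw() { /* render a rectangle */ }
    }

    // "has-knowledge-of": a third class connects elements that depend on
    // one another, here by recording which element lies behind which.
    class ElementInfo {
        private final java.util.List<Element> backToFront = new java.util.ArrayList<>();

        void placeInFront(Element e) { backToFront.add(e); }

        // Elements are drawn back to front, so later elements appear on top.
        void drawAll() {
            for (Element e : backToFront) {
                e.draw();
            }
        }
    }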
    Operations and communication

    DOOS does not distinguish between different types of operations. In the DOOS method, communication between Objects is done via sending messages: one object sends a message to another object; the receiver receives the message, performs the requested operation and (possibly) returns some information.
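
    Since a message send maps onto a method call in OO languages, a minimal Java sketch of this sender/receiver exchange might look as follows (the Account and Teller names are assumptions made here for illustration):

    class Account {
        private long balanceInCents;

        // Receiving the "deposit" message: perform the requested operation...
        void deposit(long cents) { balanceInCents += cents; }

        // ...or perform it and return some information to the sender.
        long balance() { return balanceInCents; }
    }

    class Teller {
        long handle(Account receiver) {
            receiver.deposit(500);      // send the message "deposit" with an argument
            return receiver.balance();  // send "balance"; the receiver returns a value
        }
    }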
    4 Techniques

    The techniques that are used in the DOOS method are:

    1. Class-Responsibility-Collaboration cards (CRC cards), on which a Class with its Super/Subclasses, Responsibilities and Collaborations is denoted (an example card follows this list).

    2. Subsystem cards, on which Subsystems, Contracts and Delegations are denoted.

    3. Class Hierarchy Graphs, to show the Classes and their Inheritance hierarchies.

    4. Venn Diagrams, to examine the chosen Inheritance hierarchies for Classes.

    5. Collaboration Graphs, to show Classes, Subsystems and the Client-Server Collaborations between them. The clients and servers for Contracts are denoted.
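
    For illustration, a CRC card for a hypothetical Drawing class from the drawing-system example might look roughly like this (the entries are assumptions made here, not taken from the DOOS text):

    Class: Drawing                      Superclasses: (none)
    Subclasses: (none)
    ------------------------------------------------------------
    Responsibilities               |  Collaborations
    ------------------------------------------------------------
    Maintain its elements          |  Element
    Display itself                 |  Element, ElementInfo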

     

    5 Analysis And Design Processes

    There are a few remarks to make about the DOOS method: the results of applying it are never final, so reiteration is an important part of the method; the guidelines the method provides are not rigid; and it is suggested that validation can be achieved at certain points of the design by 'Walk-throughs'. A 'Walk-through' means that several valid and invalid execution paths from the real world are walked through the design by the analyst/designer, to see whether the design meets the requirements.

    The Initial Exploratory Phase

    Identify classes (Activity 1)

    Identifying Classes starts with reading and understanding the requirements specification (activity 1.1). A first list of candidate classes is made by extracting the nouns, and possibly hidden nouns, from the specification (activity 1.2). The following guidelines have to be applied to the candidate classes that are found (activity 1.3):

    After identifying Classes, candidate Abstract Superclasses can be identified and named (activity 1.4) by grouping classes with common attributes. To name these superclasses the following guidelines can be applied:

    If this still yields no name, the group of Classes has to be discarded. Next, try to look for potentially missing Classes (activity 1.5) by expanding categories of already identified Classes. It is stated that this is not an easy job, and that only experience makes it easier. After these activities are performed, every class has to be recorded on a CRC card, including a description of each Class (activity 1.6).

    It is important to keep in mind that this first step is very preliminary and will have to be reiterated several times; the analyst should not decide to throw a candidate class away too soon. It is better to retain it and perhaps decide to discard it later.

    Identify Responsibilities (Activity 2)

    The process of finding responsibilities starts with looking at the requirements specification (activity 2.1).
    Candidate Responsibilities are:

    Next, the candidate Responsibilities have to be assigned to the Classes they belong to (activity 2.2). This can be done by applying the following guidelines:

    Evenly distribute system intelligence. When some information is needed by more than one Object, there are three options:

      1. Create a new Object as a repository of the information (a sketch of this option follows the list)
      2. Reassign the responsibility to the Object whose principal responsibility is to maintain the information
      3. Collapse the Objects needing the information into one Object
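
    As a rough Java illustration of the first option, the sketch below creates a new object purely as a repository of shared information; all names (FontSettings, Label, Menu) are assumptions made here:

    // Guideline 1: when several objects need the same information,
    // create a new object to act as its repository.
    class FontSettings {                                // the new repository object
        private String fontName = "Serif";
        String fontName() { return fontName; }
        void setFontName(String name) { fontName = name; }
    }

    class Label {
        private final FontSettings settings;            // consults the shared repository
        Label(FontSettings s) { settings = s; }
        void render() { /* draw text using settings.fontName() */ }
    }

    class Menu {
        private final FontSettings settings;            // same repository, no duplicated copy
        Menu(FontSettings s) { settings = s; }
        void render() { /* draw entries using settings.fontName() */ }
    }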

    If it is hard to decide to which Class a Responsibility should be added, the different possibilities have to be taken into account and examined with a Walk-through; then choose the most natural or most efficient option. It is also possible to let a problem domain expert do a Walk-through. For finding additional responsibilities (activity 2.3) three kinds of relationships are especially important to examine: the "is-kind-of", "is-analogous-to" and "is-part-of" relationships discussed in Section 3.

    After this is done the identified Responsibilities can be added to the CRC cards (activity 2.4).

    Identify Collaborations (Activity 3)

    Classes can fulfill Responsibilities either by performing the necessary computations themselves or by collaborating with other Classes. To find Collaborations (activity 3.1), ask the following questions for each Class:

    For finding additional Collaborations (activity 3.2) the following relationships have to be examined; both cases are sketched in the code after this list:

      1. Composite Classes → Collaboration between Whole and Part
      2. Container-element Classes → not always Collaborations between container and element
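
    A minimal Java sketch of the difference might look as follows; all names here are assumptions made for illustration:

    // A stand-in part class for this sketch.
    interface Part { void draw(); }

    // Composite: a Picture is composed of Parts and delegates work to them,
    // so Whole-Part collaborations are frequent.
    class Picture {
        private final java.util.List<Part> parts = new java.util.ArrayList<>();
        void display() {
            for (Part p : parts) p.draw();   // the Whole collaborates with every Part
        }
    }

    // Container: a Folder merely holds titles and contents; answering a query
    // about them requires no collaboration with the elements themselves.
    class Folder {
        private final java.util.Map<String, String> documents = new java.util.HashMap<>();
        java.util.Set<String> titles() { return documents.keySet(); }  // no message sent to the contents
    }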

    Classes that do not collaborate with other classes and are not collaborated with have to be discarded (activity 3.3). Next, the identified Collaborations can be added to the CRC cards (activity 3.4).
    To check whether the chosen Collaborations are right, a 'Walk-through' is suggested (activity 3.5).
    With this step the initial exploratory phase ends, and the result is a preliminary design, which has to be turned into a solid and reliable design in the following detailed analysis phase.

    Detailed analysis phase

    Identify Hierarchies (Activity 4)

    The first step for identifying Hierarchies is drawing Hierarchy Graphs (activity 4.1) to illustrate the inheritance relationships among Classes. At this stage it has to be identified whether Classes are abstract or concrete (activity 4.2). Then Venn Diagrams can be drawn (activity 4.3) to show the responsibilities shared between Superclasses and the Subclasses that inherit from them. These Venn Diagrams show whether the Hierarchy is well chosen or not. To construct a good Class Hierarchy (activity 4.4) the following guideline has to be applied: model a 'kind-of' hierarchy. Subclasses have to be a 'kind-of' the Superclass and have to inherit all Responsibilities from their Superclasses. A sketch of such a hierarchy follows.
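
    As a rough Java illustration (all names are assumptions made here), the abstract Superclass declares the common Responsibilities and every concrete Subclass, being a 'kind-of' the Superclass, supports all of them:

    // Vehicle is abstract (never instantiated); Car and Bus are concrete.
    abstract class Vehicle {
        abstract int capacity();                       // a responsibility of every kind of Vehicle
        void describe() {                              // inherited, unchanged, by all subclasses
            System.out.println("carries " + capacity() + " passengers");
        }
    }

    class Car extends Vehicle {                        // a Car is a kind-of Vehicle...
        int capacity() { return 5; }                   // ...and supports all inherited responsibilities
    }

    class Bus extends Vehicle {
        int capacity() { return 50; }
    }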

    After the Class Hierarchy is built Contracts can be identified (activity 4.5) using the following guidelines:

    A way to apply these guidelines is to start by defining Contracts for Classes at the top of the Class Hierarchy and then to move down the Hierarchy to the Subclasses. The Contracts have to be noted on the CRC cards, where every Contract gets a unique number, and Contract cards have to be written (activity 4.6).
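
    Although Contracts in DOOS are recorded on cards rather than in code, a Java interface gives a rough feel for what a Contract names: the set of requests a client may make of any class that supports it. The sketch below is an assumption made here for illustration:

    // Contract 1: "display graphical state".
    interface Displayable {      // defined at the top of the hierarchy
        void draw();             // a client may ask the class to draw itself...
        void erase();            // ...or to remove itself from the display
    }

    class Icon implements Displayable {   // Icon is a server for Contract 1
        public void draw()  { /* render the icon */ }
        public void erase() { /* clear the icon */ }
    }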

    Identify Subsystems (Activity 5)

    Identifying Subsystems starts with drawing a complete Collaborations Graph (activity 5.1).
    Subsystems can be identified by looking for frequent and complex collaborations between strongly coupled Classes (activity 5.2). The following guidelines have to be applied for identifying Subsystems:

    After identifying the Subsystems, the patterns of Collaborations have to be simplified (activity 5.3) by applying the following guidelines:

    The Subsystems are denoted on Subsystem cards. The CRC cards have to be modified, because Collaborations between Classes have become Collaborations between Classes and Subsystems.
    The changes in Classes, Collaborations and Contracts have to be recorded both in the Hierarchy Graphs and on the CRC cards. At this stage it is again suggested to perform a 'Walk-through' (activity 5.4) to check the design. A sketch of the simplifying effect of a Subsystem follows.
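
    Subsystems in DOOS are a notational device, not a language construct, but their simplifying effect on the Collaborations Graph can be sketched in Java as a single front class hiding strongly coupled internals (all names below are assumptions made here):

    // Internals of a hypothetical Printing subsystem: strongly coupled classes.
    class Spooler { void enqueue(String doc) { /* queue the document */ } }
    class Rasterizer { byte[] toPixels(String doc) { return new byte[0]; } }

    // The subsystem's single point of contact: clients collaborate with
    // PrintingSubsystem, not with Spooler or Rasterizer directly.
    class PrintingSubsystem {
        private final Spooler spooler = new Spooler();
        private final Rasterizer rasterizer = new Rasterizer();
        void print(String doc) {
            rasterizer.toPixels(doc);   // internal collaboration stays hidden
            spooler.enqueue(doc);
        }
    }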

    Identify Protocols (Activity 6)

    The last step of the DOOS method starts with designing the protocols for each class (activity 6.1). Constructing Protocols is done by refining the Responsibilities into sets of Signatures that maximize the usefulness of classes; a sketch of such a refinement follows this subsection. The following guidelines have to be used for constructing the protocols:

    When this is done, a design specification has to be written for each Class (activity 6.2), each Subsystem (activity 6.3) and each Contract (activity 6.4).
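
    As a rough Java illustration of refining one Responsibility into a Protocol (all names and defaults are assumptions made here), the sketch below gives a general signature plus a convenience signature with a sensible default, so the Protocol stays useful to as many clients as possible:

    // Refining the responsibility "maintain the elements" into a protocol:
    // a set of method signatures.
    class DrawingProtocol {
        private final java.util.List<String> elements = new java.util.ArrayList<>();

        // The most general signature...
        void addElement(String element, int position) { elements.add(position, element); }

        // ...plus a convenience signature that defaults to appending at the end.
        void addElement(String element) { addElement(element, elements.size()); }
    }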

    Possible Application Areas Using OOP

    Ever wondered whether object orientation could live up to its promise of enhanced reusability, simplified maintenance and increased speed of development? When defining an application, you will want to spend maximum time on functionality and minimum time on the technical aspects of the application. DataFlex 3.0, the first 4GL to provide object oriented features, can be used to create virtually any type of application, simply because of the reusable modules that are so inherent in OOP.

    OOP can be applied in database applications with an RDBMS, air traffic control applications with C++ or Visual Basic, web page creation with Java or C++, expert systems utilizing Smalltalk, and a host of other applications. Why is OOP so popular? Simply because modules created in one application can be reused and modified to fit into another project, saving time and effort while reducing errors, as less source code has to be written anew.

    Comparisons with Traditional Methodologies

    An excerpt from:

    Briand, L., C. Bunse, et al. 'An experimental comparison of the maintainability of object-oriented and structured design documents', in Proc. Intl. Conf. on Software Maintenance. IEEE Computer Society Press, 1997.


    Notes: Reports on a student experiment to assess the impact of OO versus structured design (the traditional methodologies in the study guide) and "good" versus "bad" design. (It is not, however, clear that this is a 2x2 factorial design, since design quality might have different meanings depending upon the design paradigm.) The authors report that "good" OO designs were easier to understand and modify than "bad" designs. Also, "bad" structured design was easier to understand than "bad" OO design. No significant differences were detected between "good" OO and "good" structured design, or between "good" and "bad" structured design. The authors conclude that OO designs may be more vulnerable to poor quality than traditional methods.

    Some Advantages and Disadvantages

    How did OOP become the principal paradigm of developers in the 90s? Over time, OOP has been dramatically replacing classical development methodologies. Why is this? The answer may seem subjective, but it could very well lie in the fact that OOP is reusable. Reusing program modules leads to faster development and higher quality software. As mentioned in the textbook, OO software is easier to maintain because its structure is inherently decoupled. Side effects are thus further minimized, which means less frustration on the software engineer's part. OO programs are also easier to adapt and easier to scale.

    Though all the attributes mentioned earlier are very much sought after by application programmers, OOP has its drawbacks. For one, it can lead programmers away from doing a full analysis, because they can simply take a module from the vast libraries that exist in OOP. A module that did marvelously on the last project might seem suitable for the current one, yet turn out otherwise; because of the earlier success, programmers tend to take it for granted that the module will work. This is a very dangerous affair and could lead to disastrous ventures. Also, OOP can result in less documentation, since analysis tends to be neglected to some degree (this depends on the individual), and documentation most often contains the analysis findings and the more intricate parts of the software application.

    Comparison with Other Life Cycles

    According to Fichman and Kemerer,

  • We conclude that the object oriented approach…represents a radical change over process oriented methodologies such as structured analysis, but only an incremental change over data oriented methodologies such as information engineering. Process oriented methodologies focus attention away from the inherent properties of objects during the modeling process and lead to a model of the problem domain that is orthogonal to the three essentials of OO: encapsulation, classification of objects and inheritance.
  • Structured analysis takes a distinct input-process-output view of requirements: data are considered separately from the processes that transform them. Although it is important, system behavior will always play second fiddle in structured analysis. The structured analysis approach makes heavy use of functional decomposition, that is, partitioning of the data flow diagram. It is actually very hard to make comparisons, as there exist many variations of OOP and of the classical methodologies.

     

    In Summary

    Object-oriented programming offers a new and powerful model for writing computer software. Objects are "black boxes" which send and receive messages. This approach speeds the development of new programs, and, if properly used, improves the maintenance, reusability, and modifiability of software.

    OO programming requires a major shift in thinking by programmers, however. The C++ language offers an easier transition via C, but it still requires an OO design approach in order to make proper use of this technology. Smalltalk offers a pure OO environment, with more rapid development time and greater flexibility and power. Java promises much for Web-enabling OO programs.

     

    References:

    1. Eckel, B., C++ Inside and Out, McGraw-Hill, 1993.

    2. Dewhurst, S.C. and K.T. Stark, Programming in C++, Prentice-Hall, 1995.

    3. Heinze, W.J. and A.E. Anderson, Object-Oriented Programming and Design Using C++, Prentice-Hall, 1995.

    4. Rist, R.S. and R. Terwilliger, Object-Oriented Programming in Eiffel, 1995.

    5. Meyer, B., Object-Oriented Software Construction, second edition, Prentice-Hall, 1995.

    6. LaLonde, W.R. and J.R. Pugh, Programming in Smalltalk, Prentice-Hall, 1995.

    7. Klimas, E., et al., Smalltalk with Style, Prentice-Hall, 1995.

    8. www.rbsc.com/pages/ootbib.html

    9. www.toa.com

    10. http://dec.bournemouth.ac.uk/ESERG/bibliography.html

    11. www.sbu.ac.uk/~csse/metkit.html

    12. http://louis.ecs.soton.ac.uk/dsse/moops.html

    13. http://emory.com/classify.htm

    The electronic version of this document can be found at my homepage, https://members.tripod.com/~jarofclay/page6.html, after 12th March 1998.

     

     

     

     

    Part B:

    I certainly back up that statement fully. As stated in SSADM, the last phase of the methodology is maintenance and review. Maintenance means that even though the software has been delivered to the customer, perhaps even for quite some time, the maintenance crew will have to come to the rescue if there are any bugs or dissatisfactions on the part of the customer. This is very important, although software companies often overlook it. You may ask why I am so adamant about this; the answer is pretty straightforward if we think logically.

    Without reviews and maintenance, the software company's reputation could easily be tarnished. When a piece of software fails, customers expect the software company to debug it, because they themselves are not familiar with the development of the software. Also, the money that the client has paid encompasses every stage of the development of the software, inclusive of review and maintenance. If maintenance is not delivered, no other company will dare to employ this software company again to develop software.

    Secondly, even if maintenance is not stated in the written contract, it is unethical not to continue the development through to the maintenance stage. The logic behind this argument is that the team that developed the software knows best the ways and techniques that were employed in building it. If a team from another company is employed to debug the program, they might have a hard time just getting acquainted with the problematic software. This could mean that more time and money is spent on maintenance than if the program were maintained by its original creators.

     

    Question 2

    Rapid prototyping methodologies

    1.0 General view on the design process

    Various models of the engineering design process have been discussed in many publications. Generally, this process is understood as a kind of cycle of activities, executed in sequence or in parallel, usually starting with goal generation and ending with the implementation of the working solution.

    In the current project we have used the following definition of the subsequent phases of the design cycle:

    Goal generation - specification of requirements,

    Model building and validation,

    Synthesis of appropriate technical solutions,

    Computer based evaluation of the resulting system,

    Prototype (experimental) implementation of tested solutions,

    Final practical implementation.

    It is important to note that the design process has a highly iterative nature, so the subsequent phases are usually repeated as long as the actual requirements (sub-goals) are not met. In case of failures it may prove necessary to return to the preceding phases and to revise the partial solutions already obtained.

    In prototyping, requirements are identified together with the users, after which a prototype is quickly assembled. The users then try the working model and voice any corrections and amendments needed. The necessary alterations are then made. The users keep on testing the refined model until it is acceptable.

    Prototyping can thus be seen as a much-improved form of system investigation and analysis, as well as an aid to design. Users get to try out a working model, rather than an abstract description of what the system will be. As for the analysts, they can better encourage the users' participation, as well as improve their communication with the users.

    A successful system development effort must be accompanied by a good set of user requirements. However, this is not always the case: information gathering can be a laborious and tedious job, not to mention the number of errors that can creep in during this phase. Also, users are often not able to visualize the complete software.

    Prototyping can assist the user to visualize how the proposed system will work. A prototype is a working, stripped-down version of the system that includes sufficient functionality and provides a "live" setting. Prototyping can involve the addition of new capabilities at a later stage.

    The Prototyping Process

    Prototyping can be seen as an iterative process, as shown in the diagram overleaf. It has its beginnings in an initial prototype, based solely on the requirements (the deterministic stage). The prototype is built and updated fairly rapidly, usually in a matter of weeks or months depending on its size. The users work with the prototype, and refinements are made gradually as the users request them. This process continues and is repeated until the user is satisfied. Sometimes the prototype can actually be placed into production.

    CASE STUDY – Intel Corporation.

     

    One Monday in April 1970, Intel Corp, the largest chipmaker for personal computers, decided to further computerize its manufacturing operations. The option was a natural one, as the Chief Executive Officer had noticed the rapid rise of silicon over steel as the new tool of industry. He therefore instructed the MIS Manager to look at outsourcing as an alternative for the software required to integrate into the manufacturing process. The MIS Manager then put out a tender for a few renowned vendors to bid on. At long last, after many selection criteria had been taken into consideration, the MIS Manager decided to award the project to CSA for the hardware setup and to Arthur Anderson to develop the software.

    During the talks to formulate an implementation plan and to synchronize both companies' standards of implementation, a lot of matters were brought up, including which software Arthur Anderson would use, so that CSA would know which hardware to purchase. Other matters that arose were the projected cost and the methods of development of the system. Of all the methodologies considered, only prototyping sounded familiar to all parties. Prototyping would also be able to show results, and thus exhibit the progress of the development to Intel Corp, much earlier. Intel Corporation's MIS Manager, Mr. Howard Matthews, also approved of this.

    There were a few prototyping tools open for consideration. After much work, three were short-listed for final consideration. They were:

    Mapie Works

    Congenial Software

    Skunk Prototype Model

    All three were actually very accomplished tools, and each had its own strengths and merits. Unfortunately, all good products have some drawbacks. Mapie Works is excellent for developing live models, and designing models in it is simple, as it has the most user-friendly GUI (graphical user interface) of the three reviewed tools. Unfortunately, the end product is very hungry for resources and will not run on anything slower than a Pentium II 300 MHz with 128 MB of RAM. It also requires a specialized 3D graphics card with an OpenGL chipset. This really set the model back.

    Congenial Software, on the other hand, is not as hardware-demanding as Mapie Works' model. Unfortunately, its GUI is a big letdown and its processing speed is really down the drain. It is not a big company's idea of enhancement.

    The Skunk Prototype Model was by far the most appealing. It does not require as many resources to operate as Mapie Works, and it is fairly easy for Intel's employees to learn. Development cost was also kept to a bare minimum, because the technology implemented in the system is fairly popular and so is readily available in the market. Skunk was finally the winner.

    Although the development phase generally went by without a hitch, some employees, especially those who had worked with Intel for a long time, refused to co-operate with the third-party companies in the fact-finding phase for fear of their jobs being replaced by the software. This contributed to some delays. Only after being reprimanded by their superiors did they agree to co-operate. There were also some power failures during the final day of testing, which led to a few critical startup files being corrupted.

    As prototyping is about fulfilling the users' needs as fully as possible, further refining was done to the software to comply with Intel's specific requirements. The system was finally implemented on 31st December 1972, the start of a new financial year. The project took more than 1½ years to complete, because Intel was a very fussy client: Intel only wanted the best, as they were preparing to set new standards for PCs around the world. As Intel paid dearly for the software and all the development costs, they demanded the best maintenance and support. CSA and Arthur Anderson have been under an Agreement of Maintenance (AoM) for 10 years. Besides that, further refinements were made as the business environment changed over time; this too was stated in the AoM.

    *Note: As there were no actual case studies available, the scenario stated above is purely the imagination of the author. It was created merely to illustrate how a system might be developed via prototyping.

     

     
