Software Engineering

Written Test #1 on Code Complete


1. _T_ (T or F)   Debugging aids should be incorporated at an early stage of coding. In C++, you can use preprocessor features such as #if defined( ) to conveniently control the level of debugging aids used, or to remove the aids entirely when producing optimized executables.


2. _F_ (T or F) In general, a high-quality module should exhibit strong coupling and loose cohesion.


3. _F_ (T or F)  When using pseudocode programming, we should carefully document the preconditions and postconditions of a routine. However, in defensive programming we don't want to assert these preconditions and postconditions in the routine, since doing so always slows down execution severely.


4. _F_ (T or F)  Testing can be conducted alongside code development or after coding. But it is not very useful to have test cases before coding starts, since they do not help requirements analysis, specification, or coding.


5. _T_ (T or F) In general, a high-quality module should be constructed so that a user can use it without much knowledge of the complex data structures or implementation logic inside the module.


6. _F_ (T or F)  Most of the time in industry, developers can exhaustively test all possible inputs to a routine or program to verify that the implementation is correct in all possible cases.


7. _F_ (T or F)  Both collaborative development of code (such as code reviews and inspections between peer programmers) and testing can help find faults and errors hidden in the code. However, testing is far more economical in cost and far more effective at finding most errors.


8. _T_ (T or F)  Structured basis testing assures that each statement of the code is executed at least once by some test case. For a code segment such as

 if (x > 1 || y < 1)  z--;

we need a case that runs through the segment with (x > 1) being true, a case that runs through the segment with (y < 1) being true, and a case that runs through the segment with the whole condition (x > 1 || y < 1) being false.



9. _T_ (T or F)  In data-flow testing, we are concerned with the states of data and program execution under different trajectories of data-state changes. A single variable typically goes from Defined to Used for a while and may be Killed in the end. Going from Killed to Used, or from Killed to Killed again, implies serious misuse of the data in the code.


10. _T_ (T or F)  Structured basis testing already assures that all statements defining data (i.e., declarations and initializations of variables) are executed at least once by some test case, since its goal is that every statement in the program is exercised at least once. In data-flow testing, we additionally want test cases that run through all the possible Defined-Used combinations of the key variables.


11. _F_ (T or F)  When using variables, long spans between references to a particular variable are a good programming practice that reinforces the programmer's understanding of the variable's role in the program.


12. _T_ (T or F) It is good practice to keep the declaration of variables close to their usage and to initialize variables when they are declared.


13. As his program evolves, Larry needs to enhance the code of a routine A and then the code of another routine B. It turns out that A and B have a lot of common functionality, and thus Larry decides to copy a big chunk of existing code in A into B to provide the functionality in B. Is this a good way to structure the code? Why or why not? If it is not good, what would you do alternatively?  This is not a good idea. First, copy-and-paste duplication is error prone: any later fix or change to the shared logic must now be made in two places, and the copies will inevitably drift apart. Second, it degrades cohesion: routine B now does more than the one function it was designed for, so it no longer has functional cohesion. A better alternative is to extract the shared code into a new routine that both A and B call; this removes the redundancy and reduces the complexity of the code currently in A and B. Larry could also redesign A and B to combine them into one routine, but he must do so carefully, considering the repercussions for the rest of the code and the changes that would have to be made to the functionality.



14. Briefly explain your understanding of cohesion and coupling in the context of the construction and composition of routines, packages, and modules.  Cohesion describes how closely the operations within a routine are related; it is strongest (functional cohesion) when a routine performs only the one operation it is defined to handle. Weaker kinds of cohesion exist: sequential cohesion (operations must occur in order because the output of one feeds the next), communicational cohesion (operations share the same data but are otherwise unrelated), and temporal cohesion (operations are combined only because they are all done at the same time). Coupling describes the connections between different parts of a program. To avoid unnecessary complexity, modules should be connected by a minimal number of simple connections; such loose coupling makes the code easier to test and modify later on.



15. Consider the following C++  function:

bool func(int x, int y) { if (x > 0 && x/y > 1) return 1; else return 0; }

What do you think should be the preconditions of this function?  Could the function trust all the input data without checking? Why or why not?  If not, apply defensive programming here to avoid potential problems.  Precondition: y != 0, since x/y with y equal to zero causes a division-by-zero error. Because the parameters are ints, fractional or out-of-range arguments are truncated or rejected before the function ever sees them, so those are the caller's concern. The function cannot trust its input, since a caller could pass zero for y. Defensive programming: either assert that y != 0, or add an explicit if check that handles y == 0 gracefully (for example, by returning false or signaling an error) before evaluating x/y.



16. Consider the situation when using C++: (i) you are implementing a linked list class for storing information in linked-list objects and (ii) you want to provide a copy constructor for copying the contents of a linked-list object to initialize the contents of a newly created linked-list object. You can use the default shallow copy of C++ or implement deep copy into your copy constructor. What is the difference between these two options? Which one do you prefer? Why?  A shallow copy (the C++ default) is a memberwise copy of each data member, so the head pointer of the new list points at the same nodes as the original: the two objects share one set of nodes, and modifying one list or destroying it corrupts the other (for example, double deletion in the destructors). A deep copy allocates new nodes and duplicates their contents, so the new list is fully independent of the original. The shallow copy is faster and requires no code, but for a linked list that owns its nodes it is error prone; I prefer implementing a deep copy in the copy constructor because it is far safer, even though it costs more time and memory.