Software Engineering A Practitioner's Approach 7th Edition Roger Pressman - Solutions
20.17. In response to its success, YourCornerPharmacy.com (Problem 20.11) has implemented a special server solely to handle prescription refills. On average, 1000 concurrent users submit a refill request once every two minutes. The WebApp downloads a 500-byte block of data in response. What is the
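The question above is cut off, but its setup invites a quick throughput estimate. Assuming the truncated sentence asks for the approximate server demand (an assumption, since the question text is incomplete), a minimal sketch of the arithmetic:

```python
# Throughput estimate for the refill server (assumes the truncated
# question asks for approximate request and byte throughput).
concurrent_users = 1000      # users submitting refill requests
interval_s = 2 * 60          # each user submits once every two minutes
response_bytes = 500         # data block returned per request

requests_per_second = concurrent_users / interval_s          # ~8.33 req/s
throughput_bytes_per_s = requests_per_second * response_bytes

print(f"{requests_per_second:.2f} requests/s")       # 8.33 requests/s
print(f"{throughput_bytes_per_s:.0f} bytes/s")       # 4167 bytes/s
```

The key observation is that 1000 users each submitting once per 120 seconds is equivalent, on average, to a little over 8 requests arriving per second.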
20.16. YourCornerPharmacy.com (Problem 20.11) has become wildly successful, and the number of users has increased dramatically in the first two months of operation. Draw a graph that depicts probable response time as a function of number of users for a fixed set of server-side resources. Label the
20.15. What is the objective of security testing? Who performs this testing activity?
20.14. Is it possible to test every configuration that a WebApp is likely to encounter on the server side? On the client side? If it is not, how do you select a meaningful set of configuration tests?
20.13. What is the difference between testing for navigation syntax and navigation semantics?
20.12. Assume that you have implemented a drug interaction checking function for YourCornerPharmacy.com (Problem 20.11). Discuss the types of component-level tests that would have to be conducted to ensure that this function works properly. [Note: A database would have to be used to implement this
20.11. Assume that you are developing an online pharmacy (YourCornerPharmacy.com) that caters to senior citizens. The pharmacy provides typical functions, but also maintains a database for each customer so that it can provide drug information and warn of potential drug interactions. Discuss any
20.10. What is the difference between testing that is associated with interface mechanisms and testing that addresses interface semantics?
20.9. Describe the steps associated with database testing for a WebApp. Is database testing predominantly a client-side or server-side activity?
20.8. Is content testing really testing in a conventional sense? Explain.
20.7. Is it fair to say that the overall WebApp testing strategy begins with user-visible elements and moves toward technology elements? Are there exceptions to this strategy?
20.6. Is it always necessary to develop a formal written test plan? Explain.
20.5. What elements of the WebApp can be “unit tested”? What types of tests must be conducted only after the WebApp elements are integrated?
20.4. Which errors tend to be more serious—client-side errors or server-side errors? Why?
20.3. Compatibility is an important quality dimension. What must be tested to ensure that compatibility exists for a WebApp?
20.2. In your own words, discuss the objectives of testing in a WebApp context.
20.1. Are there any situations in which WebApp testing should be totally disregarded?
Are certain WebApp functions (e.g., compute intensive functionality, data streaming capabilities) discontinued as capacity reaches the 80 or 90 percent level?
If the system does fail, how long will it take to come back online?
What values of N, T, and D force the server environment to fail? How does failure manifest itself? Are automated notifications sent to technical support staff at the server site?
Is data integrity affected as capacity is exceeded?
Are transactions lost as capacity is exceeded?
Does the server queue resource requests and empty the queue once capacity demands diminish?
Does server software generate “server not available” messages? More generally, are users aware that they cannot reach the server?
Does the system degrade “gently,” or does the server shut down as capacity is exceeded?
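The capacity questions above (graceful degradation, queuing, "server not available" responses) can be made concrete with a toy model. The class below is a hypothetical simulation, not code from the text: it queues requests once per-interval capacity is exceeded and sheds load only when the queue is also full, illustrating "gentle" degradation rather than a hard shutdown.

```python
from collections import deque

class BoundedServer:
    """Toy model of a server that queues excess requests and sheds
    load once the queue fills (all numbers are illustrative)."""

    def __init__(self, capacity: int, queue_limit: int):
        self.capacity = capacity    # requests served per service interval
        self.queue_limit = queue_limit
        self.queue = deque()
        self.served = 0
        self.rejected = 0           # "server not available" responses

    def submit(self, n_requests: int) -> None:
        for _ in range(n_requests):
            if len(self.queue) < self.queue_limit:
                self.queue.append("req")   # degrade gently: queue it
            else:
                self.rejected += 1         # shed load, don't crash

    def tick(self) -> None:
        """One service interval: drain up to `capacity` requests."""
        for _ in range(min(self.capacity, len(self.queue))):
            self.queue.popleft()
            self.served += 1

server = BoundedServer(capacity=10, queue_limit=25)
server.submit(40)    # burst well above capacity
server.tick()        # one interval drains 10 queued requests
print(server.served, len(server.queue), server.rejected)  # 10 15 15
```

A load test against a real server asks the same questions the checklist does: how many requests were served, how many waited, and how many were turned away as demand exceeded capacity.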
If proxy servers are used, have differences in their configuration been addressed with on-site testing?
Have system administrator errors been examined for their effect on WebApp operations?
Do server-side WebApp scripts execute properly?
Is the WebApp properly integrated with database software? Is the WebApp sensitive to different versions of database software?
Has the WebApp been tested with the distributed server configuration (if one exists) that has been chosen?
Do system security measures (e.g., firewalls or encryption) allow the WebApp to execute and service users without interference or performance degradation?
Are system files, directories, and related system data created correctly when the WebApp is operational?
Is the WebApp fully compatible with the server OS?
Does the user understand his location within the content architecture as the NSU is executed?
If a node within an NSU is reached from some external source, is it possible to proceed to the next node on the navigation path? Is it possible to return to the previous node on the navigation path?
Is every node reachable from the site map? Are node names meaningful to end users?
Is there a way to discontinue the navigation before all nodes have been reached, but then return to where the navigation was discontinued and proceed from there?
If a function is executed at a node and an error in function processing occurs, can the NSU be completed?
If a function is to be executed at a node and the user chooses not to provide input, can the remainder of the NSU be completed?
Do mechanisms for navigation within a large navigation node (i.e., a long Web page) work properly?
Is there a mechanism (other than the browser “back” arrow) for returning to the preceding navigation node and to the beginning of the navigation path?
If guidance is provided by the user interface to assist in navigation, are directions correct and understandable as navigation proceeds?
If the NSU can be achieved using more than one navigation path, has every relevant path been tested?
Is every navigation node (defined for an NSU) reachable within the context of the navigation paths defined for the NSU?
Is the NSU achieved in its entirety without error?
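One item in the checklist above, "is every node reachable from the site map?", is mechanically checkable. The sketch below treats the site map as a directed graph and reports unreachable nodes via breadth-first search; the graph and page names are invented for illustration.

```python
from collections import deque

def unreachable_nodes(site_map: dict, start: str) -> set:
    """Return nodes that cannot be reached from `start` by following
    links -- a direct check for site-map reachability."""
    seen = {start}
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        for nxt in site_map.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return set(site_map) - seen

# Hypothetical site map: "checkout" has no inbound link, so it is
# unreachable from the home page even though it links outward.
site_map = {
    "home":     ["catalog", "help"],
    "catalog":  ["item", "home"],
    "item":     ["catalog"],
    "help":     [],
    "checkout": ["home"],
}
print(unreachable_nodes(site_map, "home"))  # {'checkout'}
```

The same traversal, seeded from each entry point, also answers whether every node defined for an NSU is reachable within that NSU's navigation paths.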
5. Develop a mechanism for assessing the usability of the WebApp.
4. Instrument participants’ interaction with the WebApp while testing is conducted.
3. Select participants who will conduct the tests.
2. Design tests that will enable each goal to be evaluated.
1. Define a set of usability testing categories and identify goals for each.
Test to determine whether the WebApp can recall shopping cart contents at some future date (assuming that no purchase was made).
Test to determine the persistence of shopping cart contents (this should be specified as part of customer requirements).
Test to determine whether a purchase empties the cart of its contents.
Test proper deletion of an item from the shopping cart.
Test a “check out” request for an empty shopping cart.
Boundary-test (Chapter 18) the minimum and maximum number of items that can be placed in the shopping cart.
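The shopping-cart tests listed above translate directly into component-level unit tests. The `ShoppingCart` class below is a minimal stand-in invented for illustration (a real test suite would target the WebApp's actual cart component); the assertions mirror the checklist: empty-cart checkout, item deletion, purchase emptying the cart, and the maximum-item boundary.

```python
class ShoppingCart:
    """Minimal cart model invented for illustration."""
    MAX_ITEMS = 50   # assumed limit; the real bound comes from requirements

    def __init__(self):
        self.items = []

    def add(self, item):
        if len(self.items) >= self.MAX_ITEMS:
            raise ValueError("cart full")
        self.items.append(item)

    def remove(self, item):
        self.items.remove(item)

    def check_out(self):
        if not self.items:
            raise ValueError("cannot check out an empty cart")
        purchased, self.items = self.items, []   # purchase empties the cart
        return purchased

# Tests mirroring the checklist above
cart = ShoppingCart()
try:
    cart.check_out()                 # checkout of an empty cart must fail
    assert False, "empty-cart checkout was accepted"
except ValueError:
    pass

cart.add("aspirin")
cart.add("bandages")
cart.remove("aspirin")               # proper deletion of an item
assert cart.items == ["bandages"]

assert cart.check_out() == ["bandages"]
assert cart.items == []              # purchase empties the cart

for i in range(ShoppingCart.MAX_ITEMS):
    cart.add(i)                      # boundary: maximum number of items
try:
    cart.add("one too many")
    assert False, "cart accepted more than MAX_ITEMS"
except ValueError:
    pass
```

Persistence tests (recalling cart contents at a future date) would additionally exercise the session or database layer rather than an in-memory object.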
Does the aesthetic style of the content conflict with the aesthetic style of the interface?
Does the content contain internal links that supplement existing content? Are the links correct?
Does the content infringe on existing copyrights or trademarks?
Is the content offensive, misleading, or does it open the door to litigation?
Is the information presented consistent internally and consistent with information presented in other content objects?
Have proper references been provided for all information derived from other sources?
Can information embedded within a content object be found easily?
Is the layout of the content object easy for the user to understand?
Is the information concise and to the point?
Is the information factually accurate?
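Most content tests above require human judgment, but one, "does the content contain internal links that supplement existing content? Are the links correct?", can be partly automated. The sketch below checks only in-page anchor links in an HTML content object; the page fragment is hypothetical, and a real content test would also verify cross-page links.

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect in-page links (href="#...") and anchor ids so broken
    internal links can be reported mechanically."""

    def __init__(self):
        super().__init__()
        self.internal_links = set()
        self.anchor_ids = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("href", "").startswith("#"):
            self.internal_links.add(attrs["href"][1:])
        if "id" in attrs:
            self.anchor_ids.add(attrs["id"])

def broken_internal_links(html: str) -> set:
    auditor = LinkAuditor()
    auditor.feed(html)
    return auditor.internal_links - auditor.anchor_ids

# Hypothetical content object: the "#refills" link has no target.
page = ('<h2 id="dosage">Dosage</h2>'
        '<a href="#dosage">see dosage</a>'
        '<a href="#refills">refills</a>')
print(broken_internal_links(page))  # {'refills'}
```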
8. Security tests are conducted in an attempt to exploit vulnerabilities in the WebApp or within its environment.
9. Performance tests are conducted.
10. The WebApp is tested by a controlled and monitored population of end users; the results of their interaction with the system are evaluated for
7. The WebApp is implemented in a variety of different environmental configurations and is tested for compatibility with each configuration.
6. Navigation throughout the architecture is tested.
5. Functional components are unit tested.
4. The user interface is tested to uncover errors in presentation and/or navigation mechanics.
3. The design model for the WebApp is reviewed to uncover navigation errors.
2. The interface model is reviewed to ensure that all use cases can be accommodated.
1. The content model for the WebApp is reviewed to uncover errors.
5. Some errors are due to the static operating environment (i.e., the specific configuration in which testing is conducted), while others are attributable to the dynamic operating environment (i.e., instantaneous resource loading or time-related errors).
4. Because WebApps reside within a client-server architecture, errors can be difficult to trace across three architectural layers: the client, the server, or the network itself.
3. Although some errors are the result of incorrect design or improper HTML(or other programming language) coding, many errors can be traced to the WebApp configuration.
2. Because a WebApp is implemented in a number of different configurations and within different environments, it may be difficult or impossible to reproduce an error outside the environment in which the error was originally encountered.
1. Because many types of WebApp tests uncover problems that are first evidenced on the client side (i.e., via an interface implemented on a specific browser or a personal communication device), you often see a symptom of the error, not the error itself.
Security is tested by assessing potential vulnerabilities and attempting to exploit each. Any successful penetration attempt is deemed a security failure.
Interoperability is tested to ensure that the WebApp properly interfaces with other applications and/or databases.
Compatibility is tested by executing the WebApp in a variety of different host configurations on both the client and server sides. The intent is to find errors that are specific to a unique host configuration.
Performance is tested under a variety of operating conditions, configurations, and loading to ensure that the system is responsive to user interaction and handles extreme loading without unacceptable operational degradation.
Navigability is tested to ensure that all navigation syntax and semantics are exercised to uncover any navigation errors (e.g., dead links, improper links, erroneous links).
Usability is tested to ensure that each category of user is supported by the interface and can learn and apply all required navigation syntax and semantics.
Structure is assessed to ensure that it properly delivers WebApp content and function, that it is extensible, and that it can be supported as new content or functionality is added.
Function is tested to uncover errors that indicate lack of conformance to customer requirements. Each WebApp function is assessed for correctness, stability, and general conformance to appropriate implementation standards (e.g., Java or AJAX language standards).
Content is evaluated at both a syntactic and semantic level. At the syntactic level, spelling, punctuation, and grammar are assessed for text-based documents. At a semantic level, correctness (of information presented), consistency (across the entire content object and related objects), and lack of
19.8. Derive four additional tests using random testing and partitioning methods as well as multiple class testing and tests derived from the behavioral model for the banking application presented in Sections 19.5 and 19.6.
19.7. Apply multiple class testing and tests derived from the behavioral model for the SafeHome design.
19.6. Apply random testing and partitioning to three classes defined in the design for the SafeHome system. Produce test cases that indicate the operation sequences that will be invoked.
19.5. What is the difference between thread-based and use-based strategies for integration testing? How does cluster testing fit in?
19.4. Derive a set of CRC index cards for SafeHome, and conduct the steps noted in Section 19.2.2 to determine if inconsistencies exist.
19.3. Why should “testing” begin with object-oriented analysis and design?
19.2. Why do we have to retest subclasses that are instantiated from an existing class, if the existing class has already been thoroughly tested? Can we use the test-case design for the existing class?
19.1. In your own words, describe why the class is the smallest reasonable unit for testing within an OO system.
3. A list of testing steps should be developed for each test and should contain:
a. A list of specified states for the class that is to be tested
b. A list of messages and operations that will be exercised as a consequence of the test
c. A list of exceptions that may occur as the class is tested
d. A
2. The purpose of the test should be stated.
1. Each test case should be uniquely identified and explicitly associated with the class to be tested.
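The guidelines above describe the record a class-level test case should carry. As a sketch, they can be captured in a small data structure; the field and example names below are illustrative, not from the text.

```python
from dataclasses import dataclass, field

@dataclass
class ClassTestCase:
    """One record per the guidelines above: a unique id tied to the
    class under test, a stated purpose, and the listed testing steps."""
    test_id: str                   # unique, e.g. "Account-TC-01"
    class_under_test: str
    purpose: str
    specified_states: list = field(default_factory=list)
    messages_and_operations: list = field(default_factory=list)
    expected_exceptions: list = field(default_factory=list)

# Hypothetical example for an Account class
tc = ClassTestCase(
    test_id="Account-TC-01",
    class_under_test="Account",
    purpose="verify withdraw() rejects amounts above the balance",
    specified_states=["open", "overdrawn"],
    messages_and_operations=["deposit(100)", "withdraw(150)"],
    expected_exceptions=["InsufficientFundsError"],
)
print(tc.test_id, tc.class_under_test)
```

Recording test cases this way keeps the association between test and class explicit, which is exactly what guideline 1 asks for.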
3. The behavior of the system or its classes may be improperly characterized to accommodate the extraneous attribute.