Suggest three usability measures that can be directly used to produce a practical evaluation of a system. Keep the goals of efficiency and satisfaction in mind with these measures.
Relationships between attributes and customer preferences are discovered in:
a. conjoint analysis
b. cluster analysis
c. perceptual maps
d. gap analysis

Customer: A customer is a person or an entity who purchases goods or services from another entity. Customers are crucial because they drive revenue; organisations cannot survive without them.
Sensa Shampoo uses advertisements that encourage consumers to "use Sensa if you want hair that makes heads turn." The message in the ads for Sensa indicates it is using _____ to attract consumers.
a) benefit segmentation
b) geodemographic segmentation
c) demographic segmentation
d) psychographic positioning
e) volume segmentation

Consumer Psychology: Consumer psychology is a field of study that applies knowledge about the mind and behavior to understand the various aspects of the consumption process. For example, a consumer psychologist might be interested in examining the various factors that lead to purchase behavior.
Answer to question 1
The Need for a Foundation for Evaluating Usability Evaluation Methods
Among interactive system developers and users there is now much agreement that usability is an essential quality of software systems. Among the HCI and usability communities, there is also much agreement that:

• usability is seated in the interaction design,
• an iterative, evaluation-centered process is essential for developing high usability in interaction designs,
• usability, or at least usability indicators, can be viewed as quantitative and measurable, and
• a class of usability techniques called UEMs has emerged to carry out essential usability evaluation and measurement activities.

Beyond this level of agreement, however, there are many ways to evaluate the usability of an interaction design (i.e., many UEMs), and there is much room for disagreement and discussion about the relative merits of the various UEMs. As new methods are introduced, the variety of alternative approaches and a general lack of understanding of the capabilities and limitations of each have intensified the need for practitioners and others to determine which methods are more effective, in what ways, and for what purposes. In reality, researchers find it difficult to reliably compare UEMs because of a lack of:

• standard criteria for comparison,
• standard definitions, measures, and metrics on which to base the criteria, and
• stable, standard processes for UEM evaluation and comparison.

Lund (1998) noted the need for a standardized set of usability metrics, citing the difficulty in comparing various UEMs and measures of usability effectiveness. As Lund points out, there is no single standard for direct comparison, resulting in a multiplicity of different measures used in the studies, capturing different data defined in different ways. Consequently, very few studies clearly identify the target criteria against which to measure the success of a UEM being examined. As a result, the body of literature reporting UEM comparison studies does not support accurate or meaningful assessment of, or comparison among, UEMs. Some have tried to help by performing UEM comparison studies, but many such studies that have been reported were incomplete or otherwise fell short of the kind of scientific contribution needed. Although these shortcomings often stemmed from practical constraints, they have led to substantial critical discussion in the HCI literature (Gray & Salzman, 1998; Olson & Moran, 1998). Accordingly, the paper presents a practical discussion of factors, comparison criteria, and UEM performance measures that are interesting and useful in studies comparing UEMs. The authors attempt to highlight major considerations and concepts, offering some operational definitions and exposing the hazards of some approaches proposed or reported in the literature. In demonstrating the importance of developing appropriate UEM evaluation criteria, they present some different possible measures of effectiveness, select and review studies that use two of the more popular measures, and consider the trade-offs among different criterion definitions. This work highlights some of the specific challenges that researchers and practitioners face when comparing UEMs and provides a point of departure for further discussion and refinement of the principles and techniques used to approach UEM evaluation and comparison.

(Source: "Usability Evaluation Method Evaluation Criteria"; submitted for publication, copyright © 2000; do not copy or cite without permission.)
Types of Evaluation and Types of UEMs

In order to understand UEMs and their evaluation, one must understand evaluation in the context of usability. We have adopted Scriven's (1967) distinction between two basic approaches to evaluation, based on the evaluation objective. Formative evaluation is evaluation done during development to improve a design, and summative evaluation is evaluation done after development to assess a design (absolute or comparative). Phrasing Scriven's definitions in terms of usability, formative evaluation is used to find usability problems to fix so that an interaction design can be improved, while summative evaluation is used to assess and/or compare the level of usability achieved in an interaction design. UEMs are used to perform formative, not summative, usability evaluation of interaction designs. Formal experimental design, including tests for statistical significance, is used to perform summative evaluation and is often used to compare design factors in a way that can add to the accumulated knowledge within the field of HCI. Sometimes formative usability evaluation can also have a component with a summative flavor. Some UEMs support collection of quantitative usability data in addition to qualitative data (e.g., usability problem lists). For example, measuring user task performance quantitatively in terms of time-on-task and error rates adds a summative flavor to the formative process because it is used to assess the level of usability. Because they are not tested for statistical significance, these results do not contribute (directly) to the science of usability, but they are valuable usability engineering measures within a development project.
Usability engineers, managers, and marketing people use quantitative usability data to identify convergence of a design to an acceptable level of usability, to know when to stop iterating the development process, and to attain a competitive edge in marketing a product. A somewhat orthogonal perspective is used to distinguish evaluation methods in terms of how evaluation is done. Hix and Hartson (1993) describe two kinds of evaluation: analytic and empirical. Analytic evaluation is based on analysis of the characteristics of a design, through examination of a design representation, prototype, or implementation. Empirical evaluation is based on observation of the performance of the design in use. Perhaps Scriven (1967), as described by Carroll, Singley, and Rosson (1992), gets at the essence of the difference better by calling these types of evaluation, respectively, intrinsic evaluation and pay-off evaluation. Intrinsic evaluation is accomplished by way of an examination and analysis of the attributes of a design without actually putting the design to work, whereas pay-off evaluation is evaluation situated in observed usage.
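As a concrete illustration of the quantitative measures mentioned above, here is a minimal sketch of three practical usability measures that map to the efficiency and satisfaction goals in the question: time-on-task (efficiency), error rate (efficiency/effectiveness), and a post-session satisfaction rating. All session data and function names below are invented for illustration, not taken from the source paper.

```python
# Hypothetical sketch: three practical usability measures computed from
# mock test-session data (all numbers invented for illustration).

def time_on_task(start_s, end_s):
    """Elapsed task time in seconds (efficiency measure)."""
    return end_s - start_s

def error_rate(errors, total_actions):
    """Share of user actions that were errors."""
    return errors / total_actions

def mean_satisfaction(ratings):
    """Average of 1-5 post-session questionnaire ratings."""
    return sum(ratings) / len(ratings)

# Mock data for three test sessions of the same task.
sessions = [
    {"start": 0.0, "end": 42.0, "errors": 2, "actions": 25, "rating": 4},
    {"start": 0.0, "end": 58.5, "errors": 5, "actions": 30, "rating": 3},
    {"start": 0.0, "end": 35.0, "errors": 1, "actions": 22, "rating": 5},
]

times = [time_on_task(s["start"], s["end"]) for s in sessions]
rates = [error_rate(s["errors"], s["actions"]) for s in sessions]
sat = mean_satisfaction([s["rating"] for s in sessions])

print(f"mean time-on-task: {sum(times) / len(times):.1f} s")
print(f"mean error rate:   {sum(rates) / len(rates):.2%}")
print(f"mean satisfaction: {sat:.2f} / 5")
```

Tracked across design iterations, such numbers can show the convergence toward an acceptable usability level that the excerpt describes.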
Answer to question 2
The correct answer is option A. Conjoint analysis is a survey-based product and pricing research method that reveals consumer preferences and uses that knowledge to help select product features, analyse price sensitivity, forecast market shares, and predict acceptance of new products or services. Conjoint analysis is widely utilised across industries and product categories, including consumer goods, electrical goods, life insurance, retirement homes, luxury goods, and air travel. It can be used in almost any situation that involves determining what type of product buyers are likely to purchase.

Reasons the other answers are incorrect:

Option B: It is incorrect because cluster analysis is a statistical method for discovering and grouping related data points in large datasets without regard for an outcome variable. Clustering organises data into structures that are easier to understand and work with; it does not directly relate attributes to preferences.

Option C: It is incorrect because perceptual mapping is a technique that market researchers and businesses use to illustrate and analyse customers' opinions and feelings about a certain brand or product. These informative charts help businesses evaluate their brand's competitive position, compare the features customers consider essential, and uncover gaps in their marketplaces.

Option D: It is incorrect because a gap analysis allows a corporation to recognise its current state and compare it to its intended state by assessing time, money, and labour. By evaluating these gaps, the management team can design an action plan to drive the organisation ahead and close performance gaps. To perform a gap analysis, you first examine your existing position, decide your desired state, and identify the gap between the two; then you can devise a strategy to close it.
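To make the core idea of conjoint analysis concrete, here is a toy sketch with entirely invented profiles and ratings. In a balanced design, averaging respondents' ratings over all profiles containing a given attribute level gives a crude approximation of that level's part-worth utility; real conjoint studies estimate part-worths with regression-based models instead.

```python
# Toy conjoint-style illustration (all data hypothetical). Each profile is
# a combination of attribute levels; each rating is a preference score.
# Averaging ratings per attribute level crudely approximates part-worth
# utilities when the design is balanced (here, a full 2x2 factorial).
from collections import defaultdict

profiles = [
    ({"price": "low",  "brand": "A"}, 8),
    ({"price": "low",  "brand": "B"}, 6),
    ({"price": "high", "brand": "A"}, 5),
    ({"price": "high", "brand": "B"}, 3),
]

sums = defaultdict(lambda: [0.0, 0])  # (attribute, level) -> [total, count]
for attrs, rating in profiles:
    for attribute, level in attrs.items():
        entry = sums[(attribute, level)]
        entry[0] += rating
        entry[1] += 1

part_worths = {key: total / n for key, (total, n) in sums.items()}
for (attribute, level), worth in sorted(part_worths.items()):
    print(f"{attribute}={level}: {worth:.1f}")
```

In this made-up data, "low price" earns a higher average than "high price", so price drives preference more than brand, which is exactly the attribute-preference relationship the question asks about.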
Answer to question 3
Answer and Explanation: The correct answer is option A: benefit segmentation. Based on the given information, the advertisements used by Sensa Shampoo highlight a benefit of the product (i.e., obtaining hair that makes heads turn), which is the defining characteristic of benefit segmentation. The other options are incorrect because the advertisements do not rely on demographic characteristics, geodemographic characteristics, the volume of demand, or psychographic characteristics.