Evaluation Methods in Interaction Design
Today's topic is evaluation methods in interaction design. The evaluation methods described in this article so far have involved interaction with, or direct observation of, users. Sometimes, however, it is not practical to involve users in an evaluation: they may not be available, there may be insufficient time, or it may be difficult to recruit suitable people. In such circumstances, other people, often referred to as experts or researchers, can provide feedback. These are people knowledgeable about both interaction design and the needs and typical behavior of users.
Heuristic evaluation is a process where experts use rules of thumb to measure the usability of user interfaces in independent walkthroughs and report issues. Evaluators use established heuristics (e.g., Nielsen-Molich’s) and reveal insights that can help design teams enhance product usability from early in development.
There are ten usability heuristic guidelines: Jakob Nielsen’s 10 general principles for interaction design. They are called “heuristics” because they are broad rules of thumb rather than specific usability guidelines.
#1: Visibility of system status
The design should always keep users informed about what is going on, through appropriate feedback within a reasonable amount of time.
#2: Match between system and the real world
The design should speak the users’ language. Use words, phrases, and concepts familiar to the user, rather than internal jargon. Follow real-world conventions, making information appear in a natural and logical order.
#3: User control and freedom
Users often perform actions by mistake. They need a clearly marked “emergency exit” to leave the unwanted action without having to go through an extended process.
#4: Consistency and standards
Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform and industry conventions.
#5: Error prevention
Good error messages are important, but the best designs carefully prevent problems from occurring in the first place. Either eliminate error-prone conditions, or check for them and present users with a confirmation option before they commit to the action.
#6: Recognition rather than recall
Minimize the user’s memory load by making elements, actions, and options visible. The user should not have to remember information from one part of the interface to another. Information required to use the design (e.g. field labels or menu items) should be visible or easily retrievable when needed.
#7: Flexibility and efficiency of use
Shortcuts — hidden from novice users — may speed up the interaction for the expert user such that the design can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
#8: Aesthetic and minimalist design
Interfaces should not contain information which is irrelevant or rarely needed. Every extra unit of information in an interface competes with the relevant units of information and diminishes their relative visibility.
#9: Help users recognize, diagnose, and recover from errors
Error messages should be expressed in plain language (no error codes), precisely indicate the problem, and constructively suggest a solution.
#10: Help and documentation
It’s best if the system doesn’t need any additional explanation. However, it may be necessary to provide documentation to help users understand how to complete their tasks.
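The output of a heuristic evaluation is typically a list of issues, each mapped to the heuristic it violates and given a severity rating. As a minimal sketch, such findings can be recorded as structured data; the field names, example issues, and 0–4 severity scale below are illustrative assumptions, not a standard format:

```python
# Minimal sketch of recording heuristic-evaluation findings.
# Field names, example issues, and the 0-4 severity scale are
# illustrative assumptions, not a standard reporting format.
from dataclasses import dataclass

@dataclass
class Finding:
    heuristic: int      # 1-10, index into Nielsen's heuristics
    location: str       # where in the UI the issue was observed
    description: str
    severity: int       # 0 = not a problem ... 4 = usability catastrophe

findings = [
    Finding(1, "file upload dialog", "No progress indicator during upload", 3),
    Finding(5, "checkout form", "Date field accepts past delivery dates", 4),
    Finding(9, "login page", "Error shows raw code 'ERR_AUTH_401'", 2),
]

# Prioritise fixes: most severe issues first.
for f in sorted(findings, key=lambda f: -f.severity):
    print(f"Heuristic #{f.heuristic} ({f.location}): {f.description}")
```

Sorting by severity gives design teams a simple triage order when several evaluators' reports are merged.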
Note from Jakob
I originally developed the heuristics for heuristic evaluation in collaboration with Rolf Molich in 1990 [Molich and Nielsen 1990; Nielsen and Molich 1990]. Four years later, I refined the heuristics based on a factor analysis of 249 usability problems [Nielsen 1994a] to derive a set of heuristics with maximum explanatory power, resulting in this revised set of heuristics [Nielsen 1994b].
In 2020, we updated this article, adding more explanation, examples, and related links. While we slightly refined the language of the definitions, the 10 heuristics themselves have remained relevant and unchanged since 1994. When something has remained true for 26 years, it will likely apply to future generations of user interfaces as well.
Molich, R., and Nielsen, J. (1990). Improving a human-computer dialogue, Communications of the ACM 33, 3 (March), 338–348.
Nielsen, J., and Molich, R. (1990). Heuristic evaluation of user interfaces, Proc. ACM CHI’90 Conf. (Seattle, WA, 1–5 April), 249–256.
Nielsen, J. (1994a). Enhancing the explanatory power of usability heuristics. Proc. ACM CHI’94 Conf. (Boston, MA, April 24–28), 152–158.
Nielsen, J. (1994b). Heuristic evaluation. In Nielsen, J., and Mack, R.L. (Eds.), Usability Inspection Methods, John Wiley & Sons, New York, NY.
An interactive walkthrough helps users adjust to a new program or process with on-screen guidance. Pop-up balloons show users where to click and give instructions on what to do next. Interactive walkthroughs are frequently used as part of user training and onboarding.
Common types include:
· Product tours to show new users how to navigate the application,
· Process flows to help users finish tasks correctly, or
· Feature introduction to show existing users how a new feature works
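Under the hood, a walkthrough of any of these types is usually just an ordered list of steps, each pairing a target UI element with an instruction. A minimal sketch, where the element selectors and wording are invented for illustration:

```python
# Minimal sketch of an interactive walkthrough as an ordered list of steps.
# Selectors and instruction text are invented for illustration; real
# product-tour tools use a similar step structure.
walkthrough = [
    {"target": "#new-report-btn", "text": "Click here to create your first report."},
    {"target": "#date-range",     "text": "Pick the date range you want to analyse."},
    {"target": "#export-menu",    "text": "Export the finished report as PDF or CSV."},
]

def next_step(current_index):
    """Return the next step to highlight, or None when the tour is done."""
    if current_index + 1 < len(walkthrough):
        return walkthrough[current_index + 1]
    return None

print(next_step(0)["text"])  # instruction shown after the first step
```

The guidance engine simply highlights each step's target in order, which is why walkthroughs adapt well to product tours, process flows, and feature introductions alike.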
How do software walkthroughs help users?
Interactive walkthroughs can significantly improve user onboarding and adoption by making your application easier to use. The software tour is like having an experienced guide sit next to the new user and show them how to use the application.
Even the best-designed software can be difficult to master at first. A good product tour can help novice users feel like experts. Think about tax preparation software. Most people who use TurboTax or other do-it-yourself tax preparation services are not accountants. They’re not tax experts.
So how do tax novices end up completing their taxes without the help of a professional? The software makes it easy by asking a series of questions and guiding users through the process. This approach makes tax preparation simpler and faster.
Essentially, the entire program is an interactive walkthrough for your taxes. While the tasks your users are completing may not be as complicated as taxes, they still reap the benefits of making their jobs easier with on-screen guidance.
Walkthrough vs. Product Tours
Walkthroughs and product tours are similar, but they aren’t quite the same. They both guide end users through processes to increase efficiency, but they have different functions and end goals.
Walkthroughs are used to teach customers and employees alike. For example, when an enterprise company decides to start using a new human resources information system (HRIS), they will create multiple walkthroughs to teach their employees how to use the new tool efficiently. The end goal of using these interactive guides is faster user adoption of new tools or newly released features.
A product tour, on the other hand, is used — usually by SaaS companies and web applications — to increase customer engagement and retention and highlight the most useful features of an application or digital tool. For example, if Salesforce wanted to show their customers how valuable their tool is upfront to increase retention, they would invest in a product tour.
Interactive walkthroughs are the new-age help manuals for proactive, action-oriented users who want to learn an app on their own as quickly as possible. They have deservedly become the de facto standard for faster user onboarding, training, product tours, and support-related activities. A well-crafted interactive walkthrough delights users and keeps an app's learning curve from holding your organization back.
Web analytics, or metrics, are measures of behavior of website users that are automatically collected across entire visitor populations or large samples. Data such as the number of visitors to a website, where they are from, which pages they view, and which links they click, can be measured, collected, analyzed and reported. Web analytics are relevant to usability practitioners in that they can provide insight into the large-scale behavior of website users to understand and improve (optimize) the website.
Web analytics cannot provide answers to questions about user motivations or underlying needs and goals. Web analytics may indicate that users are abandoning a checkout process at a particular point, but they cannot be used to explain why this is happening. Usability testing of the issues that are found through web analytics brings the deeper understanding needed to fix these usability problems. Additionally, in-person observations of users can lead to insight that informs what metrics are worthwhile to collect, and how to interpret them.
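Checkout abandonment of the kind mentioned above shows up in analytics as a drop in user counts between funnel steps. A minimal sketch of computing per-step drop-off from page-view counts (the step names and numbers are invented):

```python
# Minimal sketch: per-step drop-off rates in a checkout funnel.
# Step names and visitor counts are invented for illustration.
funnel = [
    ("cart",      1000),
    ("shipping",   620),
    ("payment",    430),
    ("confirmed",  310),
]

# Compare each step with the next one to find where users abandon.
for (step, n), (_, n_next) in zip(funnel, funnel[1:]):
    drop = 1 - n_next / n
    print(f"{step} -> next: {drop:.0%} drop-off")  # e.g. cart -> next: 38% drop-off
```

A large drop at one step (here, between cart and shipping) tells you *where* to run usability testing, even though the metrics alone cannot tell you *why* users leave.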
There are two categories of web analytics:
· Off-site web analytics
· On-site web analytics
Off-site web analytics refers to web measurement and analysis regardless of whether you own or maintain the website. It includes the measurement of a website’s potential audience (opportunity), share of voice (visibility), and buzz (comments) happening on the Internet as a whole.
On-site web analytics measure a visitor’s journey once they are on your website, including its drivers and conversions (that is, when users take a desired action on the website, such as requesting a white paper). For example, which landing pages encourage people to make a purchase? On-site web analytics typically compare data about user behavior against key performance indicators such as purchases, downloads, enrollments, or any activity that ties into the goals of the organization. The subsequent analysis of this data guides improvements to a website or a marketing campaign’s audience response.
Historically, web analytics has referred to on-site visitor measurement. However, this has blurred as vendors are producing tools that span both categories.
A/B testing (experimentation) can be used in combination with web metric analysis to measure the impact of design, wording, or algorithm factors. In a typical simple A/B test, two variations of an interface are presented to two randomly selected samples of users, and the impact of the two variations on key success measures, such as conversion, is statistically analyzed.
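As a sketch of the statistical analysis step, the difference in conversion rates between the two variants can be checked with a two-proportion z-test. This example uses only the Python standard library, and the conversion counts are invented for illustration:

```python
# Minimal sketch: two-proportion z-test for an A/B conversion experiment.
# Counts are invented; a real analysis should also plan sample size and
# account for test duration, learning curves, and novelty effects.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic and two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: 120/2400 conversions; variant B: 156/2400 conversions.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value below the chosen significance level (commonly 0.05) suggests the difference between the variants is unlikely to be random noise.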
A/B tests that pit two drastically different designs against each other often raise more questions than they answer, because decision makers may not understand which aspects of the “winning” variant contributed to its success, given the many interaction effects. A/B testing is most powerful when the variants are informed by formative user research and in-person usability testing, and when it is conducted for a period long enough to account for learning curves and novelty effects.
Why you should A/B test
A/B testing allows individuals, teams, and companies to make careful changes to their user experiences while collecting data on the results. This lets them construct hypotheses and learn why certain elements of an experience affect user behavior. It can also prove them wrong: their opinion about the best experience for a given goal can be tested directly through an A/B test.
More than just answering a one-off question or settling a disagreement, A/B testing can be used to continually improve a given experience or improve a single goal like conversion rate over time.
A B2B technology company may want to improve their sales lead quality and volume from campaign landing pages. In order to achieve that goal, the team would try A/B testing changes to the headline, visual imagery, form fields, call to action and overall layout of the page.
Testing one change at a time helps them pinpoint which changes had an effect on visitor behavior, and which ones did not. Over time, they can combine the effect of multiple winning changes from experiments to demonstrate the measurable improvement of a new experience over the old one.
Predictive modeling has been around for decades, but only recently was it considered a subset of AI, often linked to machine learning. It’s used to predict the likelihood of specific outcomes based on data collected from similar past and present events.
For example, with predictive modeling, you can calculate the probability that a customer will churn (unsubscribe or stop buying products in favor of a competitor’s). To achieve it, the model uses available data from customers who have churned before and from those who haven’t. This is done through patterns identified by machine learning algorithms to predict future trends.
While these predictions are commonly used for future events, they also apply to other conditions. Imagine that you want to classify the priority of a support ticket, based on its description text. After collecting data from similar tickets, you’ll be able to predict the priority of others with an accuracy rate that’ll increase with each prediction made.
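A toy sketch of the ticket-priority idea: classify a new ticket by which priority's historical vocabulary its description best matches. The tickets, labels, and scoring rule below are invented, and a real system would use a proper machine-learning model rather than raw word counts:

```python
# Toy sketch: predict a support ticket's priority from its description
# by matching it against word frequencies of previously labelled tickets.
# Training data, labels, and the scoring rule are invented for illustration.
from collections import Counter

history = [
    ("Site is down, customers cannot pay", "high"),
    ("Outage on checkout, payment fails", "high"),
    ("Typo on the pricing page", "low"),
    ("Please update my billing address", "low"),
]

def tokenize(text):
    return text.lower().replace(",", "").split()

# Build a word-frequency profile per priority label.
profiles = {}
for text, label in history:
    profiles.setdefault(label, Counter()).update(tokenize(text))

def predict(description):
    # Score each label by how often it has seen the ticket's words before.
    scores = {label: sum(counts[w] for w in tokenize(description))
              for label, counts in profiles.items()}
    return max(scores, key=scores.get)

print(predict("payment outage, site down"))  # matches the 'high' vocabulary
```

As the text notes, accuracy improves as more labelled tickets accumulate, because the per-label profiles cover more of the vocabulary seen in new tickets.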
The multitude of scenarios in which you can apply predictive modeling is one of the reasons its potential is so clear. But what type of benefits can you get from it?
“The time required to reach a target is based on the distance from the starting point and the size of the target.” Coined by Paul Fitts in the 1950s, the law is applied to the location and size of menus and buttons in software. For example, a large button is faster to reach than a small one, and the edges of the screen provide natural stops. Many users prefer the Mac’s user interface because all menus display at the top of the screen. Others prefer Windows because many commonly used buttons can be made much larger.
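Numerically, Fitts’s law is commonly written in the Shannon formulation as MT = a + b · log2(D/W + 1), where D is the distance to the target and W its width. The constants a and b depend on the user and the pointing device; the values in this sketch are invented for illustration:

```python
# Sketch of Fitts's law (Shannon formulation): MT = a + b * log2(D/W + 1).
# The constants a and b are device- and user-dependent; the values below
# are invented for illustration only.
from math import log2

def movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (seconds) to acquire a target of size `width`
    at `distance`, for hypothetical constants a and b."""
    return a + b * log2(distance / width + 1)

# A large button is faster to reach than a small one at the same distance:
print(movement_time(distance=400, width=100))  # lower index of difficulty
print(movement_time(distance=400, width=20))   # higher index of difficulty
```

This is why enlarging a button, or placing a target at a screen edge (which acts as an effectively infinite width along the approach axis), reduces acquisition time.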
Further reading:
· Course: UI Design Patterns for Successful Software
· When You Shouldn’t Use Fitts’s Law To Measure User Experience, Anastasios Karafillis (Smashing Magazine)
· Human Factors and Fitts’ Law, Ken Goldberg, IEOR and EECS (slides)