Articles on Smashing Magazine — For Web Designers And Developers
https://www.smashingmagazine.com/
Sun, 14 Jul 2024 09:05:33 GMT
All rights reserved 2024, Smashing Media AG

<![CDATA[When Friction Is A Good Thing: Designing Sustainable E-Commerce Experiences]]>
https://smashingmagazine.com/2024/07/designing-sustainable-e-commerce-experiences/
Wed, 10 Jul 2024 13:00:00 GMT

As lavish influencer lifestyles, wealth flaunting, and hauls dominate social media feeds, we shouldn’t be surprised that excessive consumption has become the default way of living. We see closets filled to the brim with cheap, throw-away items and having the latest gadget arsenal as signifiers of an aspirational life.

Consumerism, however, is more than a cultural trend; it’s the backbone of our economic system. Companies eagerly drive excessive consumption as an increase in sales is directly connected to an increase in profit.

While we learned to accept this level of material consumption as normal, we need to be reminded of the massive environmental impact that comes along with it. As Yvon Chouinard, founder of Patagonia, writes in a New York Times article:

“Obsession with the latest tech gadgets drives open pit mining for precious minerals. Demand for rubber continues to decimate rainforests. Turning these and other raw materials into final products releases one-fifth of all carbon emissions.”

— Yvon Chouinard

In the paper Scientists’ Warning on Affluence, a group of researchers concluded that reducing material consumption today is essential to avoid the worst of looming climate change in the coming years. This need for lowering consumption is also reflected in the UN’s Sustainable Development Goals, specifically Goal 12, “Ensure sustainable consumption and production patterns.”

For a long time, design has been a tool for consumer engineering, for example, by designing products with an artificially limited useful life (planned obsolescence) to ensure continuous consumption. And if we want to understand UX design’s specific role in influencing how much and what people buy, we have to take a deeper look at pushy online shopping experiences.

Design Shaping Shopping Habits: The Problem With Current E-commerce Design

Today, most online shopping experiences are designed with persuasion, gamification, nudging, and even deception to get unsuspecting users to add more things to their baskets.

There are “Hurry, only one item left in stock” type messages and countdown clocks that exploit well-known cognitive biases to nudge users to make impulse purchase decisions. As Michael Keenan explains,

“The scarcity bias says that humans place a higher value on items they believe to be rare and a lower value on things that seem abundant. Scarcity marketing harnesses this bias to make brands more desirable and increase product sales. Online stores use limited releases, flash sales, and countdown timers to induce FOMO — the fear of missing out — among shoppers.”

— Michael Keenan

To make buying things quick and effortless, we remove friction from the checkout process, for example, with the one-click-buy button. As practitioners of user-centered design, we might implement the button and say: thanks to this frictionless and easy checkout process, we improved the customer experience. Or did we just do a huge disservice to our users?

Gliding through the checkout process in seconds leaves no time for the user to ask, “Do I actually want this?” or “Do I have the money for this?”. Indeed, putting users on autopilot to make thoughtless decisions is the goal.

As a business.com article says: “Click to buy helps customers complete shopping within seconds and reduces the amount of time they have to reconsider their purchase.”

Amanda Mull writes from a user perspective about how it has become “too easy to buy stuff you don’t want”:

“The order took maybe 15 seconds. I selected my size and put the shoes in my cart, and my phone automatically filled in my login credentials and added my new credit card number. You can always return them, I thought to myself as I tapped the “Buy” button. [...] I had completed some version of the online checkout process a million times before, but I never could remember it being quite so spontaneous and thoughtless. If it’s going to be that easy all the time, I thought to myself, I’m cooked.”

— Amanda Mull

This quote also highlights that this thoughtless consumption is harmful not only to the environment but also to the very users we say we center our design process around. The rising popularity of buy-now-pay-later services, growing credit card debt, and personal finance gurus promising to help with “Overcoming Overspending” are all indicators that people are spending more than they can afford, a huge source of stress for many.

The one-click-buy button is not about improving user experience but about building an environment where users are “more likely to buy more and buy often.” To put it bluntly, frictionless and persuasive e-commerce design is not user-centered but business-centered design.

While it is not unusual for design to be a tool for achieving business goals, we designers should be clear about whom we are serving, and at what cost. To reckon with our impact, we first have to understand the source of the power we wield: the power asymmetry between the designer and the user.

Power Asymmetry Between User And Designer

Imagine a scale: the designer sits on one end, the user on the other. Now, let’s take an inventory of the sources of power each party holds in an online shopping situation and see how the scale balances.

Designers

Designers are equipped with knowledge about psychology, biases, nudging, and persuasion techniques. If we don’t have the time to learn all that, we can reach for an out-of-the-box solution that uses those exact psychological and behavioral insights. For example, Nudgify, a WooCommerce integration, promises to help “you get more sales and reduce shopping cart abandonment by creating Urgency and removing Friction.”

Erika Hall puts it this way: “When you are designing, you are making choices on behalf of other people.” We even have a word for this: choice architecture. Choice architecture refers to the deliberate crafting of decision-making environments. By subtly shaping how options are presented, choice architecture influences individuals’ decision-making, often without their explicit awareness.

On top of this, we also collect funnel metrics, behavioral data, and A/B test things to make sure our designs work as intended. In other words, we control the environment where the user is going to make decisions, and we are knowledgeable about how to tweak it in a way to encourage the decisions we want the user to make. Or, as Vitaly Friedman says in one of his articles:

“We’ve learned how to craft truly beautiful interfaces and well-orchestrated interactions. And we’ve also learned how to encourage action to meet the project’s requirements and drive business metrics. In fact, we can make pretty much anything work, really.”

— Vitaly Friedman

User

On the other end of the scale, we have the user, who is usually unaware of our persuasion efforts and oblivious to their own biases, let alone aware of when and how those biases are triggered.

Luckily, regulation around deceptive design in e-commerce is increasing. For example, companies are not allowed to use fake countdown timers. However, these regulations are not universal, and enforcement is lax, so users are often still not protected by law against pushy shopping experiences.

After this overview, let’s see how the scale balances:

When we understand this power asymmetry between designer and user, we need to ask ourselves:

  • What do I use my power for?
  • What kind of “real life” user behavior am I designing for?
  • What is the impact of the users’ behavior resulting from my design?

If we look at e-commerce design today, more often than not, the unfortunate answer is mindless and excessive consumption.

This needs to change. We need to use the power of design to encourage sustainable user behavior and thus move us toward a sustainable future.

What Is Sustainable E-commerce?

The discussion about sustainable e-commerce usually revolves around recyclable packaging, green delivery, and making the site energy-efficient with sustainable UX. All these actions and angles are important and should be part of our design process, but can we build a truly sustainable e-commerce if we are still encouraging unsustainable user behavior by design?

To achieve truly sustainable e-commerce, designers must shift from encouraging impulse purchases to supporting thoughtful decisions. Instead of using persuasion, gamification, and deception to boost sales, we should use our design skills to provide users with the time, space, and information they need to make mindful purchase decisions. I call this approach Kind Commerce.

But The Business?!

While the intent of designing Kind Commerce is noble, we have a bitter reality to deal with: we live and work in an economic system based on perpetual growth. We are often measured on achieving KPIs like “increased conversion” or “reduced cart abandonment rate”. We are expected to use UX to achieve aggressive sales goals, and often, we are not in a position to change that.

It is a frustrating situation to be in because we can argue that the system needs to change, so it is possible for UXers to move away from persuasive e-commerce design. However, system change won’t happen unless we push for it. A catch-22 situation. So, what are the things we could do today?

  • Pitch Kind Commerce as a way to build strong customer relationships that will have higher lifetime value than the quick buck we would make with persuasive tricks.
  • Highlight reduced costs. As Vitaly writes, using deceptive design can be costly for the company:
“‘Add to basket’ is beautifully highlighted in green, indicating a way forward, with insurance added in automatically. That’s a clear dark pattern, of course. The design, however, is likely to drive business KPIs, i.e., increase a spend per customer. But it will also generate a wrong purchase. The implications of it for businesses might be severe and irreversible — with plenty of complaints, customer support inquiries, and high costs of processing returns.”

— Vitaly Friedman

Helping users find the right products and make decisions they won’t regret can help the company save all the resources they would need to spend on dealing with complaints and returns. On top of this, the company can save millions of dollars by avoiding lawsuits for unfair commercial practices.

  • Highlight the increasing customer demand for sustainable companies.
  • If you feel that your company is not open to change practices and you are frustrated about the dissonance between your day job and values, consider looking for a position where you can support a company or a cause that aligns with your values.
A Few Principles To Design Mindful E-commerce

Add Friction

I know, I know, it sounds like an insane proposition in a profession obsessed with eliminating friction, but hear me out. Instead of “helping” users glide through the checkout process with one-click buy buttons, adding a step where they review their order and pause for a moment could help reduce unnecessary purchases. A positive reframing of this technique can help express our true intentions.

Instead of saying “adding friction,” we could say “adding a protective step”. Another example of “adding a protective step” could be getting rid of the “Quick Add” buttons and making users go to the product page to take a look at what they are going to buy. For example, Organic Basics doesn’t have a “Quick Add” button; users can only add things to their cart from the product page.
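As a rough illustration, here is how such a protective step might look in plain JavaScript (the function and state names are hypothetical, not from any particular shop): the first submission always resolves to a review state, and the purchase only completes after an explicit confirmation.

```javascript
// A hypothetical two-step checkout: the first call never charges the user.
// Instead, it returns a "review" state so they can pause and reconsider.
function submitOrder(cart, { confirmed = false } = {}) {
  const total = cart.items.reduce(
    (sum, item) => sum + item.price * item.qty,
    0
  );
  if (!confirmed) {
    // The protective step: show an order summary instead of completing the sale.
    return { state: "review", itemCount: cart.items.length, total };
  }
  return { state: "purchased", total };
}
```

The UI would render the “review” state as a summary screen with an explicit “Confirm order” button, so the user gets at least one deliberate pause before money changes hands.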

Inform

Once we make sure users will visit product pages, we can help them make more informed decisions. We can be transparent about the social and environmental impact of an item or provide guidelines on how to care for the product to last a long time.

For example, Asket has a section called “Lifecycle” where they highlight how to care for, repair and recycle their products. There is also a “Full Transparency” section to inform about the cost and impact of the garment.

Design Calm Pages

Aggressive landing pages, where everything is moving and blinking, modals keep popping up, and ten different discounts compete for attention, are overwhelming, confusing, and distracting: a fertile environment for impulse decisions.

Respect your user’s attention by designing pages that don’t raise their blood pressure to 180 the second they open them. No modals automatically popping up, no flashing carousels, and no discount dumping. Aim for static banners and display offers in a clear and transparent way. For example, H&M shows only one banner highlighting a discount on their landing page, and that’s it. If a fast fashion brand like H&M can design calm pages, there is no excuse why others couldn’t.

Be Honest In Your Messaging

Fake urgency and social proof can not only get you fined for millions of dollars but can also turn users away. So simply do not add urgency messages and countdown clocks where there is no real deadline behind an offer. Don’t use fake social proof messages. Don’t say something has a limited supply when it doesn’t.

I would even take this a step further and recommend using persuasion techniques sparingly, even when they are honest. Instead of overloading the product page with every possible persuasion method (urgency, social proof, incentives, assuming they are all honest), choose one single, impactful persuasion point.
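To make the “no fake deadlines” rule concrete, here is a small sketch in plain JavaScript (names are illustrative): the urgency label is derived only from a real deadline, and the function returns nothing at all when no deadline exists or it has already passed.

```javascript
// Derive an urgency message from a real offer deadline, or render nothing.
// There is no code path that fabricates urgency without a deadline.
function urgencyLabel(offerEndsAt, now = Date.now()) {
  if (!offerEndsAt) return null; // no real deadline, so no timer at all
  const msLeft = offerEndsAt - now;
  if (msLeft <= 0) return null; // expired offers stay silent, never reset
  const hours = Math.ceil(msLeft / 3_600_000);
  return `Offer ends in ${hours} hour${hours === 1 ? "" : "s"}`;
}
```

The key design choice is that the copy is computed from the deadline rather than the other way around, which makes a fake countdown impossible by construction.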

Disclaimer

To make it clear, I’m not advocating for designing bad or cumbersome user experiences to obstruct customers from buying things. Of course, I want a delightful and easy way to buy things we need.

I’m also well aware that design is never neutral. We need to present options and arrange user flows, and whichever way we choose to do that will influence user decisions and actions.

What I’m advocating for is at least putting the user back in the center of our design process. We read earlier that users feel it is “too easy to buy stuff you don’t want” and that the current state of e-commerce design contributes to their excessive spending. If we understand this and call ourselves user-centered, we ought to change our approach significantly.

On top of this, I’m advocating for expanding our perspective to consider the wider environmental and social impact of our designs and align our work with the move toward a sustainable future.

Mindful Consumption Beyond E-commerce Design

E-commerce design is a practical example of how design is a part of encouraging excessive, unnecessary consumption today. In this article, we looked at what we can do on this practical level to help our users shop more mindfully. However, transforming online shopping experiences is only a part of a bigger mission: moving away from a culture where excessive consumption is the aspiration for customers and the ultimate goal of companies.

As Cliff Kuang says in his article,

“The designers of the coming era need to think of themselves as inventing a new way of living that doesn’t privilege consumption as the only expression of cultural value. At the very least, we need to start framing consumption differently.”

— Cliff Kuang

Or, as Manuel Lima puts it in his book, The New Designer,

“We need the design to refocus its attention where it is needed — not in creating things that harm the environment for hundreds of years or in selling things we don’t need in a continuous push down the sales funnel but, instead, in helping people and the planet solve real problems. [...] Design’s ultimate project is to reimagine how we produce, deliver, consume products, physical or digital, to rethink the existing business models.”

— Manuel Lima

So buckle up, designers, we have work to do!

To Sum It Up

Today, design is part of the problem of encouraging and facilitating excessive consumption through persuasive e-commerce design and through designing for companies with linear and exploitative business models. For a liveable future, we need to change this. On a tactical level, we need to start advocating and designing mindful shopping experiences, and on a strategic level, we need to use our knowledge and skills to elevate sustainable businesses.

I’m not saying that it is going to be an easy or quick transition, but the best time to start is now. In a state of dire need for sustainable transformation, designers with power and agency can’t stay silent or continue perpetuating the problem.

“As designers, we need to see ourselves as gatekeepers of what we are bringing into the world and what we choose not to bring into the world. Design is a craft with responsibility. The responsibility to help create a better world for all.”

— Mike Monteiro
]]>
hello@smashingmagazine.com (Anna Rátkai)
<![CDATA[Useful Customer Journey Maps (+ Figma & Miro Templates)]]>
https://smashingmagazine.com/2024/07/customer-journey-maps-figma-miro-templates/
Mon, 08 Jul 2024 10:00:00 GMT

User journey maps are a remarkably effective way to visualize the user’s experience for the entire team. Instead of pointing to documents scattered across remote fringes of SharePoint, we bring key insights together — in one single place.

Let’s explore a couple of helpful customer journey templates to get started and how companies use them in practice.

This article is part of our ongoing series on UX. You might want to take a look at Smart Interface Design Patterns 🍣 and the upcoming live UX training as well. Use code BIRDIE to save 15% off.

AirBnB Customer Journey Blueprint

AirBnB Customer Journey Blueprint (also check the Google Drive example) is a wonderful practical example of how to visualize the entire customer experience for two personas, across eight touchpoints, with user policies, UI screens, and all interactions with customer service — all on one single page.

Now, unlike AirBnB, your product might not need a mapping against user policies. However, it might need other lanes that would be more relevant for your team. For example, include relevant findings and recommendations from UX research. List key actions needed for the next stage. Include relevant UX metrics and unsuccessful touchpoints.

Whatever works for you, works for you — just make sure to avoid assumptions and refer to facts and insights from research.

Spotify Customer Journey Map

Spotify Customer Journey Blueprint (high resolution) breaks down customer experiences by distinct user profiles, and for each includes mobile and desktop views, pain points, thoughts, and actions. Also, notice branches for customers who skip authentication or admin tasks.

Getting Started With Journey Maps

To get started with user journey maps, we first choose a lens: Are we reflecting the current state or projecting a future state? Then, we choose a customer who experiences the journey — and we capture the situation/goals that they are focusing on.

Next, we list high-level actions users are going through. We start by defining the first and last stages and fill in between. Don’t get too granular: list key actions needed for the next stage. Add the user’s thoughts, feelings, sentiments, and emotional curves.

Eventually, add the user’s key touchpoints with people, services, and tools. Map the user journey across mobile and desktop screens. Transfer insights from other research (e.g., customer support). Fill in stage after stage until the entire map is complete.

Then, identify pain points and highlight them with red dots. Add relevant jobs-to-be-done, metrics, channels if needed. Attach links to quotes, photos, videos, prototypes, Figma files. Finally, explore ideas and opportunities to address pain points.
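If your team keeps journey maps alongside other project artifacts and wants to query them programmatically, the steps above can be captured in a small data model. This is a hypothetical sketch in plain JavaScript (the field names are my own, not from any of the templates mentioned here):

```javascript
// A minimal journey-map data model: each stage collects actions, thoughts,
// pain points, and links to supporting evidence in one structured place.
function createStage(name) {
  return { name, actions: [], thoughts: [], painPoints: [], links: [] };
}

function addPainPoint(stage, description, evidenceUrl) {
  stage.painPoints.push({ description, evidenceUrl });
  return stage;
}

// Flatten pain points across all stages, keeping stage names for context,
// so they can be reviewed and prioritized in one list.
function listPainPoints(journey) {
  return journey.flatMap((stage) =>
    stage.painPoints.map((p) => ({ stage: stage.name, ...p }))
  );
}
```

Storing the map as data rather than only as a canvas makes it easier to link each pain point to its research evidence and to spot stages that have none.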

Free Customer Journey Maps Templates (Miro, Figma)

You don’t have to reinvent the wheel from scratch. Below, you will find a few useful starter kits to get up and running fast. However, please make sure to customize these templates for your needs, as every product will require its own specific details, dependencies, and decisions.

Wrapping Up

Keep in mind that customer journeys are often non-linear, with unpredictable entry points and integrations way beyond the final stage of a customer journey map. It’s in those moments when things leave a perfect path that a product’s UX is actually stress-tested.

So consider mapping unsuccessful touchpoints as well — failures, error messages, conflicts, incompatibilities, warnings, connectivity issues, eventual lock-outs and frequent log-outs, authentication issues, outages, and urgent support inquiries.

Also, make sure to question assumptions and biases early. Once they live in your UX map, they grow roots — and it might not take long until they are seen as the foundation of everything, which can be remarkably difficult to challenge or question later. Good luck, everyone!

Meet Smart Interface Design Patterns

If you are interested in UX and design patterns, take a look at Smart Interface Design Patterns, our 10h-video course with 100s of practical examples from real-life projects — with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview. Use code BIRDIE to save 15% off.


]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Tales Of An Eternal Summer (July 2024 Wallpapers Edition)]]>
https://smashingmagazine.com/2024/06/desktop-wallpaper-calendars-july-2024/
Sun, 30 Jun 2024 08:00:00 GMT

For many of us, July is the epitome of summer. The time for spending every free minute outside to enjoy the sun and those seemingly endless summer days, be it in a nearby park, by a lake, or on a trip exploring new places. So why not bring a bit of that summer joy to your desktop, too?

For this month’s wallpapers post, artists and designers from across the globe once again tickled their creativity to capture the July feeling in a collection of desktop wallpapers. They all come in versions with and without a calendar for July 2024 and can be downloaded for free — as it has been a Smashing tradition for more than 13 years already. A huge thank-you to everyone who submitted their artworks this month — this post wouldn’t exist without you!

As a little bonus goodie, we also compiled a selection of July favorites from our wallpapers archives at the end of this post. So maybe you’ll discover one of your almost-forgotten favorites in here, too? Have a fantastic July, no matter what your plans are!

  • You can click on every image to see a larger preview,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists full freedom to explore their creativity and express their emotions and experiences through their works. It is also why the themes of the wallpapers weren’t in any way influenced by us but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.

Diving Among Corals

“The long-awaited vacation is coming closer. After working all year, we find ourselves with months that, although we don’t stop completely, are lived differently. We enjoy the days and nights more, and if we can, the beach will keep us company. Therefore, we’ll spend this month in Australia, enjoying the coral reefs and diving without limits.” — Designed by Veronica Valenzuela from Spain.

Level Up

“Join gamers worldwide on National Video Game Day to honor the rich history and vibrant culture of gaming. Enjoy exclusive discounts on top titles, participate in exciting online tournaments, and dive into special events featuring your favorite games. Whether you're a casual player or a dedicated enthusiast, there’s something for everyone to celebrate on this epic day!” — Designed by PopArt Studio from Serbia.

Bigfoot And The Little Girl

“This heartwarming moment captures an unlikely friendship of a gentle Bigfoot and an adorable little girl set against the backdrop of a magical and serene evening in nature.” — Designed by Reethu M from London.

Good Night

Designed by Ricardo Gimenes from Sweden.

Full Buck Moon

“July is the month of the full buck moon, named after the fact that many deer regrow their antlers around this time. It is also when the United States celebrate their Independence Day with fireworks and fun. I decided to combine these aspects into a magical encounter during the fourth of July. It takes place in a field of larkspur which is a flower associated with July.” — Designed by Quincy van Geffen from the Netherlands.

No More Hugs

Designed by Ricardo Gimenes from Sweden.

Celebrating World Chocolate Day

“World Chocolate Day, celebrated on July 7th, invites chocolate lovers worldwide to indulge in their favorite treat. Commemorating chocolate’s introduction to Europe, this day celebrates its global popularity. Enjoy dark, milk, or white chocolate, bake delicious desserts, and share the sweetness with loved ones.” — Designed by Reethu M from London.

Birdie July

Designed by Lívi Lénárt from Hungary.

Summer Cannonball

“Summer is coming in the northern hemisphere and what better way to enjoy it than with watermelons and cannonballs.” — Designed by Maria Keller from Mexico.

In Space

Designed by Lieke Dol from the Netherlands.

A Flamboyance Of Flamingos

“July in South Africa is dreary and wintery so we give all the southern hemisphere dwellers a bit of color for those gray days. And for the northern hemisphere dwellers a bit of pop for their summer!” — Designed by Wonderland Collective from South Africa.

Eternal Summer

“And once you let your imagination go, you find yourself surrounded by eternal summer, unexplored worlds, and all-pervading warmth, where there are no rules of physics and colors tint the sky under your feet.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Day Turns To Night

Designed by Xenia Latii from Germany.

Tropical Lilies

“I enjoy creating tropical designs. They fuel my wanderlust and passion for the exotic, instantaneously transporting me to a tropical destination.” — Designed by Tamsin Raslan from the United States.

Road Trip In July

“July is the middle of summer, when most of us go on road trips, so I designed a calendar inspired by my love of traveling and summer holidays.” — Designed by Patricia Coroi from Romania.

The Ancient Device

Designed by Ricardo Gimenes from Sweden.

Taste Like Summer

“In times of clean eating and the world of superfoods there is one vegetable missing. An old, forgotten one. A flower actually. Rare and special. Once it had a royal reputation (I cheated a bit with the blue). The artichoke — this is my superhero in the garden! I am a food lover — you too? Enjoy it — dip it!” — Designed by Alexandra Tamgnoué from Germany.

Island River

“Make sure you have a refreshing source of ideas, plans and hopes this July. Especially if you are to escape from urban life for a while.” — Designed by Igor Izhik from Canada.

Cactus Hug

Designed by Ilaria Bagnasco from Italy.

Under The Enchanting Moonlight

“Two friends sat under the enchanting moonlight, enjoying the serene ambiance as they savoured their cups of tea. It was a rare and precious connection that transcended the ordinary, kindled by the magic of the moonlight. Eventually, as the night began to wane, they reluctantly stood, their empty cups in hand. They carried with them the memories and the tranquility of the moonlit tea session, knowing that they would return to this special place to create new memories in the future.” — Designed by Bhabna Basak from India.

DJ Little Bird

Designed by Ricardo Gimenes from Sweden.

Heated Mountains

“Warm summer weather inspired the color palette.” — Designed by Marijana Pivac from Croatia.

July Flavor

Designed by Natalia Szendzielorz from Poland.

Summer Heat

Designed by Xenia Latii from Berlin, Germany.

Mason Jar

“Make the days count this summer!” — Designed by Meghan Pascarella from the United States.

Summer Essentials

“A few essential items for the summertime weather at the beach, park, and everywhere in-between.” — Designed by Zach Vandehey from the United States.

Captain Amphicar

“My son and I are obsessed with the Amphicar right now, so why not have a little fun with it?” — Designed by 3 Bicycles Creative from the United States.

Hotdog

Designed by Ricardo Gimenes from Sweden.

Less Busy Work, More Fun!

Designed by ActiveCollab from the United States.

Sweet Summer

“In summer everything inspires me.” — Designed by Maria Karapaunova from Bulgaria.

Fire Camp

“What’s better than a starry summer night with an (unexpected) friend around a fire camp with some marshmallows? Happy July!” — Designed by Etienne Mansard from the UK.

Riding In The Drizzle

“Rain has come, showering the existence with new seeds of life. Everywhere life is blooming, as if they were asleep and the falling music of raindrops have awakened them. Feel the drops of rain. Feel this beautiful mystery of life. Listen to its music, melt into it.” — Designed by DMS Software from India.

An Intrusion Of Cockroaches

“Ever watched Joe’s Apartment when you were a kid? Well, that movie left a soft spot in my heart for the little critters. Don’t get me wrong: I won’t invite them over for dinner, but I won’t grab my flip flop and bring the wrath upon them when I see one running in the house. So there you have it… three roaches… bringing the smack down on that pesky human… ZZZZZZZAP!!” — Designed by Wonderland Collective from South Africa.

July Rocks!

Designed by Joana Moreira from Portugal.

Frogs In The Night

“July is coming and the nights are warmer. Frogs look at the moon while they talk about their day.” — Designed by Veronica Valenzuela from Spain.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[How To Improve Your Microcopy: UX Writing Tips For Non-UX Writers]]>
https://smashingmagazine.com/2024/06/how-improve-microcopy-ux-writing-tips-non-ux-writers/
Fri, 28 Jun 2024 10:00:00 GMT

Throughout my UX writing career, I’ve held many different roles: a UX writer in a team of UX writers, a solo UX writer replacing someone who left, the first and only UX writer at a company, and even a teacher at a UX writing course, where I reviewed more than 100 home assignments. And oh gosh, what I’ve seen.

Crafting microcopy is not everyone’s strong suit, and it doesn’t have to be. Still, if you’re a UX designer, product manager, analyst, or marketing content writer working in a small company, on an MVP, or on a new product, you might have to get by without a UX writer. So you have the extra workload of creating microcopy. Here are some basic rules that will help you create clear and concise copy and run a quick health check on your designs.

Ensure Your Interface Copy Is Role-playable
Why it’s important:
  • To create a friendly, conversational experience;
  • To work out a consistent interaction pattern.

When crafting microcopy, think of the interface as a dialog between your product and your user, where:

  • Titles, body text, tooltips, and so on are your “phrases.”
  • Button labels, input fields, toggles, menu items, and other elements that can be tapped or selected are the user’s “phrases.”

Ideally, you should be able to role-play your interface copy: a product asks the user to do something — the user does it; a product asks for information — the user types it in or selects an item from the menu; a product informs or warns the user about something — the user takes action.

For example, if your screen is devoted to an event and the CTA is for the user to register, you should opt for a button label like “Save my spot” rather than “Save your spot.” This way, when a user clicks the button, it’s as if they are pronouncing the phrase themselves, which resonates with their thoughts and intentions.

Be Especially Transparent And Clear When It Comes To Sensitive Topics
Why it’s important: To build trust and loyalty towards your product.

Some topics, such as personal data, health, or money, are extremely sensitive for people. If your product involves any limitations, peculiarities, or possible negative outcomes related to these sensitive topics, you should convey this information clearly and unequivocally. You will also need to collaborate closely with your UX/UI designer to ensure you deliver this information in a timely manner and always keep it visible without requiring the user to take additional actions (e.g., don’t hide it in tooltips that are only shown by tapping).

Here’s a case from my work experience. For quite some time, I’ve been checking homework assignments for a UX writing course. In this course, all the tasks have revolved around an imaginary app for dog owners. One of the tasks students worked on was creating a flow for booking a consultation with a dog trainer. The consultation had to be paid in advance. In fact, the money was blocked on the user’s bank card and charged three hours before the consultation. That way, a user could cancel the meeting for free no later than three hours before the start time. A majority of the students added this information as a tooltip on the checkout screen; if a user didn’t tap on it, they wouldn’t be warned about the possibility of losing money.

In a real-life situation, this would cause immense negativity from users: they might post about it on social media, showing the company in a bad light. Even if you occasionally resort to dark patterns, make sure you can afford the reputational risks.

So, when creating microcopy on sensitive topics:

  • Be transparent and honest about all the processes and conditions. For example, you’re a fintech service working with other service providers. Because of that, you have fees built into transactions but don’t know the exact amount. Explain to users how the fees are calculated, their approximate range (if possible), and where users can see more precise info.
  • Reassure users that you’ll be extremely careful with their data. Explain why you need their data, how you will use it, store and protect it from breaches, and so on.
  • If some restrictions or limitations are implied, provide options to remove them (if possible).

Ensure That The Button Label Accurately Reflects What Happens Next
Why it’s important:
  • To make your interface predictable, trustworthy, and reliable;
  • To prevent user frustration.

The button label should reflect the specific action that occurs when the user clicks or taps it.

It might seem valid to use a button label that reflects the user’s goal or target action, even if it actually happens a bit later. For example, if your product allows users to book accommodations for vacations or business trips, you might consider using a “Book now” button in the booking flow. However, if tapping it leads the user to an order screen where they need to select a room, fill out personal details, and so on, the accommodation is not booked immediately. So you might want to opt for “Show rooms,” “Select a rate,” or another button label that better reflects what happens next.

Moreover, labels like “Buy now” or “Book now” might seem too pushy and even off-putting (especially when it comes to pricey products involving a long decision-making process), causing users to abandon your website or app in favor of ones with buttons that create the impression they can browse peacefully for as long as they need. You might want to let your users “Explore,” “Learn more,” “Book a call,” or “Start a free trial” first.

As a product manager or someone with a marketing background, you might want to create catchy and fancy button labels to boost conversion rates. For instance, when working on an investment app, you might label a button for opening a brokerage account as “Become an investor.” While this might appeal to users’ egos, it can also come across as pretentious and cheap. Additionally, after opening an account, users may still need to do many things to actually become investors, which can be frustrating. Opt for a straightforward “Open an account” button instead.

In this regard, it’s better not to promise users things that we can’t guarantee or that aren’t entirely up to us. For example, in a flow that includes a one-time password (OTP), it’s better to opt for the “Send a code” button rather than “Get a code” since we can’t guarantee there won’t be any network outages or other issues preventing the user from receiving an SMS or a push notification.

Finally, avoid using generic “Yes” or “No” buttons as they do not clearly reflect what happens next. Users might misread the text above or fail to notice a negation, leading to unexpected outcomes. For example, when asking for a confirmation, such as “Are you sure you want to quit?” you might want to go with button labels like “Quit” and “Stay” rather than just “Yes” and “No.”

Tip: If you have difficulty coming up with a button label, this may be a sign that the screen is poorly organized or the flow lacks logic and coherence. For example, a user has to deal with too many different entities and types of tasks on one screen, so the action can’t be summarized with just one verb. Or perhaps a subsequent flow has a lot of variations, making it hard to describe the action a user should take. In such cases, you might want to make changes to the screen (say, break it down into several screens) or the flow (say, add a qualifying question or attribute earlier so that the flow branches less).

Make It Clear To The User Why They Need To Perform The Action
Why it’s important:
  • To create transparency and build trust;
  • To boost conversion rates.

An ideal interface is self-explanatory and needs no microcopy. However, sometimes, we need to convince users to do something for us, especially when it involves providing personal information or interacting with third-party products.

You can use the following formula: “To [get this], do [this] + UI element to make it happen.” For example, “To get your results, provide your email,” followed by an input field.

It’s better to provide the reasoning (“to get your results”) first and then the instructions (“provide your email”): this way, the guidance is more likely to stick in the user’s memory, smoothly leading to the action. If you reverse the order — giving the instructions first and then the reasoning — the user might forget what they need to do and will have to reread the beginning of the sentence, leading to a less smooth and slightly hectic experience.
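As a rough illustration, the formula can be captured in a tiny helper. This is a hypothetical sketch; the function name and the trailing punctuation are my own choices, not part of any real product or design system:

```python
def reason_first(benefit: str, instruction: str) -> str:
    """Compose microcopy per the "To [get this], do [this]" formula:
    the reasoning comes first, then the instruction."""
    return f"To {benefit}, {instruction}."
```

For example, `reason_first("get your results", "provide your email")` produces “To get your results, provide your email.”, which would sit right above the input field.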

Ensure The UI Element Copy Doesn’t Explain How To Interact With This Very Element
Why it’s important:
  • If you need to explain how to interact with a UI element, it may be a sign that the interface is not intuitive;
  • You risk omitting more important, useful text.

Every now and then, I come across meaningless placeholders or excessive toggle copy that explains how to interact with the field or toggle. The most frequent example is the “Search” placeholder for a search field. Occasionally, I see button labels like “Press to continue.”

Mobile and web interfaces have been around for quite a while, and users understand how to interact with buttons, toggles, and fields. Therefore, explanations such as “click,” “tap,” “enter,” and so on seem excessive in most cases. Perhaps it’s only with a group of checkboxes that you might add something like “Select up to 5.”

You might want to add something more useful. For example, instead of a generic “Search” placeholder for a search field, use specific instances a user might type in. If you’re a fashion marketplace, try placeholders like “oversized hoodies,” “women’s shorts,” and so on. Keep in mind the specifics of your website or app: ensure the placeholder is neither too broad nor too specific, and that if a user types in something similar to what you’ve provided, their search will be successful.

Stick To The Rule “1 Microcopy Item = 1 Idea”
Why it’s important:
  • Not to create extra cognitive load, confusion, or friction;
  • To ensure a smooth and simple experience.

Users have short attention spans, scan text instead of reading it thoroughly, and can’t process multiple ideas simultaneously. That’s why it’s crucial to break information down into easily digestible chunks instead of, for example, trying to squeeze all the restrictions into one tooltip.

The golden rule is to provide users only with the information they need at this particular stage to take a specific action or make a decision.

You’ll need to collaborate closely with your designer to ensure the information is distributed over the screen evenly and you don’t overload one design element with a lot of text.

Be Careful With Titles Like “Done,” “Almost There,” “Attention,” And So On
Why it’s important:
  • Not to annoy a user;
  • To be more straightforward and economical with users’ time;
  • Not to overuse their attention;
  • Not to provoke anxiety.

Titles, written in bold and larger font sizes, grab users’ attention. Sometimes, titles are the only text users actually read. Titles stick better in their memory, so they must be understandable as a standalone text.

Titles like “One more thing” or “Almost there” might work well if they align with a product’s tone of voice and the flows where they appear are cohesive and can hardly be interrupted. But keep in mind that users might get distracted.

Use this quick check: set your design aside for about 20 minutes, do something else, and then open only the screen for which you’re writing a title. Is what happens on this screen still understandable from the title? Do you easily recall what has or hasn’t happened, what you were doing, and what should be done next?

Don’t Fall Back On Abstract Examples
Why it’s important:
  • To make the interface more precise and useful;
  • To ease the navigation through the product for a user;
  • To reduce cognitive load.

Some products (e.g., any B2B or financial ones) involve many rules and restrictions that must be explained to the user. To make this more understandable, use real-life examples (with specific numbers, dates, and so on) rather than distilling abstract information into a hint, tooltip, or bottom sheet.

It’s better to provide explanations using real-life examples that users can relate to. Check with engineers if it’s possible to get specific data for each user and add variables and conditions to show every user the most relevant microcopy. For example, instead of saying, “Your deposit limit is $1,000 per calendar month,” you could say, “Until Jan 31, you can deposit $400 more.” This relieves the user of unnecessary work, such as figuring out the start date of the calendar month in their case and calculating the remaining amount.
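The deposit example above can be sketched as a small helper that turns the abstract monthly rule into a personalized message. This is a hypothetical sketch: the function name, the wording, and the currency formatting are assumptions for illustration, not from a real product:

```python
import calendar
from datetime import date

def deposit_hint(monthly_limit: int, deposited_so_far: int, today: date) -> str:
    """Turn the abstract rule "your deposit limit is $1,000 per calendar
    month" into a personalized message with the user's own deadline and
    remaining amount, so they don't have to calculate anything."""
    # Last calendar day of the user's current month.
    last_day = calendar.monthrange(today.year, today.month)[1]
    deadline = date(today.year, today.month, last_day)
    # Remaining allowance, clamped at zero if the limit is already used up.
    remaining = max(monthly_limit - deposited_so_far, 0)
    return f"Until {deadline:%b %d}, you can deposit ${remaining:,} more."
```

With a $1,000 limit, $600 already deposited, and today being January 10, the helper yields “Until Jan 31, you can deposit $400 more.” — the exact message from the example above.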

Try To Avoid Negatives
Why it’s important:
  • Not to increase cognitive load;
  • To prevent friction.

As a rule of thumb, it’s recommended to avoid double negatives, such as “Do not unfollow.” However, I’d go further and advise avoiding single negatives as well. The issue is that to decipher such a message, a user has to perform an excessive logical operation: first eliminating the negation, then trying to understand the gist.

For example, when listing requirements for a username, saying “Don’t use special characters, spaces, or symbols” forces a user to speculate (“If this is not allowed, then the opposite is allowed, which must be…”). It can take additional time to figure out what falls under “special characters.” To simplify the task for the user, opt for something like “Use only numbers and letters.”

Moreover, a user can easily overlook the “not” part and misread the message.

Another aspect worth noting is that negation often seems like a restriction or prohibition, which nobody likes. In some cases, especially in finance, all those don’ts might be perceived with suspicion rather than as a precaution.

Express Action With Verbs, Not Nouns
Why it’s important:
  • To avoid wordiness;
  • To make text easily digestible.

When describing an action, use a verb, not a noun. Nouns that convey the meaning of verbs make texts harder to read and give off a legalistic vibe.

Here are some sure signs you need to paraphrase your text for brevity and simplicity:

  • Forms of “be” as the main verbs;
  • Noun phrases with “make” (e.g., “make a payment/purchase/deposit”);
  • Nouns ending in -tion, -sion, -ment, -ance, -ency (e.g., cancellation);
  • Phrases with “of” (e.g., provision of services);
  • Phrases with “process” (e.g., withdrawal process).

Make Sure You Use Only One Term For Each Entity
Why it’s important: Not to create extra cognitive load, confusion, and anxiety.

Ensure you use the same term for the same object or action throughout the entire app. For example, instead of using “account” and “profile” interchangeably, choose one and stick to it to avoid confusing your users.

The more complicated and/or regulated your product is, the more vital it is to choose precise wording and ensure it aligns with legal terms, the wording users see in the help center, and communication with support agents.

Fewer “Oopsies” In Error Messages
Why it’s important:
  • Not to annoy a user;
  • To save space for more important information.

At first glance, “Oops” may seem sweet and informal (yet with an apologetic touch) and might be expected to decrease tension. However, in the case of repetitive or serious errors, the effect will be quite the opposite.

Use “Oops” and similar words only if you’re sure it suits your brand’s tone of voice and you can finesse it.

As a rule of thumb, good error messages explain what has happened or is happening, why (if we know the reason), and what the user should do. Additionally, include any sensitive information related to the process or flow where the error appears. For example, if an error occurs during the payment process, provide users with information concerning their money.

No Excessive Politeness
Why it’s important: Not to waste precious space on less critical information.

I’m not suggesting we remove every single “please” from the microcopy. However, when it comes to interfaces, our priority is to convey meaning clearly and concisely and explain to users what to do next and why. Often, if you start your microcopy with “please,” you won’t have enough space to convey the essence of your message. Users will appreciate clear guidelines to perform the desired action more than a polite message they struggle to follow.

Remove Tech Jargon
Why it’s important:
  • To make the interface understandable for a broad audience;
  • To avoid confusion and ensure a frictionless experience.

As tech specialists, we’re often subject to the curse of knowledge, and despite our efforts to prioritize users, tech jargon can sneak into our interface copy. Especially if our product targets a wider audience, users may not be tech-savvy enough to understand terms like “icon.”

To ensure your interface doesn’t overwhelm users with professional jargon, a quick and effective method is to show the interface to individuals outside your product group. If that’s not feasible, here’s how to identify jargon: it’s the terminology you use in daily meetings among yourselves or in Jira task titles (e.g., authorization, authentication, and so on), or abbreviations (e.g., OTP code, KYC process, AML rules, and so on).

Ensure That Empty State Messages Don’t Leave Users Frustrated
Why it’s important:
  • For onboarding and navigation;
  • To increase discoverability of particular features;
  • To promote or boost the use of the product;
  • To reduce cognitive load and anxiety about the next steps.

Quite often, a good empty state message is a self-destructing one, i.e., one that helps a user get rid of this emptiness. An empty state message shouldn’t just state “there’s nothing here” — that’s obvious and therefore unnecessary. Instead, it should provide users with a way out, smoothly guiding them into using the product or a specific feature. A well-crafted empty message can even boost conversions.

Of course, there are exceptions, for example, in a reactive interface like a CRM system for a restaurant displaying the status of orders to workers. If there are no orders in progress, there’s nothing the corresponding empty state message can say to nudge or motivate restaurant workers to create new orders themselves.

Place All Important Information At The Beginning
Why it’s important:
  • To keep the user focused;
  • Not to overload a user with info;
  • To avoid information loss due to fading or cropping.

As mentioned earlier, users have short attention spans and often don’t want to focus on the texts they read, especially microcopy. Therefore, ensure you place all necessary information at the beginning of your text. Omit lead-ins, introductory words, and so on. Save less vital details for later in the text.

Ensure Title And Buttons Are Understandable Without Body Text
Why it’s important:
  • For clarity;
  • To overcome the serial position effect;
  • To make sure the interface, the flow, and the next steps are understandable for a user even if they scan the text instead of reading.

There’s a phenomenon called the serial position effect: people tend to remember information better if it’s at the beginning or end of a text or sentence, often overlooking the middle part. When it comes to UX/UI design, this effect is reinforced by the visual hierarchy, which includes the bigger font size of the title and the accentuated buttons. What’s more, the body text is often longer, which puts it at risk of being missed. Since users tend to scan rather than read, ensure your title and buttons make sense even without the body text.

Wrapping up

Trying to find the balance between providing a user with all the necessary explanations, warnings, and reasonings on one hand and keeping the UI intuitive and frictionless on the other hand is a tricky task.

You can facilitate the process of creating microcopy with the help of ChatGPT and AI-based Figma plugins such as Writer or Grammarly. But beware of the limitations these tools have as of now.

For instance, creating a prompt that includes all the necessary details and contexts can take longer than actually writing a title or a label on your own. Grammarly is a nice tool to check the text for typos and mistakes, but when it comes to microcopy, its suggestions might be a bit inaccurate or confusing: you might want to, say, omit articles for brevity or use elliptical sentences, and Grammarly will identify it as a mistake.

You’ll still need a human eye to evaluate the microcopy — and I hope this checklist will come in handy.

Microcopy Checklist

General

✅ Microcopy is role-playable (titles, body text, tooltips, etc., are your “phrases”; button labels, input fields, toggles, menu items, etc. are the user’s “phrases”).

Information presentation & structure

✅ The user has the exact amount of information they need right now to perform an action — not less, not more.
✅ Important information is placed at the beginning of the text.
✅ It’s clear to the user why they need to perform the action.
✅ Everything related to sensitive topics is always visible and static and doesn’t require actions from a user (e.g., not hidden in tooltips).
✅ You provide a user with specific information rather than generic examples.
✅ 1 microcopy item = 1 idea.
✅ 1 entity = 1 term.
✅ Empty state messages provide users with guidelines on what to do (when possible and appropriate).

Style

✅ No tech jargon.
✅ No excessive politeness, esp. at the expense of meaning.
✅ Avoid or reduce the use of “not,” “un-,” and other negatives.
✅ Actions are expressed with verbs, not nouns.

Syntax

✅ UI element copy doesn’t explain how to interact with this very element.
✅ Button label accurately reflects what happens next.
✅ Fewer titles like “done,” “almost there,” and “attention.”
✅ “Oopsies” in error messages are not frequent and align well with the brand’s tone of voice.
✅ Title and buttons are understandable without body text.

]]>
hello@smashingmagazine.com (Irina Silyanova)
<![CDATA[How To Make A Strong Case For Accessibility]]> https://smashingmagazine.com/2024/06/how-make-strong-case-accessibility/ https://smashingmagazine.com/2024/06/how-make-strong-case-accessibility/ Wed, 26 Jun 2024 12:00:00 GMT Getting support for accessibility efforts isn’t easy. There are many accessibility myths, wrong assumptions, and expectations that make accessibility look like a complex, expensive, and time-consuming project. Let’s fix that!

Below are some practical techniques that have been working well for me to convince stakeholders to support and promote accessibility in small and large companies.

This article is part of our ongoing series on UX. You might want to take a look at Smart Interface Design Patterns 🍣 and the upcoming live UX training as well. Use code BIRDIE to save 15% off.

Launching Accessibility Efforts

A common way to address accessibility is to speak to stakeholders through the lens of corporate responsibility and ethical and legal implications. Personally, I’ve never been very successful with this strategy. People typically dismiss concerns that they can’t relate to, and as designers, we can’t build empathy with facts, charts, or legal concerns.

The problem is that people often don’t know how accessibility applies to them. There is a common assumption that accessibility is dull and boring and leads to “unexciting” and unattractive products. Unsurprisingly, businesses often neglect it as an irrelevant edge case.

So, I use another strategy. I start conversations about accessibility by visualizing it. I explain the different types of accessibility needs, ranging from permanent to temporary to situational — and I try to explain what exactly it actually means to our products. Mapping a more generic understanding of accessibility to the specifics of a product helps everyone explore accessibility from a point that they can relate to.

And then I launch a small effort — just a few usability sessions, to get a better understanding of where our customers struggle and where they might be blocked. If I can’t get access to customers, I try to proxy test via sales, customer success, or support. Nothing is more impactful than seeing real customers struggling in their real-life scenario with real products that a company is building.

From there, I move forward. I explain inclusive design, accessibility, neurodiversity, EAA, WCAG, ARIA. I bring people with disabilities into testing as we need a proper representation of our customer base. I ask for small commitments first, then ask for more. I reiterate over and over and over again that accessibility doesn’t have to be expensive or tedious if done early, but it can be very expensive when retrofitted or done late.

Throughout that entire journey, I try to anticipate objections about costs, timing, competition, slowdowns, dullness — and keep explaining how accessibility can reduce costs, increase revenue, grow user base, minimize risks, and improve our standing in new markets. For that, I use a few templates that I always keep nearby just in case an argument or doubts arise.

Useful Templates To Make A Strong Case For Accessibility

1. “But Accessibility Is An Edge Case!”

❌ “But accessibility is an edge case. Given the state of finances right now, unfortunately, we really can’t invest in it right now.”

🙅🏽♀️ “I respectfully disagree. 1 in 6 people around the world experience disabilities. In fact, our competitors [X, Y, Z] have launched accessibility efforts ([references]), and we seem to be lagging behind. Plus, it doesn’t have to be expensive. But it will be very expensive once we retrofit much later.”

2. “But There Is No Business Value In Accessibility!”

❌ “We know that accessibility is important, but at the moment, we need to focus on efforts that will directly benefit business.”

🙅🏼♂️ “I understand what you are saying, but actually, accessibility directly benefits business. Globally, the extended market is estimated at 2.3 billion people, who control an incremental $6.9 trillion in annual disposable income. Prioritizing accessibility very much aligns with your goals to increase leads and customer engagement, mitigate risk, and reduce costs.” (via Yichan Wang)

3. “But We Don’t Have Disabled Users!”

❌ “Why should we prioritize accessibility? Looking at our data, we don’t really have any disabled users at all. Seems like a waste of time and resources.”

🙅♀️ “Well, if a product is inaccessible, users with disabilities can’t and won’t be using it. But if we do make our product more accessible, we open the door for prospective users for years to come. Even small improvements can have a high impact. It doesn’t have to be expensive or time-consuming.”

4. “Screen Readers Won’t Work With Our Complex System!”

❌ “Our application is very complex and used by expert users. Would it even work at all with screen readers?”

🙅🏻♀️ “It’s not about designing only for screen readers. Accessibility can be permanent, but it can also be temporary and situational — e.g., when you hold a baby in your arms or if you had an accident. Actually, it’s universally useful and beneficial for everyone.”

5. “We Can’t Win Market With Accessibility Features!”

❌ “To increase our market share, we need features that benefit everyone and improve our standing against competition. We can’t win the market with accessibility.”

🙅🏾♂️ “Modern products succeed not by designing more features, but by designing better features that improve customer’s efficiency, success rate, and satisfaction. And accessibility is one of these features. For example, voice control and auto-complete were developed for accessibility but are now widely used by everyone. In fact, the entire customer base benefits from accessibility features.”

6. “Our Customers Can’t Relate To Accessibility Needs”

❌ “Our research clearly shows that our customers are young and healthy, and they don’t have accessibility needs. We have other priorities, and accessibility isn’t one of them.”

🙅♀️ “I respectfully disagree. People of all ages can have accessibility needs. In fact, accessibility features show your commitment to inclusivity, reaching out to every potential customer of any age, regardless of their abilities.

This not only resonates with a diverse audience but also positions your brand as socially responsible and empathetic. As you know, our young user base increasingly values corporate responsibility, and this can be a significant differentiator for us, helping to build a loyal customer base for years to come.” (via Yichan Wang)

7. “Let’s Add Accessibility Later”

❌ “At the moment, we need to focus on the core features of our product. We can always add accessibility later once the product is more stable.”

🙅🏼 “I understand concerns about timing and costs. However, it’s important to note that integrating accessibility from the start is far more cost-effective than retrofitting it later. If accessibility is considered after development is complete, we will face significant additional expenses for auditing accessibility, followed by potentially extensive work involving a redesign and redevelopment.

This process can be significantly more expensive than embedding accessibility from the beginning. Furthermore, delaying accessibility can expose your business to legal risks. With the increasing number of lawsuits for non-compliance with accessibility standards, the cost of legal repercussions could far exceed the expense of implementing accessibility now. The financially prudent move is to work on accessibility now.”

You can find more useful ready-to-use templates in Yichan Wang’s Designer’s Accessibility Advocacy Toolkit — a fantastic resource to keep nearby.

Building Accessibility Practices From Scratch

As mentioned above, nothing is more impactful than visualizing accessibility. However, it requires building accessibility research and accessibility practices from scratch, and it might feel like an impossible task, especially in large corporations. In “How We’ve Built Accessibility Research at Booking.com”, Maya Alvarado presents a fantastic case study on how to build accessibility practices and inclusive design into UX research from scratch.

Maya rightfully points out that automated accessibility testing alone isn’t reliable. Compliance means that a user can use your product, but it doesn’t mean that it’s a great user experience. With manual testing, we make sure that customers actually meet their goals and do so effectively.

Start by gathering colleagues and stakeholders interested in accessibility. Document what research was done already and where the gaps are. And then whenever possible, include 5–12 users with disabilities in accessibility testing.

Then, run a small accessibility initiative around key flows. Tap into critical touch points and research them. As you are making progress, extend to components, patterns, flows, and service design. And eventually, incorporate inclusive sampling into all research projects — at least 15% of usability testers should have a disability.

Companies often struggle to recruit testers with disabilities. One way to find participants is to reach out to local chapters, local training centers, non-profits, and public communities of users with disabilities in your country. Ask the admins’ permission before posting your research announcement so that it won’t be rejected. If you test on site, add an extra $25–$50 for transportation, depending on the participant’s disability.

I absolutely love the idea of extending Microsoft’s Inclusive Design Toolkit to meet the specific user needs of a product. It adds a different dimension to disability considerations, one that is less abstract and much easier for the entire organization to relate to.

As Maya noted, inclusive design is about building a door that can be opened by anyone and lets everyone in. Accessibility isn’t a checklist — it’s a practice that goes beyond compliance. A practice that involves actual people with actual disabilities throughout all UX research activities.

Wrapping Up

To many people, accessibility is a big mystery box. They might have never seen a customer with disabilities using their product, and they don’t really understand what it involves and requires. But we can make accessibility relatable, approachable, and visible by bringing accessibility testing to our companies — even if it’s just a handful of tests with people with disabilities.

No manager really wants to deliberately ignore the needs of their paying customers — they just need to understand these needs first. Ask for small commitments, and get the ball rolling from there.

Set up an accessibility roadmap with actions, timelines, roles and goals. Frankly, this strategy has been working for me much better than arguing about legal and moral obligations, which typically makes stakeholders defensive and reluctant to commit.

Fingers crossed! And a huge thank-you to everyone working on and improving accessibility in your day-to-day work, often without recognition and often fueled by your own enthusiasm and passion — thank you for your incredible work in pushing accessibility forward! 👏🏼👏🏽👏🏾

Useful Resources

Making A Case For Accessibility

Accessibility Testing

Meet Smart Interface Design Patterns

If you are interested in UX and design patterns, take a look at Smart Interface Design Patterns, our 10h-video course with 100s of practical examples from real-life projects — with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview. Use code BIRDIE to save 15% off.

Meet Smart Interface Design Patterns, our video course on interface design & UX.

100 design patterns & real-life examples.
10h-video course + live UX training. Free preview.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[So Your Website Or App Is Live… Now What?]]> https://smashingmagazine.com/2024/06/website-improvement-after-launch/ https://smashingmagazine.com/2024/06/website-improvement-after-launch/ Mon, 24 Jun 2024 18:00:00 GMT Whether you’ve launched a redesign of your website or rolled out a new feature in your app, that is the point where people normally move on to the next project. But, that is a mistake.

It’s only once a site, app, or feature goes live that we get to see actual users interacting with it in a completely natural way. It’s only then that we know if it has succeeded or failed.

Not that things are ever that black and white. Even if it does seem successful, there’s always room for improvement. This is particularly true with conversion rate optimization. Even small tweaks can lead to significant increases in revenue, leads, or other key metrics.

Want to learn more on testing and improving your website? Join Paul Boag in his upcoming live workshop on Fast and Budget-Friendly User Research and Testing, starting July 11.

Making Time for Post-Launch Iteration

The key is to build in time for post-launch optimization from the very beginning. When you define your project timeline or sprint, don’t equate launch with the end. Instead, set the launch of the new site, app, or feature about two-thirds of the way through your timeline. This leaves time after launch for monitoring and iteration.

Better still, divide your team’s time into two work streams. One would focus on “innovation” — rolling out new features or content. The second would focus on “optimization” and improving what is already online.

In short, do anything you can to ring-fence at least some time for optimizing the experience post-launch.

Once you’ve done that, you can start identifying areas in your site or app that are underperforming and could do with improvement.

Identifying Problem Points

This is where analytics can help. Look for areas with high bounce rates or exit points. Users are dropping off at these points. Also, look for low-performing conversion points. But don’t forget to consider this as a percentage of the traffic the page or feature gets. Otherwise, your most popular pages will always seem like the biggest problem.
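To avoid that trap, it can help to rank pages by exit *rate* rather than raw exit counts. A quick sketch of the idea (the page data below is made up for illustration):

```javascript
// Rank pages by exit rate rather than raw exit counts, so the most popular
// pages don't automatically look like the biggest problems.
// The page data below is illustrative, not real analytics output.
function rankByExitRate(pages) {
  return pages
    .map((page) => ({ ...page, exitRate: page.exits / page.views }))
    .sort((a, b) => b.exitRate - a.exitRate);
}

const pages = [
  { path: "/", views: 50000, exits: 9000 },       // 18% exit rate, but most exits
  { path: "/pricing", views: 4000, exits: 2200 }, // 55% exit rate
  { path: "/checkout", views: 1200, exits: 540 }, // 45% exit rate
];

// Logs the paths worst-first: "/pricing", then "/checkout", then "/"
console.log(rankByExitRate(pages).map((page) => page.path));
```

Ranked by raw exits, the homepage would top the list; ranked by rate, the pricing page correctly stands out.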

To be honest, this is more fiddly than it should be in Google Analytics 4, so if you’re not familiar with the platform you might need some help.

Not that Google Analytics is the only tool that can help; I also highly recommend Microsoft Clarity. This free tool provides detailed user data. It includes session recordings and heatmaps. These help you find where to improve on your website or app.

Pay particular attention to “insights,” which will show you metrics including:

  • Rage clicks
    Where people repeatedly click something out of frustration.
  • Dead clicks
    Where people click on something that isn’t clickable.
  • Excessive scrolling
    Where people scroll up and down looking for something.
  • Quick backs
    Where people visit a page by mistake and quickly return to the previous page.

Along with exits and bounces, these metrics indicate that something is wrong and should be looked at in more depth.

Diagnosing The Specific Issues

Once you’ve found a problem page, the next challenge is diagnosing exactly what’s going wrong.

I tend to start by looking at heat maps of the page that you can find in Clarity or similar tools. These heatmaps will show you where people are engaged on the page and potentially indicate problems.

If that doesn’t help, I will watch recordings of people showing the problem behavior. Watching these session recordings can provide priceless insights. They show the specific pain points users are facing. They can guide you to potential solutions.

If I am still confused about the problem, I may run a survey. I’ll ask users about their experience. Or, I may recruit some people and run usability testing on the page.

Surveys are easier to run, but can be somewhat disruptive and don’t always provide the desired insights. If I do use a survey, I will normally only display it on exit-intent to minimize disruption to the user experience.

If I run usability testing, I favor facilitated testing in this scenario. Although more time-consuming to run, it allows me to ask questions that almost always uncover the problem on the page. Normally, you can get away with only testing with 3 to 6 people.

Once you’ve identified the specific issue, you can then start experimenting with solutions to address it.

Testing Possible Solutions

There are almost always multiple ways of addressing any given issue, so it’s important to test different approaches to find the best. How you approach this testing will depend on the complexity of your solution.

Sometimes a problem can be fixed with a simple solution involving some UI tweaks or content changes. In this case, you can simply test the variations using A/B testing to see which performs better.

A/B Test Smaller Changes

If you haven’t done A/B testing before, it really isn’t that complicated. The only downside is that A/B testing tools are massively overpriced in my opinion. That said, Crazy Egg is more reasonable (although not as powerful) and there is a free tier with VWO.

Using an A/B testing tool starts by setting a goal, like adding an item to the basket. Then, you make versions of the page with your proposed improvement. These are shown to a percentage of visitors.

Making the changes is normally done through a simple WYSIWYG interface and it only takes a couple of minutes.

If your site has lots of traffic, I would encourage you to explore as many possible solutions as possible. If you have a smaller site, focus on testing just a couple of ideas. Otherwise, it will take forever to see results.

Also, with lower-traffic sites, keep the goal as close to the experiment as possible to maximize the amount of traffic. If there’s a big gap between goal and experiment, a lot of people will drop out during the process, and you’ll have to wait longer for results.
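A back-of-envelope sketch shows why the goal placement matters. This is not a proper statistical power calculation, and all the numbers are illustrative assumptions, but it shows how the conversion rate on the chosen goal drives test duration:

```javascript
// Rough estimate (not a power calculation): weeks until each A/B variant
// has seen a target number of conversions. All inputs are illustrative.
function weeksToFinish({ weeklyVisitors, conversionRate, variants, conversionsPerVariant }) {
  const weeklyConversionsPerVariant = (weeklyVisitors / variants) * conversionRate;
  return Math.ceil(conversionsPerVariant / weeklyConversionsPerVariant);
}

// Goal close to the experiment: 10% of visitors complete it.
console.log(weeksToFinish({ weeklyVisitors: 2000, conversionRate: 0.1, variants: 2, conversionsPerVariant: 300 }));  // 3 weeks
// Goal far downstream: only 1% make it that far.
console.log(weeksToFinish({ weeklyVisitors: 2000, conversionRate: 0.01, variants: 2, conversionsPerVariant: 300 })); // 30 weeks
```

Same traffic, same change under test; moving the goal further from the experiment stretches the test by an order of magnitude.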

Not that A/B testing is always the right way to test ideas. When your solution is more complex, involving new functionality or multiple screens, A/B testing won’t work well. That’s because to A/B test that level of change, you need to effectively build the solution, negating most of the benefits A/B testing provides.

Prototype And Test Larger Changes

Instead, your best option in such circumstances is to build a prototype that you can test with remote testing.

In the first instance, I tend to run unfacilitated testing using a tool like Maze. Unfacilitated testing is quick to set up, takes little of your time, and Maze will even provide you with analytics on success rates.

But, if unfacilitated testing finds problems and you doubt how to fix them, then consider facilitated testing. That’s because facilitated testing allows you to ask questions and get to the heart of any issues that might arise.

The only drawback of usability testing over A/B testing is recruitment. It can be hard to find the right participants. If that’s the case, consider using a service like Askable, who will carry out recruitment for you for a small fee.

Failing that, don’t be afraid to use friends and family as in most cases getting the exact demographic is less important than you might think. As long as people have comparable physical and cognitive abilities, you shouldn’t have a problem. The only exception is if the content of your website or app is highly specialized.

That said, I would avoid using anybody who works for the organization. They will inevitably be institutionalized and unable to provide unbiased feedback.

Whatever approach you use to test your solution, once you’re happy, you can push that change live for all users. But, your work is still not done.

Rinse And Repeat

Once you’ve solved one issue, return to your analytics. Find the next biggest problem. Repeat the whole process. As you fix some problems, more will become apparent, and so you’ll quickly find yourself with an ongoing program of improvements that can be made.

The more you carry out this kind of work, the more the benefits will become obvious. You will gradually see improvements in metrics like engagement, conversion, and user satisfaction. You can use these metrics to make the case to management for ongoing optimization. This is better than the trap of releasing feature after feature with no regard for their performance.

Meet Fast and Budget-Friendly User Research And Testing

If you are interested in User Research and Testing, check out Paul’s workshop on Fast and Budget-Friendly User Research and Testing, kicking off July 11.

Live workshop with real-life examples.
5h live workshop + friendly Q&A.

]]>
hello@smashingmagazine.com (Paul Boag)
<![CDATA[How I Learned To Stop Worrying And Love Multimedia Writing]]> https://smashingmagazine.com/2024/06/mdx-or-how-i-learned-love-multimedia-writing/ https://smashingmagazine.com/2024/06/mdx-or-how-i-learned-love-multimedia-writing/ Fri, 21 Jun 2024 18:00:00 GMT Prior to the World Wide Web, the act of writing remained consistent for centuries. Words were put on paper, and occasionally, people would read them. The tools might change — quills, printing presses, typewriters, pens, what have you — and an adventurous author may perhaps throw in imagery to complement their copy.

We all know that the web shook things up. With its arrival, writing could become interactive and dynamic. As web development progressed, the creative possibilities of digital content grew — and continue to grow — exponentially. The line between web writing and web technologies is blurry these days, and by and large, I think that’s a good thing, though it brings its own challenges. As a sometimes-engineer-sometimes-journalist, I straddle those worlds more than most and have grown to view the overlap as the future.

Writing for the web is different from traditional forms of writing. It is not a one-size-fits-all process. I’d like to share the benefits of writing content in digital formats like MDX using a personal project of mine as an example. And, by the end, my hope is to convince you of the greater writing benefits of MDX over more traditional formats.

A Little About Markdown

At its most basic, MDX is Markdown with components in it. For those not in the know, Markdown is a lightweight markup language created by John Gruber in 2003, and it’s everywhere today. GitHub, Trello, Discord — all sorts of sites and services use it. It’s especially popular for authoring blog posts, which makes sense as blogging is very much the digital equivalent of journaling. The syntax doesn’t “get in the way,” and many content management systems support it.

Markdown’s goal is an “easy-to-read and easy-to-write plain text format” that can readily be converted into XHTML/HTML if needed. Since its inception, Markdown was supposed to facilitate a writing workflow that integrated the physical act of writing with digital publishing.

We’ll get to actual examples later, but for the sake of explanation, compare a block of text written in HTML to the same text written in Markdown.

HTML is a pretty legible format as it is:

<h2>Post Title</h2>

<p>This is an example block of text written in HTML. We can link things up like this, or format the code with <strong>bolding</strong> and <em>italics</em>. We can also make lists of items:</p>

<ul>
  <li>Like this item</li>
  <li>Or this one</li>
  <li>Perhaps a third?</li>
</ul>

<img src="image.avif" alt="And who doesn't enjoy an image every now and then?">

But Markdown is somehow even less invasive:

## Post Title

This is an example block of text written in HTML. We can link things up like this or format the code with **bolding** and *italics*. We can also make lists of items:

- Like this item
- Or this one
- Perhaps a third?


I’ve become a Markdown disciple since I first learned to code. Its clean and relatively simple syntax and wide compatibilities make it no wonder that Markdown is as pervasive today as it is. Having structural semantics akin to HTML while preserving the flow of plain text writing is a good place to be.

However, it could be accused of being a bit too clean at times. If you want to communicate with words and images, you’re golden, but if you want to jazz things up, you’ll find yourself looking further afield for other options.

Gruber set out to create a “format for writing for the web,” and given its ongoing popularity, you have to say he succeeded, yet the web 20 years ago is a long way away from what it is today.

This is the all-important context for what I want to discuss about MDX because MDX is an offshoot of Markdown, only more capable of supporting richer forms of multimedia — and even user interaction. But before we get into that, we should also discuss the concept of web components because that’s the second significant piece that MDX brings to the table.

Further Reading

A Little About Components

The move towards richer multimedia websites and apps has led to a thriving ecosystem of web development frameworks and libraries, including React, Vue, Svelte, and Astro, to name a few. The idea that we can have reusable components that are not only interactive but also respond to each other has driven this growth and continues to push on evolving web platform features like web components.

MDX is like a bridge that connects Markdown with modern web tooling. Simply put, MDX weds Markdown’s simplicity with the creative possibilities of modern web frameworks.

By leaning into the overlaps rather than trying to abstract them away at all costs, we find untold potential for beautiful digital content.

Further Reading

A Case Study

My own experience with MDX took shape in a side project of mine: teeline.online. To cut a long story short, before I was a software engineer, I was a journalist, and part of my training involved learning a type of shorthand called Teeline. What it boils down to is ripping out as many superfluous letters as possible — I like to call this process “disemvowelment” — then using Teeline’s alphabet to write the remaining content. This has allowed people like me to write lots of words very quickly.

During my studies, I found online learning resources lacking, so as my engineering skills improved, I started working on the kind of site I’d have used when I was a student if it was available. Hence, teeline.online.

I built the teeline.online site with the Svelte framework for its components. The site’s centerpiece is a dataset of shorthand characters and combinations with which hundreds of outlines can be rendered, combined, and animated as SVG paths.

Likewise, Teeline’s “disemvowelment” script could be wired into a single component that I could then use as many times as I like.

Then, of course, as is only natural when working with components, I could combine them to show the Teeline evolution that converts longhand words into shorthand outlines.

The Markdown, meanwhile, looks as simple as this:
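(The actual source file isn’t reproduced here; the following is a minimal sketch of what such an MDsveX file might contain, with illustrative frontmatter and content rather than the project’s real code:)

```markdown
---
title: The Teeline Alphabet
---

## The Teeline Alphabet

Regular Markdown prose flows as usual, and components drop in
wherever they are needed:

<WordToOutline word="shorthand" />
```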

It’s not exactly the sort of complex codebase you might expect for an app. Meanwhile, the files themselves can sit in a nice, tidy directory of their own:

The syllabus is neatly filed away in its own folder. With a bit of metadata sprinkled in, I have everything I need to render an entire section of the site using routing. The setup feels like a fluid medium between worlds. If you want to write with words and pictures, you can. If an idea comes to mind for a component that would better express what you’re going for, you can go make it and drop it in.

In fairness, a “WordToOutline” component like this might not mean much to Teeline newcomers, though with such a clear connection between the Markdown and the rendered pages, it’s not much of a stretch to work out what it is. And, of course, there’s always the likes of services like Storybook that can be used to organize component libraries as they grow.

The raw form of multimedia content can be pretty unsightly — something that needs to be kept at arm’s length by content management systems. With MDX — and its ilk — the content feels rather friendly and legible.

Benefits

I think you can start to see some of the benefits of an MDX setup like this. There are two key benefits in particular that I think are worth calling out.

Editorial Benefits

First and foremost, MDX doesn’t disrupt the writing and editorial flow of working with content. When we’re working with traditional code languages, even HTML, the format is cluttered with things like opening and closing tags. And it’s even more convoluted if we need the added complexity of embedding components in the content.

MDX (and Markdown, for that matter) is much less verbose. Content is a first-class citizen that takes up way less space than typical markup, making it clear and legible. And where we need the complex affordance of components, those can be dropped in without disrupting that nice editorial experience.

Another key benefit of using MDX is reusability. If, for example, I want to display the same information as images instead, each image would have to be bespoke. But we all know how inefficient it is to maintain content in raster images — it requires making edits in a completely different application, which is highly inconvenient. With an old-school approach, if I update the design of the site, I’m left having to create dozens of images in the new style.

With MDX (or an equivalent like MDsveX), I only need to make the change once, and it updates everywhere. Having done the leg work of building reusable components, I can weave them throughout the syllabus as I see fit, safe in the knowledge that updates will roll out across the board — and do it without affecting the editorial experience whatsoever.

Consider the time it would take to create images or videos representing the same thing. Over time, using fixed assets like images becomes a form of technical — or perhaps editorial — debt, while a multimedia approach that leans into components proves to be faster and more flexible than vanilla methods.

Tech Benefits

I just made the point that working with reusable components in MDX allows Markdown content to become more robust without affecting the content’s legibility as we author it. Using Svelte’s version of MDX, MDsveX, I was able to combine the clean, readable conventions of Markdown with the rich, interactive potential of components.

Caveats

It’s only right that all my gushing about MDX and its benefits be tempered with a reality check or two. Like anything else, MDX has its limitations, and your mileage with it will vary.

That said, I believe that those limitations are likely to show up when MDX is not the best choice for a particular project. There’s a sweet spot that MDX fills, and it’s when we need to sprinkle additional web functionality into the content. We get the best of two worlds: minimal markup and modern web features.

But if components aren’t needed, MDX is overkill when all you need is a clean way to write content that ports nicely into HTML to be consumed by whatever app or platform you use to display it on the web.

Without components, MDX is akin to caring for a skinned elbow with a cast; it’s way more than what’s needed in that situation, and the returns you get from Markdown’s legibility will diminish.

Similarly, if your technical needs go beyond components, you may be looking at a more complex architecture than what MDX can support, and you would be best leaning into what works best for content in the particular framework or stack you’re using.

Code doesn’t age as well as words or images do. An MDX-esque approach does sign you up for the maintenance work of dependency updates, refactoring, and — god forbid — framework migrations. I haven’t had to face the last of those realities yet, though I’d say the first two are well worth it. Indeed, they’re good habits to keep.

Key Takeaways

Writing with MDX continues to be a learning experience for me, but it’s already made a positive impact on my editorial work.

Specifically, I’ve found that MDX improves the quality of my writing. I think more laterally about how to convey ideas.

Is what I’m saying best conveyed in words, an image, or a data visualization? Perhaps an interactive game?

There is way more potential to enhance my words with componentry than I would get with Markdown alone, opening more avenues for what I can say and how I say it.

Of course, those components do not come for free. MDX does sign you up to build those, regardless of whether you have a set of predefined components included in your framework. At the same time, I’d argue that the opportunities MDX opens up for writing greatly outweigh having to build or maintain a few components.

If MDX had been around in the age of Leonardo da Vinci, perhaps he would have reached for it in his journals. I know I’m taking a great leap of assumption here, but the complexity of what he was writing and trying to describe in technical terms with illustrations would have benefited greatly from MDX, for everything from interactive demos of his ideas to a better writing experience overall.

Further Reading

Multimedia Writing

In many respects, MDX’s rich, varied way of approaching content is something that Markdown — and writing for the web in general — encourages already. We don’t think only in terms of words but of links, images, and semantic structure. MDX and its equivalents merely take the lid off the cookie jar so we can enhance our work.

Wouldn’t it be nice if… is a redundant turn of phrase on the web. There may be technical hurdles — or, in my case, skill and knowledge hurdles — but it’s a buzz to think about ways in which your thoughts can best manifest on screen.

At the same time, the simplicity of Markdown is so unintrusive. If someone wants to write content formatted in vanilla Markdown, it’s totally possible to do that without trading up to MDX.

Just having the possibility of bespoke multimedia content is enough to change the creative process. It leaves you using words because you want to, not because you have to.

Why describe the solar system when you can render an explorable one? Why have a picture of a proposed skyscraper when you can display a 3D model? Writing with MDX (or, more accurately, MDsveX) has changed my entire thought process. Potential answers to the question, How do I best get this across?, become more expansive.

As You Please

Good things happen when worlds collide. New possibilities emerge when seemingly disparate things come together. Many content management systems shield writers — and writing — from code. To my mind, this is like shielding painters from wider color palettes, chefs from exotic ingredients, or sculptors from different types of tools.

Leaning into the overlap between writing and coding gets us closer to one of the web’s great joys: if you can imagine it, you can probably do it.

]]>
hello@smashingmagazine.com (Frederick O’Brien)
<![CDATA[Uniting Web And Native Apps With 4 Unknown JavaScript APIs]]> https://smashingmagazine.com/2024/06/uniting-web-native-apps-unknown-javascript-apis/ https://smashingmagazine.com/2024/06/uniting-web-native-apps-unknown-javascript-apis/ Thu, 20 Jun 2024 18:00:00 GMT A couple of years ago, four JavaScript APIs landed at the bottom of awareness in the State of JavaScript survey. I took an interest in those APIs because they have so much potential to be useful but don’t get the credit they deserve. Even after a quick search, I was amazed at how many new web APIs have been added to the web platform that aren’t getting their dues, suffering from a lack of awareness and browser support.

That situation can be a “catch-22”:

An API is interesting but lacks awareness due to incomplete support, and there is no immediate need to support it due to low awareness.

Most of these APIs are designed to power progressive web apps (PWA) and close the gap between web and native apps. Bear in mind that creating a PWA involves more than just adding a manifest file. Sure, it’s a PWA by definition, but it functions like a bookmark on your home screen in practice. In reality, we need several APIs to achieve a fully native app experience on the web. And the four APIs I’d like to shed light on are part of that PWA puzzle that brings to the web what we once thought was only possible in native apps.

You can see all these APIs in action in this demo as we go along.

1. Screen Orientation API

The Screen Orientation API can be used to sniff out the device’s current orientation. Once we know whether a user is browsing in a portrait or landscape orientation, we can use it to enhance the UX for mobile devices by changing the UI accordingly. We can also use it to lock the screen in a certain position, which is useful for displaying videos and other full-screen elements that benefit from a wider viewport.

Using the global screen object, you can access various properties the screen uses to render a page, including the screen.orientation object. It has two properties:

  • type: The current screen orientation. It can be: "portrait-primary", "portrait-secondary", "landscape-primary", or "landscape-secondary".
  • angle: The current screen orientation angle. It can be any number from 0 to 360 degrees, but it’s normally set in multiples of 90 degrees (e.g., 0, 90, 180, or 270).

On mobile devices, if the angle is 0 degrees, the type is most often going to evaluate to "portrait" (vertical), but on desktop devices, it is typically "landscape" (horizontal). This makes the type property precise for knowing a device’s true position.

The screen.orientation object also has two methods:

  • .lock(): This is an async method that takes a type value as an argument to lock the screen.
  • .unlock(): This method unlocks the screen to its default orientation.

And lastly, screen.orientation has an "orientationchange" event that fires when the orientation has changed.

Browser Support

Finding And Locking Screen Orientation

Let’s code a short demo using the Screen Orientation API to know the device’s orientation and lock it in its current position.

This can be our HTML boilerplate:

<main>
  <p>
    Orientation Type: <span class="orientation-type"></span>
    <br />
    Orientation Angle: <span class="orientation-angle"></span>
  </p>

  <button type="button" class="lock-button">Lock Screen</button>

  <button type="button" class="unlock-button">Unlock Screen</button>

  <button type="button" class="fullscreen-button">Go Full Screen</button>
</main>

On the JavaScript side, we inject the screen orientation type and angle properties into our HTML.

let currentOrientationType = document.querySelector(".orientation-type");
let currentOrientationAngle = document.querySelector(".orientation-angle");

currentOrientationType.textContent = screen.orientation.type;
currentOrientationAngle.textContent = screen.orientation.angle;

Now, we can see the device’s orientation and angle properties. On my laptop, they are "landscape-primary" and 0.

If we listen to the window’s orientationchange event, we can see how the values are updated each time the screen rotates.

window.addEventListener("orientationchange", () => {
  currentOrientationType.textContent = screen.orientation.type;
  currentOrientationAngle.textContent = screen.orientation.angle;
});

To lock the screen, we need to first be in full-screen mode, so we will use another extremely useful feature: the Fullscreen API. Nobody wants a webpage to pop into full-screen mode without their consent, so we need transient activation (i.e., a user click on a DOM element) for it to work.

The Fullscreen API has two methods:

  1. Document.exitFullscreen() is used from the global document object,
  2. Element.requestFullscreen() makes the specified element and its descendants go full-screen.

We want the entire page to be full-screen so we can invoke the method from the root element at the document.documentElement object:

const fullscreenButton = document.querySelector(".fullscreen-button");

fullscreenButton.addEventListener("click", async () => {
  // If it is already in full-screen, exit to normal view
  if (document.fullscreenElement) {
    await document.exitFullscreen();
  } else {
    await document.documentElement.requestFullscreen();
  }
});

Next, we can lock the screen in its current orientation:

const lockButton = document.querySelector(".lock-button");

lockButton.addEventListener("click", async () => {
  try {
    await screen.orientation.lock(screen.orientation.type);
  } catch (error) {
    console.error(error);
  }
});

And do the opposite with the unlock button:

const unlockButton = document.querySelector(".unlock-button");

unlockButton.addEventListener("click", () => {
  screen.orientation.unlock();
});

Can’t We Check Orientation With a Media Query?

Yes! We can indeed check page orientation via the orientation media feature in a CSS media query. However, media queries compute the current orientation by checking if the width is “bigger than the height” for landscape or “smaller” for portrait. By contrast,

The Screen Orientation API checks for the screen rendering the page regardless of the viewport dimensions, making it resistant to inconsistencies that may crop up with page resizing.
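For comparison, here is a minimal sketch of the media-query approach (the selector and rule are illustrative, not from a real project):

```css
/* Orientation here is derived from viewport proportions,
   not from the physical position of the screen. */
@media (orientation: landscape) {
  .gallery {
    flex-direction: row;
  }
}
```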

You may have noticed how PWAs like Instagram and X force the screen to be in portrait mode even when the native system orientation is unlocked. It is important to notice that this behavior isn’t achieved through the Screen Orientation API, but by setting the orientation property on the manifest.json file to the desired orientation type.
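That manifest-based approach might look like this (the field values here are illustrative):

```json
{
  "name": "Example PWA",
  "display": "standalone",
  "orientation": "portrait"
}
```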

2. Device Orientation API

Another API I’d like to poke at is the Device Orientation API. It provides access to a device’s gyroscope sensors to read the device’s orientation in space; something used all the time in mobile apps, mainly games. The API makes this happen with a deviceorientation event that triggers each time the device moves. It has the following properties:

  • event.alpha: Orientation along the Z-axis, ranging from 0 to 360 degrees.
  • event.beta: Orientation along the X-axis, ranging from -180 to 180 degrees.
  • event.gamma: Orientation along the Y-axis, ranging from -90 to 90 degrees.
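One practical wrinkle worth noting: iOS Safari (13+) gates gyroscope data behind a static DeviceOrientationEvent.requestPermission() method, which must be called from a user gesture. A defensive feature-detection sketch (the helper function and its return labels are our own, not part of the API):

```javascript
// Feature-detect gyroscope access before adding a "deviceorientation"
// listener. The helper and its string labels are illustrative conventions.
function gyroscopeSupport(globalObj) {
  if (typeof globalObj.DeviceOrientationEvent === "undefined") {
    return "unsupported"; // e.g., outside a browser
  }
  return typeof globalObj.DeviceOrientationEvent.requestPermission === "function"
    ? "needs-permission" // iOS: await DeviceOrientationEvent.requestPermission() from a click first
    : "ready";           // elsewhere: just add the "deviceorientation" listener
}

console.log(gyroscopeSupport(globalThis));
```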

Browser Support

Moving Elements With Your Device

In this case, we will make a 3D cube with CSS that can be rotated with your device! The full instructions I used to make the initial CSS cube are credited to David DeSandro and can be found in his introduction to 3D transforms.

To rotate the cube, we change its CSS transform properties according to the device orientation data:

const currentAlpha = document.querySelector(".currentAlpha");
const currentBeta = document.querySelector(".currentBeta");
const currentGamma = document.querySelector(".currentGamma");

const cube = document.querySelector(".cube");

window.addEventListener("deviceorientation", (event) => {
  currentAlpha.textContent = event.alpha;
  currentBeta.textContent = event.beta;
  currentGamma.textContent = event.gamma;

  cube.style.transform = `rotateX(${event.beta}deg) rotateY(${event.gamma}deg) rotateZ(${event.alpha}deg)`;
});

This is the result:

3. Vibration API

Let’s turn our attention to the Vibration API, which, unsurprisingly, allows access to a device’s vibrating mechanism. This comes in handy when we need to alert users with in-app notifications, like when a process is finished or a message is received. That said, we have to use it sparingly; no one wants their phone blowing up with notifications.

There’s just one method that the Vibration API gives us, and it’s all we need: navigator.vibrate().

vibrate() is available globally from the navigator object and takes an argument for how long a vibration lasts in milliseconds. It can be either a single number or an array of numbers representing a pattern of alternating vibration and pause durations.

navigator.vibrate(200); // vibrate 200ms
navigator.vibrate([200, 100, 200]); // vibrate 200ms, wait 100, and vibrate 200ms.
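As a quick sanity check on the pattern format, this hypothetical helper (my own, not part of the Vibration API) totals how long a pattern keeps the device busy, counting vibration slices and pauses alike:

```javascript
// Hypothetical helper (not part of the Vibration API): total time in ms that a
// vibration pattern occupies. Even indices are vibration slices, odd indices
// are pauses, but both simply add to the overall duration.
const patternDuration = (pattern) =>
  (Array.isArray(pattern) ? pattern : [pattern]).reduce((sum, ms) => sum + ms, 0);

console.log(patternDuration(200));             // 200
console.log(patternDuration([200, 100, 200])); // 500
```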

Browser Support

Vibration API Demo

Let’s make a quick demo where the user inputs how many milliseconds they want their device to vibrate and buttons to start and stop the vibration, starting with the markup:

<main>
  <form>
    <label for="milliseconds-input">Milliseconds:</label>
    <input type="number" id="milliseconds-input" value="0" />
  </form>

  <button class="vibrate-button">Vibrate</button>
  <button class="stop-vibrate-button">Stop</button>
</main>

We’ll add an event listener for a click and invoke the vibrate() method:

const vibrateButton = document.querySelector(".vibrate-button");
const millisecondsInput = document.querySelector("#milliseconds-input");

vibrateButton.addEventListener("click", () => {
  navigator.vibrate(millisecondsInput.value);
});

To stop vibrating, we override the current vibration with a zero-millisecond vibration.

const stopVibrateButton = document.querySelector(".stop-vibrate-button");

stopVibrateButton.addEventListener("click", () => {
  navigator.vibrate(0);
});

4. Contact Picker API

In the past, it used to be that only native apps could connect to a device’s “contacts”. But now we have the fourth and final API I want to look at: the Contact Picker API.

The API grants web apps access to the device’s contact lists. Specifically, we get the contacts.select() async method available through the navigator object, which takes the following two arguments:

  • properties: This is an array containing the information we want to fetch from a contact card, e.g., "name", "address", "email", "tel", and "icon".
  • options: This is an object that can only contain the multiple boolean property to define whether or not the user can select one or multiple contacts at a time.

Browser Support

I’m afraid that browser support is next to zilch on this one, limited to Chrome Android, Samsung Internet, and Android’s native web browser at the time I’m writing this.
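Given that support, it’s worth feature-detecting before calling the API. Here’s a sketch; supportsContactPicker is my own name, written as a pure function over its argument so the logic can be exercised outside a browser:

```javascript
// Hypothetical helper: checks a navigator-like object for Contact Picker
// support before we attempt to use it.
const supportsContactPicker = (nav) =>
  typeof nav === "object" &&
  nav !== null &&
  "contacts" in nav &&
  typeof nav.contacts.select === "function";

// In the browser:
// if (supportsContactPicker(navigator)) { /* safe to call navigator.contacts.select() */ }

console.log(supportsContactPicker({ contacts: { select: async () => [] } })); // true
console.log(supportsContactPicker({})); // false
```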

Selecting User’s Contacts

We will make another demo to select and display the user’s contacts on the page. Again, starting with the HTML:

<main>
  <button class="get-contacts">Get Contacts</button>
  <p>Contacts:</p>
  <ul class="contact-list">
    <!-- We’ll inject a list of contacts -->
  </ul>
</main>

Then, in JavaScript, we first construct our elements from the DOM and choose which properties we want to pick from the contacts.

const getContactsButton = document.querySelector(".get-contacts");
const contactList = document.querySelector(".contact-list");

const props = ["name", "tel", "icon"];
const options = {multiple: true};

Now, we asynchronously pick the contacts when the user clicks the getContactsButton.


const getContacts = async () => {
  try {
    const contacts = await navigator.contacts.select(props, options);
  } catch (error) {
    console.error(error);
  }
};

getContactsButton.addEventListener("click", getContacts);

Using DOM manipulation, we can then append a list item to each contact and an icon to the contactList element.

const appendContacts = (contacts) => {
  contacts.forEach(({name, tel, icon}) => {
    const contactElement = document.createElement("li");

    contactElement.innerText = `${name}: ${tel}`;
    contactList.appendChild(contactElement);
  });
};

const getContacts = async () => {
  try {
    const contacts = await navigator.contacts.select(props, options);
    appendContacts(contacts);
  } catch (error) {
    console.error(error);
  }
};

getContactsButton.addEventListener("click", getContacts);

Appending an image is a little tricky since we will need to convert it into a URL and append it for each item in the list.

const getIcon = (icon) => {
  if (icon.length > 0) {
    const imageUrl = URL.createObjectURL(icon[0]);
    const imageElement = document.createElement("img");
    imageElement.src = imageUrl;

    return imageElement;
  }
};

const appendContacts = (contacts) => {
  contacts.forEach(({name, tel, icon}) => {
    const contactElement = document.createElement("li");

    contactElement.innerText = `${name}: ${tel}`;
    contactList.appendChild(contactElement);

    // getIcon() returns undefined for contacts without an icon, so guard
    // before appending.
    const imageElement = getIcon(icon);
    if (imageElement) {
      contactElement.appendChild(imageElement);
    }
  });
};

const getContacts = async () => {
  try {
    const contacts = await navigator.contacts.select(props, options);
    appendContacts(contacts);
  } catch (error) {
    console.error(error);
  }
};

getContactsButton.addEventListener("click", getContacts);

And here’s the outcome:

Note: The Contact Picker API will only work if the context is secure, i.e., the page is served over https:// or wss:// URLs.
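If you’re unsure whether a page qualifies, window.isSecureContext reports it directly in the browser. The sketch below is a hypothetical helper that only mirrors the common cases of that rule and is not exhaustive:

```javascript
// Hypothetical sketch of the secure-context rule for the usual schemes.
// In the browser, window.isSecureContext reports this directly; this helper
// only mirrors the common cases.
const isLikelySecureUrl = (url) => {
  const { protocol, hostname } = new URL(url);
  return (
    protocol === "https:" ||
    protocol === "wss:" ||
    hostname === "localhost" || // local development is treated as trustworthy
    hostname === "127.0.0.1"
  );
};

console.log(isLikelySecureUrl("https://example.com")); // true
console.log(isLikelySecureUrl("http://example.com"));  // false
```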

Conclusion

There we go: four web APIs that I believe would empower us to build more useful and robust PWAs but have slipped under the radar for many of us. This is, of course, due to inconsistent browser support, and I hope this article brings awareness to these APIs so that we have a better chance of seeing them in future browser updates.

Aren’t they interesting? We saw how much control we have over the orientation of a device and its screen, as well as the level of access we get to a device’s hardware features (i.e., vibration) and to information from other apps that we can use in our own UI.

But as I said much earlier, there’s a sort of infinite loop where a lack of awareness begets a lack of browser support. So, while the four APIs we covered are super interesting, your mileage will inevitably vary when it comes to using them in a production environment. Please tread cautiously and refer to Caniuse for the latest support information, or check for your own devices using WebAPI Check.

]]>
hello@smashingmagazine.com (Juan Diego Rodríguez)
<![CDATA[T-Shaped vs. V-Shaped Designers]]> https://smashingmagazine.com/2024/06/t-shaped-vs-v-shaped-designers/ https://smashingmagazine.com/2024/06/t-shaped-vs-v-shaped-designers/ Wed, 19 Jun 2024 12:00:00 GMT Many job openings in UX assume very specific roles with very specific skills. Product designers should be skilled in Figma. Researchers should know how to conduct surveys. UX writers must be able to communicate brand values.

This article is part of our ongoing series on UX. You might want to take a look at Smart Interface Design Patterns 🍣 and the upcoming live UX training as well. Use code BIRDIE to save 15% off.

The Many Roles In UX

Successful candidates must neatly fit within established roles and excel at tools and workflows that are perceived as the best practice in the industry — from user needs to business needs and from the problem space to the solution space.

There is nothing wrong with that, of course. However, many companies don’t exactly know what expertise they need until they find the right person who has it. But too often, job openings don’t allow for any flexibility unless the candidate checks off the right boxes.

In fact, typically, UX roles have to fit into some of those rigorously defined and refined boxes:

“V”-Shaped Designers Don’t Fit Into Boxes

Job openings typically cast a very restrictive frame for candidates. It comes with a long list of expectations and requirements, mostly aimed at T-shaped designers — experts in one area of UX, with a high-level understanding of adjacent areas and perhaps a dash of expertise in business and operations.

But as Brad Frost noted, people don’t always fit squarely into a specific discipline. Their value comes not from staying within the boundaries of their roles but from intentionally crossing these boundaries. They are “V”-shaped — experts in one or multiple areas, with a profound understanding and immense curiosity in adjacent areas.

In practice, they excel at bridging the gaps and connecting the dots. They establish design KPIs and drive accessibility efforts. They streamline handoff and scale design systems. But to drive success, they need to rely on specialists, their T-shaped colleagues.

Shaping Your Own Boxes

I sincerely wish more companies would encourage their employees to shape their own boxes instead of defining confined boxes for them — their own unique boxes of any form and shade and color and size employees desire, along with deliverables that other teams would benefit from and could build upon.

🏔️ Hiring? → Maybe replace a long list of mandatory requirements with an open invitation to apply, even if it’s not a 100% match — as long as a candidate believes they can do their best work for the job at hand.

🎢 Seek a challenge? → Don’t feel restricted by your current role in a company. Explore where you drive the highest impact, shape this role, and suggest it.

Searching for a job? → Don’t get discouraged if you don’t tick all the boxes in a promising job opening. Apply! Just explain in fine detail what you bring to the table.

You’ve got this — and good luck, everyone! ✊🏽

Meet Smart Interface Design Patterns

If you are interested in UX and design patterns, take a look at Smart Interface Design Patterns, our 10h-video course with 100s of practical examples from real-life projects — with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview.

Meet Smart Interface Design Patterns, our video course on interface design & UX.

100 design patterns & real-life examples.
10h-video course + live UX training. Free preview.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Useful Email Newsletters For Designers]]> https://smashingmagazine.com/2024/06/useful-email-newsletters-for-designers/ https://smashingmagazine.com/2024/06/useful-email-newsletters-for-designers/ Mon, 17 Jun 2024 18:00:00 GMT Struggling to keep our inboxes under control and aim for that magical state of inbox zero, the notification announcing an incoming email isn’t the most appreciated sound for many of us. However, there are some emails to actually look forward to: A newsletter, curated and written with love and care, can be a nice break in your daily routine, providing new insights and sparking ideas and inspiration for your work.

With so many wonderful design newsletters out there, we know it can be a challenge to decide which newsletter (or newsletters) to subscribe to. That’s why we want to shine a light on some newsletter gems today to make your decision at least a bit easier — and help you discover newsletters you might not have heard of yet. Ranging from design systems to UX writing, motion design, and user research, there sure is something in it for you.

A huge thank you to everyone who writes, edits, and publishes these newsletters to help us all get better at our craft. You are truly smashing! 👏🏼👏🏽👏🏾

Table of Contents

Below you’ll find quick jumps to newsletters on specific topics you might be interested in. Scroll down to browse the complete list or skip the table of contents.

Design & Front-End

HeyDesigner

🗓 Delivered every Monday
🖋 Written by Tamas Sari

Aimed at product people, UXers, PMs, and design engineers, the HeyDesigner newsletter is packed with a carefully curated selection of the latest design and front-end articles, tools, and resources.

Pixels of the Week

🗓 Delivered weekly
🖋 Written by Stéphanie Walter

Stéphanie Walter’s Pixels of the Week newsletter keeps you informed about the latest UX research, design, tech (HTML, CSS, SVG) news, tools, methods, and other resources that caught Stéphanie’s interest.

TLDR Design

🗓 Delivered daily
🖋 Written by Dan Ni

You’re looking for some bite-sized design inspiration? TLDR Design is a daily newsletter highlighting news, tools, tutorials, trends, and inspiration for design professionals.

DesignOps

🗓 Delivered every two weeks
🖋 Written by Ch'an Armstrong

The DesignOps newsletter provides the DesignOps community with the best hand-picked articles all around design, code, AI, design tools, no-code tools, developer tools, and, of course, design ops.

Adam Silver’s Newsletter

🗓 Delivered weekly
🖋 Written by Adam Silver

Every week, Adam Silver sends out a newsletter aimed at designers, content designers, and front-end developers. It includes short and sweet, evidence-based design tips, mostly about forms UX, but not always.

Smashing Newsletter

🗓 Delivered every Tuesday
🖋 Written by the Smashing Editorial team

Every Tuesday, we publish the Smashing Newsletter with useful tips and techniques on front-end and UX, covering everything from design systems and UX research to CSS and JavaScript. Each issue is curated, written, and edited with love and care, no third-party mailings or hidden advertising.

UX

UX Design Weekly

🗓 Delivered every Monday
🖋 Written by Kenny Chen

UX Design Weekly provides you with a weekly dose of hand-picked user experience design links. Every issue features articles, tools and resources, a UX portfolio, and a quote to spark ideas and get you thinking.

UX Collective

🗓 Delivered weekly
🖋 Written by Fabricio Teixeira and Caio Braga

“Designers are thinkers as much as they are makers.” Following this credo, the UX Collective newsletter helps designers think more critically about their work. Every issue highlights thought-provoking reads, little gems, tools, and resources.

Built For Mars

🗓 Delivered every few weeks
🖋 Written by Peter Ramsey

The Built for Mars newsletter brings Peter Ramsey’s UX research straight to your inbox. It includes in-depth UX case studies and bite-sized UX ideas and experiments.

NN Group

🗓 Delivered weekly
🖋 Written by the Nielsen Norman Group

Studying users around the world, the Nielsen Norman Group provides research-based UX guidance. If you don’t want to miss their latest articles and videos about usability, design, and UX research, you can subscribe to the NN/g newsletter to stay up-to-date.

UX Notebook

🗓 Delivered weekly
🖋 Written by Sarah Doody

The UX Notebook Newsletter is aimed at UX and product professionals who want to learn how to apply UX and design principles to design and grow their teams, products, and careers.

Smart Interface Design Patterns

🗓 Delivered weekly
🖋 Written by Vitaly Friedman

Every issue of the Smart Interface Design Patterns newsletter is dedicated to a common interface challenge and how to solve it to avoid issues down the line. A treasure chest of design patterns and UX techniques.

UX Weekly

🗓 Delivered weekly
🖋 Written by the Interaction Design Foundation

The Interaction Design Foundation is known for their UX courses and webinars for both aspiring designers and advanced professionals. Their UX Weekly newsletter delivers design tips and educational material to help you leverage the power of design.

Design With Care

🗓 Delivered every first Tuesday of a month
🖋 Written by Alex Bilstein

Healthcare systems desperately need UX designers to improve the status quo for both healthcare professionals and patients. The Design With Care newsletter empowers UX designers to create better healthcare experiences and make an impact that matters.

UX Writing & Content Strategy

The UX Gal

🗓 Delivered every Monday
🖋 Written by Slater Katz

Whether you’re about to start your UX content education or want to get better at UX writing, The UX Gal newsletter is for you. Every Monday, Slater Katz sends out a new newsletter with prompts, thoughts, and exercises to build your UX writing and content design skills.

UX Content Collective

🗓 Delivered weekly
🖋 Written by the UX Content Collective

The newsletter by the UX Content Collective is perfect for anyone interested in content design. In it, you’ll find curated UX writing resources, new job openings, and exclusive discounts.

GatherContent

🗓 Delivered weekly
🖋 Written by the GatherContent team

The GatherContent newsletter is a weekly email full of content strategy goodies. It features articles, webinars and masterclasses, new books, free templates, and industry news.

User Research

User Research Academy

🗓 Delivered weekly
🖋 Written by Nikki Anderson

If you want to get more creative and confident when conducting user research, the User Research Academy might be for you. With carefully curated articles, podcasts, events, books, and academic resources all around user research, the newsletter is perfect for beginners and senior UX researchers alike.

User Weekly

🗓 Delivered weekly
🖋 Written by Jan Ahrend

What mattered in UX research this week? To keep you up-to-date on trends, methods, and insights across the UX research industry, Jan Ahrend captures the pulse of the UX research community in his User Weekly newsletter.

User Interviews

🗓 Delivered weekly
🖋 Written by the User Interviews team

The UX Research Newsletter by the folks at User Interviews delivers the latest UX research articles, reports, podcast episodes, and special features. For professional user researchers as well as for teams that need to conduct user research without a dedicated research team.

Baymard Institute

🗓 Delivered weekly
🖋 Written by the Baymard Institute

User experience, web design, and e-commerce are the topics which the Baymard Institute newsletter covers. It features ad-free full-length research articles to give you precious insights into the field.

Interaction Design

Design Spells

🗓 Delivered every other Sunday
🖋 Written by Chester How, Duncan Leo, and Rick Lee

Whether it’s micro-interactions or easter eggs, Design Spells celebrates the design details that feel like magic and add a spark of delight to a design.

Justin Volz’s Newsletter

🖋 Written by Justin Volz

Getting you ready for the future of motion design is the goal of Justin Volz’s newsletter. It features UX motion design trends, new UX motion design articles, and more to “make your UI tap dance.”

Design Systems & Figma

Design System Guide

🗓 Delivered weekly
🖋 Written by Romina Kavcic

Accompanying her interactive step-by-step guide to design systems, Romina Kavcic sends out the weekly Design System Guide newsletter on all things design systems, design process, and design strategy.

Figmalion

🗓 Delivered weekly
🖋 Written by Eugene Fedorenko

The Figmalion newsletter keeps you up-to-date on what is happening in the Figma community, with curated design resources and a weekly roundup of Figma and design tool news.

Information Architecture

Informa(c)tion

🗓 Delivered every other Sunday
🖋 Written by Jorge Arango

The Informa(c)tion newsletter explores the intersection of information, cognition, and design. Each issue includes an essay about information architecture and/or personal knowledge management and a list of interesting links.

Product Design

Product Design Challenges

🗓 Delivered weekly
🖋 Written by Artiom Dashinsky

How about a weekly design challenge to work on your core design skills, improve your portfolio, or prepare for your next job interview? The Weekly Product Design Challenges newsletter has got you covered. Every week, Artiom Dashinsky shares a new exercise inspired and used by companies like Facebook, Google, and WeWork to interview UX design candidates.

Fundament

🗓 Delivered every other Thursday
🖋 Written by Arkadiusz Radek and Mateusz Litarowicz

With Fundament, Arkadiusz Radek and Mateusz Litarowicz created a place to share what they’ve learned in their ten-year UX and Product Design careers. The newsletter is about the things that matter in design, the practicalities of the job, the lesser-known bits, and content that will help you grow as a UX or Product Designer.

Case Study Club

🗓 Delivered weekly
🖋 Written by Jan Haaland

How do people design digital products? With curated UX case studies, the Case Study Club newsletter grants insights into other designers’ processes.

Ethical Design & Sustainability

Ethical Design Network

🗓 Delivered monthly
🖋 Written by Trine Falbe

The Ethical Design Network is a space for digital professionals to share, discuss, and self-educate about ethical design. You can sign up to the newsletter to receive monthly news, resources, and event updates all around ethical design.

Sustainable UX

🗓 Delivered monthly
🖋 Written by Thorsten Jonas

As designers, we have to take responsibility for more than our users. Shining a light on how to design and build more sustainable digital products, the SUX Newsletter by the Sustainable UX Network helps you stand up to that responsibility.

AI

AI Goodies

🗓 Delivered weekly
🖋 Written by Ioana Teleanu

A brand-new newsletter on AI, design, and UX goodies comes from Ioana Teleanu: AI Goodies. Every week, it covers the latest resources, trends, news, and tools from the world of AI.

Business

d.MBA

🗓 Delivered weekly
🖋 Written by Alen Faljic

Learning business can help you become a better designer. The d.MBA newsletter is your weekly source of briefings from the business world, hand-picked for the design community by Alen Faljic and the d.MBA team.

Career & Leadership

Dan Mall Teaches

🗓 Delivered weekly
🖋 Written by Dan Mall

Tips, tricks, and tools about design systems, process, and leadership, delivered to your inbox every week. That’s the Dan Mall Teaches newsletter.

Stratatics

🗓 Delivered weekly
🖋 Written by Ryan Rumsey

To do things differently, you must look at your work in a new light. That’s the idea behind the Stratatics newsletter. Each week, Ryan Rumsey provides design leaders and executives (and those who work alongside them) with a new idea to reimagine and deliver their best work.

Spread The Word

Do you have a favorite newsletter that isn’t featured in the post? Or maybe you’re writing and publishing a newsletter yourself? We’d love to hear about it in the comments below!

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[What Are CSS Container Style Queries Good For?]]> https://smashingmagazine.com/2024/06/what-are-css-container-style-queries-good-for/ https://smashingmagazine.com/2024/06/what-are-css-container-style-queries-good-for/ Fri, 14 Jun 2024 11:00:00 GMT We’ve relied on media queries for a long time in the responsive world of CSS but they have their share of limitations and have shifted focus more towards accessibility than responsiveness alone. This is where CSS Container Queries come in. They completely change how we approach responsiveness, shifting the paradigm away from a viewport-based mentality to one that is more considerate of a component’s context, such as its size or inline-size.

Querying elements by their dimensions is one of the two things that CSS Container Queries can do, and, in fact, we call these container size queries to help distinguish them from their ability to query against a component’s current styles. We call these container style queries.

Existing container query coverage has been largely focused on container size queries, which enjoy 90% global browser support at the time of this writing. Style queries, on the other hand, are only available behind a feature flag in Chrome 111+ and Safari Technology Preview.

The first question that comes to mind is What are these style query things?, followed immediately by How do they work? There are some nice primers on them that others have written, and they are worth checking out.

But the more interesting question about CSS Container Style Queries might actually be Why should we use them? The answer, as always, is nuanced and could simply be it depends. But I want to poke at style queries a little more deeply, not at the syntax level, but at what exactly they solve and what sort of use cases would have us reaching for them in our work if and when they gain browser support.

Why Container Queries

Talking purely about responsive design, media queries have simply fallen short in some aspects, but I think the main one is that they are context-agnostic in the sense that they only consider the viewport size when applying styles without involving the size or dimensions of an element’s parent or the content it contains.

This usually isn’t a problem since we only have a main element that doesn’t share space with others along the x-axis, so we can style our content depending on the viewport’s dimensions. However, if we stuff an element into a smaller parent and maintain the same viewport, the media query doesn’t kick in when the content becomes cramped. This forces us to write and manage an entire set of media queries that target super-specific content breakpoints.

Container queries break this limitation and allow us to query much more than the viewport’s dimensions.

How Container Queries Generally Work

Container size queries work similarly to media queries but allow us to apply styles depending on the container’s properties and computed values. In short, they allow us to make style changes based on an element’s computed width or height regardless of the viewport. This sort of thing was once only possible with JavaScript or the ol’ jQuery, as this example shows.

As noted earlier, though, container queries can query an element’s styles in addition to its dimensions. In other words, container style queries can look at and track an element’s properties and apply styles to other elements when those properties meet certain conditions, such as when the element’s background-color is set to hsl(0 50% 50%).

That’s what we mean when talking about CSS Container Style Queries. It’s a proposed feature defined in the same CSS Containment Module Level 3 specification as CSS Container Size Queries — and one that’s currently unsupported by any major browser — so the difference between style and size queries can get a bit confusing as we’re technically talking about two related features under the same umbrella.

We’d do ourselves a favor to backtrack and first understand what a “container” is in the first place.

Containers

An element’s container is any ancestor with a containment context; it could be the element’s direct parent or perhaps a grandparent or great-grandparent.

A containment context means that a certain element can be used as a container for querying. Unofficially, you can say there are two types of containment context: size containment and style containment.

Size containment means we can query and track an element’s dimensions (i.e., aspect-ratio, block-size, height, inline-size, orientation, and width) with container size queries as long as it’s registered as a container. Tracking an element’s dimensions requires a little processing in the client. One or two elements are a breeze, but if we had to constantly track the dimensions of all elements — including resizing, scrolling, animations, and so on — it would be a huge performance hit. That’s why no element has size containment by default, and we have to manually register a size query with the CSS container-type property when we need it.

On the other hand, style containment lets us query and track the computed values of a container’s specific properties through container style queries. As it currently stands, we can only check for custom properties, e.g. --theme: dark, but soon we could check for an element’s computed background-color and display property values. Unlike size containment, we are checking for raw style properties before they are processed by the browser, alleviating performance and allowing all elements to have style containment by default.

Did you catch that? While size containment is something we manually register on an element, style containment is the default behavior of all elements. There’s no need to register a style container because all elements are style containers by default.

And how do we register a containment context? The easiest way is to use the container-type property. The container-type property will give an element a containment context and its three accepted values — normal, size, and inline-size — define which properties we can query from the container.

/* Size containment in the inline direction */
.parent {
  container-type: inline-size;
}

This example formally establishes size containment. If we had done nothing at all, the .parent element would already be a container with style containment.

Size Containment

That last example illustrates size containment based on the element’s inline-size, which is a fancy way of saying its width. When we talk about normal document flow on the web, we’re talking about elements that flow in an inline direction and a block direction that corresponds to width and height, respectively, in a horizontal writing mode. If we were to rotate the writing mode so that it is vertical, then “inline” would refer to the height instead and “block” to the width.
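This logical-to-physical mapping can be summed up in a tiny sketch (a hypothetical helper that covers only horizontal-tb and the vertical-* writing modes):

```javascript
// Hypothetical helper covering only horizontal-tb and the vertical-* writing
// modes: which physical dimension "inline-size" and "block-size" refer to.
const physicalAxes = (writingMode) =>
  writingMode.startsWith("vertical")
    ? { inline: "height", block: "width" }
    : { inline: "width", block: "height" };

console.log(physicalAxes("horizontal-tb").inline); // "width"
console.log(physicalAxes("vertical-rl").inline);   // "height"
```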

Consider the following HTML:

<div class="cards-container">
  <ul class="cards">
    <li class="card"></li>
  </ul>
</div>

We could give the .cards-container element a containment context in the inline direction, allowing us to make changes to its descendants when its width becomes too small to properly display everything in the current layout. We keep the same syntax as in a normal media query but swap @media for @container:

.cards-container {
  container-type: inline-size;
}

@container (width < 700px) {
  .cards {
    background-color: red;
  }
}

Container syntax works almost the same as media queries, so we can use the and, or, and not operators to chain different queries together to match multiple conditions.

@container (width < 700px) or (width > 1200px) {
  .cards {
    background-color: red;
  }
}

Elements in a size query look for the closest ancestor with size containment so we can apply changes to elements deeper in the DOM, like the .card element in our earlier example. If there is no size containment context, then the @container at-rule won’t have any effect.

/* 👎 
 * Apply styles based on the closest container, .cards-container
 */
@container (width < 700px) {
  .card {
    background-color: black;
  }
}

Just looking for the closest container is messy, so it’s good practice to name containers using the container-name property and then specifying which container we’re tracking in the container query just after the @container at-rule.

.cards-container {
  container-name: cardsContainer;
  container-type: inline-size;
}

@container cardsContainer (width < 700px) {
  .card {
    background-color: #000;
  }
}

We can use the shorthand container property to set the container name and type in a single declaration:

.cards-container {
  container: cardsContainer / inline-size;

  /* Equivalent to: */
  container-name: cardsContainer;
  container-type: inline-size;
}

The other container-type we can set is size, which works exactly like inline-size — only the containment context is both the inline and block directions. That means we can also query the container’s height sizing in addition to its width sizing.

/* When container is less than 700px wide */
@container (width < 700px) {
  .card {
    background-color: black;
  }
}

/* When container is less than 900px tall */
@container (height < 900px) {
  .card {
    background-color: white;
  }
}

And it’s worth noting here that if two separate (not chained) container rules match, the most specific selector wins, true to how the CSS Cascade works.
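For example, here’s a quick sketch of how that plays out, assuming markup where .card elements sit inside a .cards wrapper (the nesting is an assumption on my part). If the container is 600px wide, both queries match, and the more specific selector wins:

```css
@container (width < 700px) {
  .card {
    background-color: black;
  }
}

@container (width < 900px) {
  /* .cards .card is more specific than .card alone,
     so this declaration wins when both queries match */
  .cards .card {
    background-color: red;
  }
}
```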

So far, we’ve touched on the concept of CSS Container Queries at its most basic. We define the type of containment we want on an element (we looked specifically at size containment) and then query that container accordingly.

Container Style Queries

The third value that is accepted by the container-type property is normal, and it sets style containment on an element. Both inline-size and size are stable across all major browsers, but normal is newer and only has modest support at the moment.

I consider normal a bit of an oddball because we don’t have to explicitly declare it on an element, since all elements are style containers right out of the box. It’s possible you’ll never write it out yourself or see it in the wild.

.parent {
  /* Unnecessary */
  container-type: normal;
}

If you do write it or see it, it’s most likely there to undo size containment declared somewhere else. Note that it’s also possible to reset containment with the global initial or revert keywords.

.parent {
  /* All of these (re)set style containment */
  container-type: normal;
  container-type: initial;
  container-type: revert;
}

Let’s look at a simple and somewhat contrived example to get the point across. We can define a custom property in a container, say a --theme.

.cards-container {
  --theme: dark;
}

From here, we can check if the container has that desired property and, if it does, apply styles to its descendant elements. We can’t directly style the container since it could unleash an infinite loop of changing the styles and querying the styles.

.cards-container {
  --theme: dark;
}

@container style(--theme: dark) {
  .cards {
    background-color: black;
  }
}

See that style() function? In the future, we may want to check if an element has a max-width: 400px through a style query instead of checking if the element’s computed value is bigger than 400px in a size query. That’s why we use the style() wrapper to differentiate style queries from size queries.

/* Size query */
@container (width > 60ch) {
  .cards {
    flex-direction: column;
  }
}

/* Style query */
@container style(--theme: dark) {
  .cards {
    background-color: black;
  }
}

Both types of container queries look for the closest ancestor with a corresponding containment type. In a style() query, it will always be the parent since all elements have style containment by default. In this case, the direct parent of the .cards element in our ongoing example is the .cards-container element. If we want to query non-direct parents, we will need the container-name property to differentiate between containers when making a query.

.cards-container {
  container-name: cardsContainer;
  --theme: dark;
}

@container cardsContainer style(--theme: dark) {
  .card {
    color: white;
  }
}
Weird and Confusing Things About Container Style Queries

Style queries are completely new and bring something never seen in CSS, so they are bound to have some confusing qualities as we wrap our heads around them — some that are completely intentional and well thought-out and some that are perhaps unintentional and may be updated in future versions of the specification.

Style and Size Containment Aren’t Mutually Exclusive

One intentional perk, for example, is that a container can have both size and style containment. No one would fault you for expecting that size and style containment are mutually exclusive concerns, i.e., that setting an element to something like container-type: inline-size would make all style queries useless.

However, another funny thing about container queries is that elements have style containment by default, and there isn’t really a way to remove it. Check out this next example:

.cards-container {
  container-type: inline-size;
  --theme: dark;
}

@container style(--theme: dark) {
  .card {
    background-color: black;
  }
}

@container (width < 700px) {
  .card {
    background-color: red;
  }
}

See that? We can still query the elements by style even when we explicitly set the container-type to inline-size. This seems contradictory at first, but it does make sense, considering that style and size queries are computed independently. It’s better this way since both queries don’t necessarily conflict with each other; a style query could change the colors in an element depending on a custom property, while a container query changes an element’s flex-direction when it gets too small for its contents.

But We Can Achieve the Same Thing With CSS Classes and IDs

Most container query guides and tutorials I’ve seen use similar examples to demonstrate the general concept, but I can’t stop thinking that, no matter how cool style queries are, we can achieve the same result using classes or IDs with less boilerplate. Instead of passing the state as an inline style, we could simply add it as a class.

<ol>
  <li class="item first">
    <img src="..." alt="Roi's avatar" />
    <h2>Roi</h2>
  </li>
  <li class="item second"><!-- etc. --></li>
  <li class="item third"><!-- etc. --></li>
  <li class="item"><!-- etc. --></li>
  <li class="item"><!-- etc. --></li>
</ol>

Alternatively, we could add the position number directly inside an id so we don’t have to convert the number into a string:

<ol>
  <li class="item" id="item-1">
    <img src="..." alt="Roi's avatar" />
    <h2>Roi</h2>
  </li>
  <li class="item" id="item-2"><!-- etc. --></li>
  <li class="item" id="item-3"><!-- etc. --></li>
  <li class="item" id="item-4"><!-- etc. --></li>
  <li class="item" id="item-5"><!-- etc. --></li>
</ol>

Both of these approaches leave us with cleaner HTML than the container queries approach. With style queries, we have to wrap our elements inside a container, even if we don’t semantically need it, because containers are (rightly) unable to style themselves.

We also have less boilerplate-y code on the CSS side:

#item-1 {
  background: linear-gradient(45deg, yellow, orange); 
}

#item-2 {
  background: linear-gradient(45deg, grey, white);
}

#item-3 {
  background: linear-gradient(45deg, brown, peru);
}

See the Pen Style Queries Use Case Replaced with Classes [forked] by Monknow.

As an aside, I know that using IDs as styling hooks is often viewed as a no-no, but that’s only because IDs must be unique in the sense that no two instances of the same ID are on the page at the same time. In this instance, there will never be more than one first-place, second-place, or third-place player on the page, making IDs a safe and appropriate choice in this situation. But, yes, we could also use some other type of selector, say a data-* attribute.
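For instance, a data-attribute version might look like this (the data-position attribute name is my own, hypothetical choice, not from the demo):

```css
/* Assumes markup like <li class="item" data-position="1">…</li>,
   where data-position is a hypothetical attribute name */
.item[data-position="1"] {
  background: linear-gradient(45deg, yellow, orange);
}

.item[data-position="2"] {
  background: linear-gradient(45deg, grey, white);
}

.item[data-position="3"] {
  background: linear-gradient(45deg, brown, peru);
}
```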

There is something that could add a lot of value to style queries: a range syntax for querying styles. This is an open feature that Miriam Suzanne proposed in 2023, the idea being that it queries numerical values using range comparisons just like size queries.

Imagine if we wanted to apply a light purple background color to the rest of the top ten players in the leaderboard example. Instead of adding a query for each position from four to ten, we could add a query that checks a range of values. The syntax is obviously not in the spec at this time, but let’s say it looks something like this just to push the point across:

/* Do not try this at home! */
@container leaderboard style(4 <= --position <= 10) {
  .item {
    background: linear-gradient(45deg, purple, fuchsia);
  }
}

In this fictional and hypothetical example, we’re:

  • Tracking a container called leaderboard,
  • Making a style() query against the container,
  • Evaluating the --position custom property,
  • Looking for a condition where the custom property is set to a value greater than or equal to 4 and less than or equal to 10,
  • If the custom property is a value within that range, we set a player’s background color to a linear-gradient() that goes from purple to fuchsia.

This is very cool, but if this kind of behavior is likely to be done using components in modern frameworks, like React or Vue, we could also set up a range in JavaScript and toggle on a .top-ten class when the condition is met.

See the Pen Style Ranged Queries Use Case Replaced with Classes [forked] by Monknow.
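A minimal sketch of that JavaScript approach, assuming a hypothetical isTopTen helper and the same .top-ten class name as above:

```javascript
// Hypothetical helper: is this leaderboard position within the 4–10 range?
function isTopTen(position) {
  return position >= 4 && position <= 10;
}

// Toggle the class on a list item. classList.toggle's second argument
// forces the class on when true and removes it when false.
function markTopTen(item, position) {
  item.classList.toggle("top-ten", isTopTen(position));
}
```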

Sure, it’s great to see that we can do this sort of thing directly in CSS, but it’s also something with an existing well-established solution.

Separating Style Logic From Logic Logic

So far, style queries don’t seem to be the most convenient solution for the leaderboard use case we looked at, but I wouldn’t deem them useless solely because we can achieve the same thing with JavaScript. I am a big advocate of reaching for JavaScript only when necessary and only in sprinkles, but style queries, the ones where we can only check for custom properties, are most likely to be useful when paired with a UI framework where we can easily reach for JavaScript within a component. I have been using Astro an awful lot lately, and in that context, I don’t see why I would choose a style query over programmatically changing a class or ID.

However, a case can be made that implementing style logic inside a component is messy. Maybe we should keep the logic regarding styles in the CSS away from the rest of the logic logic, i.e., the stateful changes inside a component like conditional rendering or functions like useState and useEffect in React. The style logic would be the conditional checks we do to add or remove class names or IDs in order to change styles.

If we backtrack to our leaderboard example, checking a player’s position to apply different styles would be style logic. We could indeed check that a player’s leaderboard position is between four and ten using JavaScript to programmatically add a .top-ten class, but it would mean leaking our style logic into our component. In React (for familiarity, but it would be similar to other frameworks), the component may look like this:

const LeaderboardItem = ({ position }) => {
  return (
    <li
      className={`item ${position >= 4 && position <= 10 ? "top-ten" : ""}`}
      id={`item-${position}`}
    >
      <img src="..." alt="Roi's avatar" />
      <h2>Roi</h2>
    </li>
  );
};

Besides this being ugly-looking code, adding the style logic in JSX can get messy. Meanwhile, style queries can pass the --position value to the styles and handle the logic directly in the CSS where it is being used.

const LeaderboardItem = ({ position }) => {
  return (
    <li className="item" style={{ "--position": position }}>
      <img src="..." alt="Roi's avatar" />
      <h2>Roi</h2>
    </li>
  );
};

Much cleaner, and I think this is closer to the value proposition of style queries. But at the same time, this example leans on a big assumption: that we will get a range syntax for style queries at some point, which is not a done deal.

Conclusion

There are lots of teams working on making modern CSS better, and not all features have to be groundbreaking miraculous additions.

Size queries are definitely an upgrade from media queries for responsive design, but style queries appear to be more of a solution looking for a problem.

They simply don’t solve any specific issue, nor are they enough of an improvement to replace other approaches, at least as far as I am aware.

Even if, in the future, style queries will be able to check for any property, that introduces a whole new can of worms where styles are capable of reacting to other styles. This seems exciting at first, but I can’t shake the feeling it would be unnecessary and even chaotic: styles reacting to styles, reacting to styles, and so on with an unnecessary side of boilerplate. I’d argue that a more prudent approach is to write all your styles declaratively together in one place.

Maybe it would be useful for web extensions (like Dark Reader) so they can better check styles in third-party websites? I can’t clearly see it. If you have any suggestions on how CSS Container Style Queries can be used to write better CSS that I may have overlooked, please let me know in the comments! I’d love to know how you’re thinking about them and the sorts of ways you imagine yourself using them in your work.

]]>
hello@smashingmagazine.com (Juan Diego Rodríguez)
<![CDATA[2-Page Login Pattern, And How To Fix It]]> https://smashingmagazine.com/2024/06/2-page-login-pattern-how-fix-it/ https://smashingmagazine.com/2024/06/2-page-login-pattern-how-fix-it/ Thu, 13 Jun 2024 18:00:00 GMT Why do we see login forms split into multiple screens everywhere? Instead of typing email and password, we have to type email, move to the next page, and then type password there. This seems to be inefficient, to say the least.

Let’s see why login forms are split across screens, what problem they solve, and how to design a better experience for better authentication UX (video).

This article is part of our ongoing series on design patterns. It’s also part of the 10h-video library on Smart Interface Design Patterns 🍣 and the upcoming live UX training. Use code BIRDIE to save 15% off.

The Problem With Login Forms

If there is one thing we’ve learned over the years in UX, it’s that designing for people is hard. This applies to login forms as well. People are remarkably forgetful. They often forget what email they signed up with or what service they signed in with last time (Google, Twitter, Apple, and so on).

One idea is to remind customers what they signed in with last time and perhaps make it a default option. However, it reveals directly what the user’s account was, which might be a privacy or security issue:

What if instead of showing all options to all customers all the time, we ask for email first, and then look up what service they used last time, and redirect customers to the right place automatically? Well, that’s exactly the idea behind 2-page logins.

Meet 2-Page-Logins

You might have seen them already. While a few years ago most login forms asked for email and password on one page, these days it’s more common to ask only for the email first. When the user chooses to continue, the form asks for a password in a separate step. Brad explores some problems of this pattern.

A common reason for splitting the login form across pages is Single Sign-On (SSO) authentication. Large companies typically use SSO for corporate sign-ins of their employees. With it, employees log in only once every day and use only one set of credentials, which improves enterprise security.

The UX Intricacies of Single Sign-On (SSO)

SSO also helps with regulatory compliance, and it’s much easier to provision users with appropriate permissions and revoke them later at once. So, if an employee leaves, all their accounts and data can be deleted at once.

To support both business customers and private customers, companies use 2-step-login. Users need to type in their email first, then the validator checks what provider the email is associated with and redirects users there.

Users rarely love this experience. Sometimes, they have multiple accounts (private and business) with one service. Also, 2-step-logins often break autofill and password managers. And for most users, login/pass is way faster than 2-step-login.

Of course, typically, there are dedicated corporate login pages for employees to sign in, but they often head directly to Gmail, Figma, and so on instead and try to sign in there. However, they won’t be able to log in as they must sign in through SSO.

Bottom line: the pattern works well for SSO users, but for non-SSO users, it results in a frustrating UX.

Alternative Solution: Conditional Reveal of SSO

There is a way to work around these challenges (see the image below). We could use a single-page look-up with email and password input fields as a default. Once a user has typed in their email, we detect if the SSO authentication is enabled.

If Single Sign-On (SSO) is enabled for that email, we show a Single Sign-On option and default to it. We could also make the password field optional or disabled.

If SSO isn’t enabled for that email, we proceed with the regular email/password login. This is not much hassle, but it saves trouble for both private and business accounts.
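As a rough sketch of that conditional logic (the function and field names here are my own assumptions, purely illustrative):

```javascript
// Hypothetical sketch of the conditional SSO reveal described above.
// In a real app, the SSO lookup would be an async call to the backend.
function decideLoginOptions(email, ssoDomains) {
  const domain = email.split("@")[1] || "";
  const ssoEnabled = ssoDomains.includes(domain);
  return {
    showSsoOption: ssoEnabled,    // reveal the SSO option and default to it
    passwordDisabled: ssoEnabled, // disable (don't hide) the password field
  };
}
```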

Key Takeaways

🤔 People often forget what email they signed up with.
🤔 They also forget the auth service they signed in with.
🤔 Companies use Single Sign-On (SSO) for corporate sign-in.
🤔 Individual accounts still need email and password for login.
✅ 2-step login: ask for email, then redirect to the right service.

✅ 2-step-login replaces “social” sign-in for repeat users.
✅ It directs users rather than giving them roadblocks.
🤔 Users still keep forgetting the email they signed in with.
🤔 Sometimes, users have multiple accounts with one service.
🚫 2-step logins often break autofill and password managers.
🚫 For most users, login/pass is way faster than 2-step-login.

✅ Better: start with one single page with login and password.
✅ As users type their email, detect if SSO is enabled for them.
✅ If it is, reveal an SSO-login option and set a default to it.
✅ Otherwise, proceed with the regular password login.
✅ If users must use SSO, disable the password field — don’t hide it.

Wrapping Up

Personally, I haven’t tested the approach, but it might be a good alternative to 2-page logins — both for SSO and non-SSO users. Keep in mind, though, that SSO authentication might or might not require a password, as sometimes login happens via Yubikey or Touch-ID or third parties (e.g., OAuth).

Also, eventually, users will be locked out; it’s just a matter of time. So, do use magic links for password recovery or access recovery, but don’t mandate it as a regular login option. Switching between applications is slow and causes mistakes. Instead, nudge users to enable 2FA: it’s both usable and secure.

And most importantly, test your login flow with the tools that your customers rely on. You might be surprised how broken their experience is if they rely on password managers or security tools to log in. Good luck, everyone!

Useful Resources

Meet Smart Interface Design Patterns

If you are interested in similar insights around UX, take a look at Smart Interface Design Patterns, our 10h-video course with 100s of practical examples from real-life projects — with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview.

Meet Smart Interface Design Patterns, our video course on interface design & UX.

100 design patterns & real-life examples.
10h-video course + live UX training. Free preview.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[The Scent Of UX: The Unrealized Potential Of Olfactory Design]]> https://smashingmagazine.com/2024/06/scent-ux-unrealized-potential-olfactory-design/ https://smashingmagazine.com/2024/06/scent-ux-unrealized-potential-olfactory-design/ Wed, 12 Jun 2024 13:00:00 GMT Imagine that you could smell this page. The introduction would emit a subtle scent of sage and lavender to set the mood. Each paragraph would fill your room with the coconut oil aroma, helping you concentrate and immerse in reading. The fragrance of the comments section, resembling a busy farmer’s market, would nudge you to share your thoughts and debate with strangers.

How would the presence of smells change your experience reading this text or influence your takeaways?

Scents are everywhere. They fill our spaces, bind our senses to objects and people, alert us to dangers, and arouse us. Smells have so much influence over our mood and behavior that hundreds of companies are busy designing fragrances: for retail, enticing visitors to purchase more; for hotels, making customers feel at home; and for amusement parks, evoking a warm sense of nostalgia.

At the same time, the digital world, where we spend our lives working, studying, shopping, and resting, remains entirely odorless. Our smart devices are not designed to emit or recognize scents, and every corner of the Internet, including this page, smells exactly the same.

We watch movies, play games, study, and order dinner, but our sense of smell is left unengaged. The lack of odors rarely bothers us, but occasionally, we choose analog things like books merely because their digital counterparts fail to connect with us at the same level.

Could the presence of smells improve our digital experiences? What would it take to build the “smelly” Internet, and why hasn't it been done before? Last but not least, what power do scents hold over our senses, memory, and health, and how could we harness it for the digital world?

Let’s dive deep into a fascinating and underexplored realm of odors.

Olfactory Design For The Real World

Why Do We Remember Smells?

In his novel In Search of Lost Time, French writer Marcel Proust describes a sense of déjà vu he experienced after tasting a piece of cake dipped in tea:

“Immediately the old gray house upon the street rose up like a stage set… the house, the town, the square where I was sent before lunch, the streets along which I used to run errands, the country roads we took… the whole of Combray and of its surroundings… sprang into being, town and gardens alike, all from my cup of tea.”

— Marcel Proust

The Proust Effect, the phenomenon of an ‘involuntary memory’ evoked by scents, is a common occurrence. It explains how the presence of a familiar smell activates areas in our brain responsible for odor recognition, causing us to experience a strong, warm, positive sense of nostalgia.

Smells have a potent and almost magical impact on our ability to remember and recognize objects and events. “The nose makes the eyes remember”, as a renowned Finnish architect Juhani Pallasmaa puts it: a single droplet of a familiar fragrance is often enough to bring up a wild cocktail of emotions and recollections, even those that have long been forgotten.

A memory of a place, a person, or an experience is often a memory of their smell that lingers long after the odor is gone. J. Douglas Porteous, Professor of Geography at the University of Victoria, coined the term Smellscape to describe how a collective of smells in each particular area form our perception, define our attitude, and craft our recollection of it.

To put it simply, we choose to avoid beautiful places and forget delicious meals when their odors are not to our liking. Pleasant aromas, on the other hand, alter our memory, make us overlook flaws and defects, or even fall in love.

With such an immense power that scents hold over our perception of reality, it comes as no surprise they have long become a tool in the hands of brand and service designers.

Scented Advertising

What do a luxury car brand, a cosmetics store, and a carnival ride have in common? The answer is that they all have their own distinct scents.

Carefully crafted fragrances are widely used to create brand identities, make powerful impressions, and differentiate brands “emotionally and memorably”.

Some choose to complement visual identities with subtle, tailored aromas. 12.29, a creative “olfactive branding company,” developed the “scent identity” for Cadillac, a “symbol of self-expression representing the irrepressible pursuit of life.”

The branded Cadillac scent is diffused in dealerships and auto shows around the world, evoking a sense of luxury and class. Customers are expected to remember Cadillac better for its “signature nutty coffee, dark leather, and resinous amber notes”, forging a strong emotional connection with the brand.

Next time they think of Cadillac, their brain will recall its signature fragrance and the way it made them feel. Cadillac is ready to bet they will not even consider other brands afterwards.

Others may be less subtle and employ more aggressive, fragrant marketing tactics. LUSH, a British cosmetics retailer, is known for its distinct smells. Although even the company co-founder admits that odors can be overwhelming for some, LUSH’s scents play an important role in crafting the brand’s identity.

Indeed, the aroma of their stores is so recognizable that it lures customers in from afar with ease, and few walk away without forever remembering the brand’s distinct smell.

However, retail is not the only area that employs discernible smells.

Disney takes a holistic approach to service design, carefully considering every aspect that influences customer satisfaction. Smells have long been a part of the signature “Disney experience”: the main street smells like pastry and popcorn, Spaceship Earth is filled with the burning wood aroma, and Soarin’ is accompanied by notes of orange and pine.

Dozens of scent-emitting devices, Smellitzers, are responsible for adding scents to each experience. Deployed around each park and perfectly synced with every other sensory stimulus, they “shoot scents toward passersby” and “trigger memories of childhood nostalgia.”

As shown in the patent, Smellitzer is a rather simple odor delivery system designed to “enhance the sense of flight created in the minds of the passengers.” Scents are carefully curated and manufactured to evoke precise emotions without disrupting the ride experience.

Disney’s attractions, lanes, and theaters are packed with smell-emitting gadgets that distribute sweet and savoury notes. The visitors barely notice the presence of added scents, but later inevitably experience a sudden but persistent urge to return to the park.

Could it be something in the air, perhaps?

Well-curated, timely delivered, recognizable scents can be a powerful ally in the hands of a designer.

They can soothe a passenger during a long flight with the subtle notes of chamomile and mint or seduce a hungry shopper with the familiar aroma of freshly baked cinnamon buns. Scents can create and evoke great memories, amplify positive emotions, or turn casual buyers into eager and loyal consumers.

Unfortunately, smells can also ruin otherwise decent experiences.

Scented Entertainment

Why Fragrant Cinema Failed

In 1929, Aldous Huxley, author of the dystopian novel Brave New World, published an essay “Silence is Golden”, reflecting on his first experience watching a sound film. Huxley despised cinema, calling it the “most frightful creation-saving device for the production of standardized amusement”, and the addition of sound made the writer concerned for the future of entertainment. Films engaged multiple senses but demanded no intellectual involvement, becoming more accessible, more immersive, and, as Huxley feared, more influential.

“Brave New World,” published in 1932, features the cinema of the future — a multisensory entertainment complex designed to distract society from seeking a deeper sense of purpose in life. Attendees enjoy a “scent organ” playing “a delightfully refreshing Herbal Capriccio — rippling arpeggios of thyme and lavender, of rosemary, basil, myrtle, tarragon,” and get to experience every physical stimulation imaginable.

Huxley’s critical take on the state of the entertainment industry was spot-on. Obsessed with the idea of multisensory entertainment, studios did not take long to begin investing in immersive experiences. The 1950s were the age of experiments designed to attract more viewers: colored cinema, 3D films, and, of course, scented movies.

In 1960, two films hit the American theaters: Scent of Mystery, accompanied by the odor-delivery technology called “Smell–O–Vision”, and Behind the Great Wall, employing the process named AromaRama. Smell–O–Vision was designed to transport scents through tubes to each seat, much like Disney’s Smellitzers, whereas AromaRama distributed smells through the theater’s ventilation.

Both scented movies were panned by critics and viewers alike. In his review for the New York Times, Bosley Crowther wrote that “...synthetic smells [...] occasionally befit what one is viewing, but more often they confuse the atmosphere”. Audiences complained about smells being either too subtle or too overpowering and the machines disrupting the viewing experience.

The groundbreaking technologies were soon forgotten, and all plans to release more scented films were scrapped.

Why did odors, so efficient at manufacturing nostalgic memories of an amusement park, fail to entertain the audience at the movies? On the one hand, it may be attributed to the technological limitations of the time. For instance, AromaRama diffused the smells into the ventilation, which significantly delayed the delivery and required scents to be removed between scenes. Suffice it to say the viewers did not enjoy the experience.

However, there could be other possible explanations.

First of all, digital entertainment is traditionally odorless. Viewers do not anticipate movies to be accompanied by smells, and their brains are conditioned to ignore them. Researchers call it “inattentional anosmia”: people connect their enjoyment with what they see on the screen, not what they smell or taste.

Moreover, background odors tend to fade and become less pronounced with time. A short exposure to a pleasant odor may be complementary. For instance, viewers could smell orange as the character in “Behind the Great Wall” cut and squeezed the fruit: an “impressive” moment, as admitted by critics. However, left to linger, even the most pleasant scents can leave the viewer uninvolved or irritated.

Finally, cinema does not require active sensory involvement. Viewers sit still in silence, rarely even moving their heads, while their sight and hearing are busy consuming and interpreting the information. Immersion requires suspension of disbelief: well-crafted films force the viewer to forget the reality around them, but the addition of scents may disrupt this state, especially if scents are not relevant or well-crafted.

For the scented movie to engage the audience, smells must be integrated into the film’s events and play an important role in the viewing experience. Their delivery must be impeccable: discreet, smooth, and perfectly timed. In time, perhaps, we may see the revival of scented cinema. Until then, rare auteur experiments and 4D–cinema booths at carnivals will remain the only places where fragrant films will live on.

Fortunately, the lessons from the early experiments helped others pave the way for the future of fragrant entertainment.

Immersive Gaming

Unlike movies, video games require active participation. Players are involved in crafting the narrative of the game and, as such, may expect (and appreciate) a higher degree of realism. Virtual Reality is a good example of technology designed for full sensory stimulation.

Modern headsets are impressive, but several companies are already working hard on the next-gen tech for immersive gaming. Meta and Manus are developing gloves that make virtual elements tangible. Teslasuit built a full-body suit that captures motion and biometry, provides haptic feedback, and emulates sensations for objects in virtual reality. We may be just a few steps away from virtual multi-sensory entertainment being as widespread as mobile phones.

Scents are coming to VR, too, albeit at a slower pace, with a few companies already selling devices for fragrant entertainment. For instance, GameScent has developed a cube that can distribute up to 8 smells, from “gunfire” and “explosion” to “forest” and “storm”, using AI to sync the odors with the events in the game.

The vast majority of experiments, however, occur in the labs, where researchers attempt to understand how smells impact gamers and test various concepts. Some assign smells to locations in a VR game and distribute them to players; others have the participants use a hand-held device to “smell” objects in the game.

The majority of studies demonstrate promising results. The addition of fragrances creates a deeper sense of immersion and enhances realism in virtual reality and in a traditional gaming setting.

A notable example of the latter is “Tainted”, an immersive game based on South-East Asian folklore, developed by researchers in 2017. The objective of the game is to discover and burn banana trees, where the main antagonist of the story — a mythical vengeful spirit named Pontianak — is traditionally believed to hide.

The way “Tainted” incorporates smells into the gameplay is quite unique. A scent-emitting module, placed in front of the player, diffuses fragrances to complement the narrative. For instance, the smell of banana signals the ghost’s presence, whereas pineapple aroma means that a flammable object required to complete the quest is nearby. Odors inform the player of dangers, give directions, and become an integral part of the gaming experience, like visuals and sound.

Scented Learning

Some of the most creative examples of scented learning come from places that combine education and entertainment, most notably, museums.

Jorvik Viking Centre is famous for its use of “smells of Viking-age York” to capture the unique atmosphere of the past. Its scented halls, holograms, and entertainment programs turn a former archeological site into a carnival ride that teleports visitors into the 10th century to immerse them into the daily life of the Vikings.

Authentic smells are the center’s distinct feature, an integral part of its branding and marketing, and an important addition to its collection. Smells are responsible for making Jorvik exhibitions so memorable, and hopefully, for visitors walking away with a few Viking trivia facts firmly stuck in their heads.

At the same time, learning is becoming increasingly more digital, from mobile apps for foreign languages to student portals and online universities. Smart devices strive to replace classrooms with their analog textbooks, papers, gel pens, and teachers. Virtual Reality is a step towards the future of immersive digital education, and odors may play a more significant role in making it even more efficient.

Education will undoubtedly continue leveraging the achievements of the digital revolution to complement its existing tools. Tablets and Kindles are on their way to replace textbooks and pens. Phones are no longer deemed a harmful distraction that causes brain cancer.

Odors, in turn, are becoming “learning supplements”. Teachers and parents have access to personalized diffusers that distribute the smell of peppermint to enhance students’ attention. Large scent-emitting devices for educational facilities are available on the market, too.

At the same time, inspired to figure out the way to upload knowledge straight into our brains, we’ve discovered a way to learn things in our sleep using smells. Several studies have shown that exposure to scents during sleep significantly improves cognitive abilities and memory. More than that, smells can activate our memory while we sleep and solidify what we have learnt while awake.

Odors may not replace textbooks and lectures, but their addition will make remembering and recalling things significantly easier. In fact, researchers from MIT built and tested a wearable scent-emitting device that can be used for targeted memory reactivation.

In time, we will undoubtedly see more smart devices that make use of scents for memory enhancement, training, and entertainment. Integrated into the ecosystems of gadgets, olfactory wearables and smart home appliances will improve our well-being, increase productivity, and even detect early symptoms of illnesses.

There is, however, a caveat.

The Challenging UX Of Scents

We know very little about smells.

Until 2004, when Richard Axel and Linda Buck received a Nobel Prize for identifying the genes that control odor receptors, we didn’t even know how our bodies processed smells or that different areas in our brains were activated by different odors.

We know that our experience with smells is deep and intimate, from the memories they create to the emotions they evoke. We are aware that unpleasant scents linger longer and have a stronger impact on our mental state and memory. Finally, we understand that intensity, context, and delivery matter as much as the scent itself and that a decent aroma diffused out of place ruins the experience.

Thus, if we wish to build devices that make the best use of scents, we need to follow a few simple principles.

Design Principle #1: Tailor The Scents To Each User

In his article about Smellscapes, J. Douglas Porteous writes:

“The smell of a certain institutional soap may carry a person back to the purgatory of boarding school. A particular floral fragrance reminds one of a lost love. A gust of odour from an ethnic spice emporium may waft one back, in memory, to Calcutta.”

— J. Douglas Porteous

Smells revive hidden memories and evoke strong emotions, but their connection to our minds is deeply personal. A rich, spicy aroma of freshly roasted coffee beans will not have the same impact on different people, and in order to use scents in learning, we need to tailor the experience to each user.

To maximize the potential of odors in immersion and learning, we need to understand which smells have the most impact on the user. By filtering out the smells that the user finds unpleasant or associates with sad events in their past, we can reduce any potential negative effect on their well-being or memory.

Design Principle #2: Stick To The Simpler Smells

Humans are notoriously bad at describing odors.

Very few languages in the world feature specific terms for smells. For instance, the speakers of Jahai, a language in Malaysia, enjoy the privilege of having specific names for scents like “bloody smell that attracts tigers” and “wild mango, wild ginger roots, bat caves, and petrol”.

English, on the other hand, often uses adjectives associated with flavor (“smoky vanilla”) or comparison (“smells like orange”) to describe scents. For centuries, we have been trying to work out a system that could help cluster odors.

Aristotle classified all odors into six groups: sweet, acid, severe, fatty, sour, and fetid (unpleasant). Carl Linnaeus expanded it to seven types: aromatic, fragrant, alliaceous (garlic), ambrosial (musky), hircinous (goaty), repulsive, and nauseous. Hans Henning arranged all scent groups in a prism. None of the existing classifications, however, help accurately describe complex smells, which inevitably makes it harder to recreate them.

Academics have developed several comprehensive lists, for instance, the Odor Character Profiling that contains 146 unique descriptors. Pleasant smells from the list are easier to reproduce than unique and sophisticated odors.

Although an aroma of the “warm touch of an early summer sun” may work better for a particular user than the smell of an apple pie, the high price of getting the scent wrong makes it a reasonable trade-off.

Design Principle #3: Ensure Stable And Convenient Delivery

Nothing can ruin a good olfactory experience more than an imperfect delivery system.

Disney’s Smellitzers and Jorvik’s scented exhibition set the standard for discreet, contextual, and consistent inclusion of smells to complement the experience. Their diffusers are well-concealed, and odors do not come off as overwhelming or out of place.

On the other hand, the failure of scented movies from the 1950s can at least partially be attributed to poorly designed aroma delivery systems. Critics remembered that even the purifying treatment that was used to clear the theater air between scenes left a “sticky, sweet” and “upsetting” smell.

Good delivery systems are often simple and focus on augmenting the experience without disrupting it. For instance, eScent, a scent-enhanced FFP3 mask, is engineered to reduce stress and improve the well-being of frontline workers. The mask features a slot for applicators infused with essential oil; users can choose fragrances and swap the applicator whenever they want. Besides that, eScent is no different from its “analog” predecessor: it does not require special equipment or preparation, and the addition of smells does not alter the experience of wearing a mask.

In The Not Too Distant Future

We may know little about smells, but we are steadily getting closer to harnessing their power.

In 2022, Alex Wiltschko, a former Google staff research scientist, founded Osmo, a company dedicated to “giving computers a sense of smell.” In the long run, Osmo aspires to use its knowledge to manufacture scents on demand from sustainable synthetic materials.

Today, the company operates as a research lab, using a trained AI to predict the smell of a substance by analyzing its molecular structure. Osmo’s first tests demonstrated some promising results, with the machine accurately describing the scents in 53% of cases.

Should Osmo succeed at building a machine capable of recognizing and predicting smells, it will change the digital world forever. How will we interact with our smart devices? How will we use their newly discovered sense of smell to exchange information, share precious memories with each other, or relive moments from the past? Is now the right time for us to come up with ideas, products, and services for the future?

Odors are a booming industry that offers designers and engineers a unique opportunity to explore new and brave concepts. With the help of smells, we can transform entire industries, from education to healthcare, crafting immersive multi-sensory experiences for learning and leisure.

Smells are a powerful tool that requires precision and perfection to reach the desired effect. Our past shortcomings may have tainted the reputation of scented experiences, but recent progress demonstrates that we have learnt our lessons well. Modern technologies make it even easier to continue the explorations and develop new ways to use smells in entertainment, learning, and wellness — in the real world and beyond.

Our digital spaces may be devoid of scents, but they will not remain odorless for long.

]]>
hello@smashingmagazine.com (Kristian Mikhel)
<![CDATA[How To Hack Your Google Lighthouse Scores In 2024]]> https://smashingmagazine.com/2024/06/how-hack-google-lighthouse-scores-2024/ https://smashingmagazine.com/2024/06/how-hack-google-lighthouse-scores-2024/ Tue, 11 Jun 2024 18:00:00 GMT This article is sponsored by Sentry.io

Google Lighthouse has been one of the most effective ways to gamify and promote web page performance among developers. Using Lighthouse, we can assess web pages based on overall performance, accessibility, SEO, and what Google considers “best practices”, all with the click of a button.

We might use these tests to evaluate out-of-the-box performance for front-end frameworks or to celebrate performance improvements gained by some diligent refactoring. And you know you love sharing screenshots of your perfect Lighthouse scores on social media. It’s a well-deserved badge of honor worthy of a confetti celebration.

Just the fact that Lighthouse gets developers like us talking about performance is a win. But, whilst I don’t want to be a party pooper, the truth is that web performance is far more nuanced than this. In this article, we’ll examine how Google Lighthouse calculates its performance scores, and, using this information, we will attempt to “hack” those scores in our favor, all in the name of fun and science — because in the end, Lighthouse is simply a good, but rough guide for debugging performance. We’ll have some fun with it and see to what extent we can “trick” Lighthouse into handing out better scores than we may deserve.

But first, let’s talk about data.

Field Data Is Important

Local performance testing is a great way to understand if your website performance is trending in the right direction, but it won’t paint a full picture of reality. The World Wide Web is the Wild West, and collectively, we’ve almost certainly lost track of the variety of device types, internet connection speeds, screen sizes, browsers, and browser versions that people are using to access websites — all of which can have an impact on page performance and user experience.

Field data — and lots of it — collected by an application performance monitoring tool like Sentry from real people using your website on their devices will give you a far more accurate report of your website performance than your lab data collected from a small sample size using a high-spec super-powered dev machine under a set of controlled conditions. Philip Walton reported in 2021 that “almost half of all pages that scored 100 on Lighthouse didn’t meet the recommended Core Web Vitals thresholds” based on data from the HTTP Archive.

Web performance is more than a single core web vital metric or Lighthouse performance score. What we’re talking about goes way beyond the type of raw data we’re working with.

Web Performance Is More Than Numbers

Speed is often the first thing that comes up when talking about web performance — just how long does a page take to load? This isn’t the worst thing to measure, but we must bear in mind that speed is probably influenced heavily by business KPIs and sales targets. Google released a report in 2018 suggesting that the probability of a bounce increases by 32% as the page load time goes from one second to three seconds, and by 123% as it goes from one second to ten seconds. So, we must conclude that converting more sales requires reducing bounce rates. And to reduce bounce rates, we must make our pages load faster.

But what does “load faster” even mean? At some point, we’re physically incapable of making a web page load any faster. Humans — and the servers that connect them — are spread around the globe, and modern internet infrastructure can only deliver so many bytes at a time.

The bottom line is that page load is not a single moment in time. In an article titled “What is speed?” Google explains that a page load event is:

[…] “an experience that no single metric can fully capture. There are multiple moments during the load experience that can affect whether a user perceives it as ‘fast’, and if you just focus solely on one, you might miss bad experiences that happen during the rest of the time.”

The key word here is experience. Real web performance is less about numbers and speed than it is about how we experience page load and page usability as users. And this segues nicely into a discussion of how Google Lighthouse calculates performance scores. (It’s much less about pure speed than you might think.)

How Google Lighthouse Performance Scores Are Calculated

The Google Lighthouse performance score is calculated using a weighted combination of scores based on Core Web Vitals metrics (i.e., Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS)) and other speed-related metrics (i.e., First Contentful Paint (FCP), Speed Index (SI), and Total Blocking Time (TBT)) that are observable throughout the page load timeline.

This is how the metrics are weighted in the overall score:

Metric Weighting (%)
Total Blocking Time 30
Cumulative Layout Shift 25
Largest Contentful Paint 25
First Contentful Paint 10
Speed Index 10
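As a rough sketch, the table above translates into a simple weighted sum. (The function shape and input format below are illustrative, not Lighthouse’s actual code; each input is the metric’s 0–1 score already converted from its raw value, not raw milliseconds.)

```javascript
// Metric weights from the table above.
const WEIGHTS = { TBT: 0.30, CLS: 0.25, LCP: 0.25, FCP: 0.10, SI: 0.10 };

// Combine per-metric scores (each between 0 and 1) into the 0–100 overall score.
function overallScore(metricScores) {
  let total = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    total += weight * metricScores[metric];
  }
  return Math.round(total * 100);
}

// A page that is perfect everywhere except for a poor TBT score:
console.log(overallScore({ TBT: 0.2, CLS: 1, LCP: 1, FCP: 1, SI: 1 })); // 76
```

Notice how a single weak metric with a 30% weighting drags an otherwise perfect page down to 76.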

The weighting assigned to each score gives us insight into how Google prioritizes the different building blocks of a good user experience:

1. A Web Page Should Respond to User Input

The highest weighted metric is Total Blocking Time (TBT), a metric that looks at the total time after the First Contentful Paint (FCP) to help indicate where the main thread may be blocked long enough to prevent speedy responses to user input. The main thread is considered “blocked” any time there’s a JavaScript task running on the main thread for more than 50ms. Minimizing TBT ensures that a web page responds to physical user input (e.g., key presses, mouse clicks, and so on).
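The TBT calculation just described can be modeled as a simple sum over main-thread task durations. (This is a simplified sketch: real Lighthouse only counts tasks observed roughly between FCP and Time to Interactive.)

```javascript
// Any main-thread task longer than 50 ms is "blocking"; only the portion
// beyond the 50 ms budget counts toward Total Blocking Time.
const BLOCKING_BUDGET_MS = 50;

function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs
    .filter((duration) => duration > BLOCKING_BUDGET_MS)
    .reduce((sum, duration) => sum + (duration - BLOCKING_BUDGET_MS), 0);
}

// Three tasks: 30 ms (not blocking), 120 ms (70 ms over budget), 90 ms (40 ms over).
console.log(totalBlockingTime([30, 120, 90])); // 110
```

So one 120 ms task hurts TBT more than several 49 ms tasks, which contribute nothing at all.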

2. A Web Page Should Load Useful Content With No Unexpected Visual Shifts

The next most weighted Lighthouse metrics are Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS). LCP marks the point in the page load timeline when the page’s main content has likely loaded and is therefore useful.

At the point where the main content has likely loaded, you also want to maintain visual stability to ensure that users can use the page and are not affected by unexpected visual shifts (CLS). A good LCP score is anything less than 2.5 seconds (which is a lot higher than we might have thought, given we are often trying to make our websites as fast as possible).

3. A Web Page Should Load Something

The First Contentful Paint (FCP) metric marks the first point in the page load timeline where the user can see something on the screen, and the Speed Index (SI) measures how quickly content is visually displayed during page load over time until the page is “complete”.

Your page is scored based on the speed indices of real websites using performance data from the HTTP Archive. A good FCP score is less than 1.8 seconds and a good SI score is less than 3.4 seconds. Both of these thresholds are higher than you might expect when thinking about speed.

Usability Is Favored Over Raw Speed

Google Lighthouse’s performance scoring is, without a doubt, less about speed and more about usability. Your SI and FCP could be super quick, but if your LCP takes too long to paint, and if CLS is caused by large images or external content taking some time to load and shifting things visually, then your overall performance score will be lower than if your page was a little slower to render the FCP but didn’t cause any CLS. Ultimately, if the page is unresponsive due to JavaScript blocking the main thread for more than 50ms, your performance score will suffer more than if the page was a little slow to paint the FCP.

To understand more about how the weightings of each metric contribute to the final performance score, you can play about with the sliders on the Lighthouse Scoring Calculator, and here’s a rudimentary table demonstrating the effect of skewed individual metric weightings on the overall performance score, proving that page usability and responsiveness is favored over raw speed.

Description FCP (ms) SI (ms) LCP (ms) TBT (ms) CLS Overall Score
Slow to show something on screen 6000 0 0 0 0 90
Slow to load content over time 0 5000 0 0 0 90
Slow to load the largest part of the page 0 0 6000 0 0 76
Visual shifts occurring during page load 0 0 0 0 0.82 76
Page is unresponsive to user input 0 0 0 2000 0 70

The overall Google Lighthouse performance score is calculated by converting each raw metric value into a score from 0 to 100 according to where it falls on its Lighthouse scoring distribution, which is a log-normal distribution derived from the performance metrics of real website performance data from the HTTP Archive. There are two main takeaways from this mathematically overloaded information:

  1. Your Lighthouse performance score is plotted against real website performance data, not in isolation.
  2. Given that the scoring uses log-normal distribution, the relationship between the individual metric values and the overall score is non-linear, meaning you can make substantial improvements to low-performance scores quite easily, but it becomes more difficult to improve an already high score.

Read more about how metric scores are determined, including a visualization of the log-normal distribution curve on developer.chrome.com.
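To make the log-normal conversion concrete, here is a sketch of the scoring curve. The two control points below (p10 = 2,500 ms, median = 4,000 ms) approximate the ones Lighthouse publishes for mobile LCP; the error-function approximation is the standard Abramowitz & Stegun polynomial. Treat this as an illustration of the shape of the curve, not Lighthouse’s exact implementation.

```javascript
// Numerical approximation of the error function (Abramowitz & Stegun 7.1.26).
function erf(x) {
  const sign = x < 0 ? -1 : 1;
  const ax = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * ax);
  const poly =
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t - 0.284496736) *
      t + 0.254829592) * t;
  return sign * (1 - poly * Math.exp(-ax * ax));
}

// Map a raw metric value to a 0–1 score on a log-normal curve defined by two
// control points: the median raw value scores 0.5, the 10th percentile scores 0.9.
function logNormalScore(value, median, p10) {
  // erfinv(-0.8) ≈ -0.9061938, precomputed so we don't need an inverse erf.
  const sigma = (Math.log(median) - Math.log(p10)) / (Math.SQRT2 * 0.9061938);
  const standardized = (Math.log(value) - Math.log(median)) / (sigma * Math.SQRT2);
  return 0.5 * (1 - erf(standardized)); // complementary CDF of the log-normal
}

console.log(logNormalScore(4000, 4000, 2500).toFixed(2)); // "0.50" at the median
console.log(logNormalScore(2500, 4000, 2500).toFixed(2)); // "0.90" at the 10th percentile
console.log(logNormalScore(1500, 4000, 2500).toFixed(2)); // close to a perfect score
```

The non-linearity is visible in the numbers: shaving 1,500 ms off a 4,000 ms LCP lifts the score by 40 points, while the next 1,000 ms of improvement yields only a few more.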

Can We “Trick” Google Lighthouse?

I appreciate Google’s focus on usability over pure speed in the web performance conversation. It urges developers to think less about aiming for raw numbers and more about the real experiences we build. That being said, I’ve wondered whether today in 2024, it’s possible to fool Google Lighthouse into believing that a bad page in terms of usability and usefulness is actually a great one.

I put on my lab coat and science goggles to investigate. All tests were conducted:

  • Using the Chromium Lighthouse plugin,
  • In an incognito window in the Arc browser,
  • Using the “navigation” and “mobile” settings (apart from where described differently),
  • By me, in a lab (i.e., no field data).

That all being said, I fully acknowledge that my controlled test environment contradicts my advice at the top of this post, but the experiment is an interesting ride nonetheless. What I hope you’ll take away from this is that Lighthouse scores are only one piece — and a tiny one at that — of a very large and complex web performance puzzle. And, without field data, I’m not sure any of this matters anyway.

How to Hack FCP and LCP Scores

TL;DR: Show the smallest amount of LCP-qualifying content on load to boost the FCP and LCP scores until the Lighthouse test has likely finished.

FCP marks the first point in the page load timeline where the user can see anything at all on the screen, while LCP marks the point in the page load timeline when the main page content (i.e., the largest text or image element) has likely loaded. A fast LCP helps reassure the user that the page is useful. “Likely” and “useful” are the important words to bear in mind here.

What Counts as an LCP Element

The types of elements on a web page considered by Lighthouse for LCP are:

  • <img> elements,
  • <image> elements inside an <svg> element,
  • <video> elements,
  • An element with a background image loaded using the url() function, (and not a CSS gradient), and
  • Block-level elements containing text nodes or other inline-level text elements.

The following elements are excluded from LCP consideration due to the likelihood they do not contain useful content:

  • Elements with zero opacity (invisible to the user),
  • Elements that cover the full viewport (likely to be background elements), and
  • Placeholder images or other images with low entropy (i.e., low informational content, such as a solid-colored image).

However, the notion of an image or text element being useful is completely subjective in this case and generally out of the realm of what machine code can reliably determine. For example, I built a page containing nothing but a <h1> element where, after 10 seconds, JavaScript inserts more descriptive text into the DOM and hides the <h1> element.

Lighthouse considers the heading element to be the LCP element in this experiment. At this point, the page load timeline has finished, but the page’s main content has not loaded, even though Lighthouse thinks it is likely to have loaded within those 10 seconds. Lighthouse still awards us a perfect score of 100 even if the heading is replaced by a single punctuation mark, such as a full stop, which is even less useful.

This test suggests that if you need to load page content via client-side JavaScript, you’ll want to avoid displaying a skeleton loader screen since that requires loading more elements on the page. And since we know the process will take some time — and that we can offload the network request from the main thread to a web worker so it won’t affect the TBT — we can use some arbitrary “splash screen” that contains a minimal viable LCP element (for better FCP scoring). This way, we’re giving Lighthouse the impression that the page is useful to users quicker than it actually is.

All we need to do is include a valid LCP element that contains something that counts as the FCP. While I would never recommend loading your main page content via client-side JavaScript in 2024 (serve static HTML from a CDN instead or build as much of the page as you can on a server), I would definitely not recommend this “hack” for a good user experience, regardless of what the Lighthouse performance score tells you. This approach also won’t earn you any favors with search engines indexing your site, as the robots are unable to discover the main content while it is absent from the DOM.

I also tried this experiment with a variety of random images representing the LCP to make the page even less useful. But given that I used small file sizes — made smaller and converted into “next-gen” image formats using a third-party image API to help with page load speed — it seemed that Lighthouse interpreted the elements as “placeholder images” or images with “low entropy”. As a result, those images were disqualified as LCP elements, which is a good thing and makes the LCP slightly less hackable.

View the demo page and use Chromium DevTools in an incognito window to see the results yourself.

This hack, however, probably won’t hold up in many other use cases. Discord, for example, uses the “splash screen” approach when you hard-refresh the app in the browser, and it receives a sad 29 performance score.

Compared to my DOM-injected demo, the LCP element was calculated as some content behind the splash screen rather than elements contained within the splash screen content itself, given there were one or more large images in the focussed text channel I tested on. One could argue that Lighthouse scores are less important for apps that are behind authentication anyway: they don’t need to be indexed by search engines.

There are likely many other situations where apps serve user-generated content and you might be unable to control the LCP element entirely, particularly regarding images.

For example, if you can control the sizes of all the images on your web pages, you might be able to take advantage of an interesting hack or “optimization” (in very large quotes) to arbitrarily game the system, as was the case of RentPath. In 2021, developers at RentPath managed to improve their Lighthouse performance score by 17 points when increasing the size of image thumbnails on a web page. They convinced Lighthouse to calculate the LCP element as one of the larger thumbnails instead of a Google Map tile on the page, which takes considerably longer to load via JavaScript.

The bottom line is that you can gain higher Lighthouse performance scores if you are aware of your LCP element and in control of it, whether that’s through a hack like RentPath’s or mine or a real-deal improvement. That being said, whilst I’ve described the splash screen approach as a hack in this post, that doesn’t mean this type of experience couldn’t offer a purposeful and joyful experience. Performance and user experience are about understanding what’s happening during page load, and it’s also about intent.

How to Hack CLS Scores

TL;DR: Defer loading content that causes layout shifts until the Lighthouse test has likely finished to make the test think it has enough data. CSS transforms do not negatively impact CLS, except if used in conjunction with new elements added to the DOM.

CLS is measured on a decimal scale; a good score is less than 0.1, and a poor score is greater than 0.25. Lighthouse calculates CLS from the largest burst of unexpected layout shifts that occur during a user’s time on the page based on a combination of the viewport size and the movement of unstable elements in the viewport between two rendered frames. Smaller one-off instances of layout shift may be inconsequential, but a bunch of layout shifts happening one after the other will negatively impact your score.
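Google defines each individual layout shift score as the product of two fractions. This sketch illustrates that published formula; Lighthouse then aggregates shifts over the worst “session window” as described above.

```javascript
// Each layout shift scores: impact fraction (how much of the viewport the
// unstable elements affected, before plus after the shift) multiplied by
// distance fraction (how far they moved relative to the viewport's largest dimension).
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// Unstable elements affecting 75% of the viewport, moving by 25% of its height:
console.log(layoutShiftScore(0.75, 0.25)); // 0.1875 — well above the 0.1 "good" threshold
```

This is why a large banner popping in above the fold is so costly: it both covers a big slice of the viewport and pushes everything below it a long way down.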

If you know your page contains annoying layout shifts on load, you can defer them until after the page load event has been completed, thus fooling Lighthouse into thinking there is no CLS. This demo page I created, for example, earns a CLS score of 0.143 even though JavaScript immediately starts adding new text elements to the page, shifting the original content up. By pausing the JavaScript that adds new nodes to the DOM by an arbitrary five seconds with a setTimeout(), Lighthouse doesn’t capture the CLS that takes place.

This other demo page earns a performance score of 100, even though it is arguably less useful and usable than the last page, given that the added elements pop in seemingly at random without any user interaction.

Whilst it is possible to defer layout shift events for a page load test, this hack definitely won’t work for field data and user experience over time (which is a more important focal point, as we discussed earlier). If we perform a “time span” test in Lighthouse on the page with deferred layout shifts, Lighthouse will correctly report a non-green CLS score of around 0.186.

If you do want to intentionally create a chaotic experience similar to the demo, you can use CSS animations and transforms to more purposefully pop the content into view on the page. In Google’s guide to CLS, they state that “content that moves gradually and naturally from one position to another can often help the user better understand what’s going on and guide them between state changes” — again, highlighting the importance of user experience in context.

On this next demo page, I’m using CSS transform to scale() the text elements from 0 to 1 and move them around the page. The transforms fail to trigger CLS because the text nodes are already in the DOM when the page loads. That said, I did observe in my testing that if the text nodes are added to the DOM programmatically after the page loads via JavaScript and then animated, Lighthouse will indeed detect CLS and score things accordingly.

You Can’t Hack A Speed Index Score

The Speed Index score is based on the visual progress of the page as it loads. The quicker your content loads nearer the beginning of the page load timeline, the better.

It is possible to hack the Speed Index into thinking a page load timeline is slower than it actually is. Conversely, there’s no real way to “fake” loading content faster than it does. The only way to make your Speed Index score better is to optimize your web page for loading as much of the page as possible, as soon as possible. Whilst not entirely realistic in the web landscape of 2024 (mainly because it would put designers out of a job), you could go all-in to lower your Speed Index as much as possible by:

  • Delivering static HTML web pages only (no server-side rendering) straight from a CDN,
  • Avoiding images on the page,
  • Minimizing or eliminating CSS, and
  • Preventing JavaScript or any external dependencies from loading.

You Also Can’t (Really) Hack A TBT Score

TBT measures the total time after the FCP where the main thread was blocked by JavaScript tasks for long enough to prevent responses to user input. A good TBT score is anything lower than 200ms.

JavaScript-heavy web applications (such as single-page applications) that perform complex state calculations and DOM manipulation on the client on page load (rather than on the server before sending rendered HTML) are prone to suffering poor TBT scores. In this case, you could probably hack your TBT score by deferring all JavaScript until after the Lighthouse test has finished. That said, you’d need to provide some kind of placeholder content or loading screen to satisfy the FCP and LCP and to inform users that something will happen at some point. Plus, you’d have to go to extra lengths to hack around the front-end framework you’re using. (You don’t want to load a placeholder page that, at some point in the page load timeline, loads a separate React app after an arbitrary amount of time!)

What’s interesting is that while we’re still doing all sorts of fancy things with JavaScript in the client, advances in the modern web ecosystem are helping us all reduce the probability of a less-than-stellar TBT score. Many front-end frameworks, in partnership with modern hosting providers, are capable of rendering pages and processing complex logic on demand without any client-side JavaScript. While eliminating JavaScript on the client is not the goal, we certainly have a lot of options to use a lot less of it, thus minimizing the risk of doing too much computation on the main thread on page load.
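Short of eliminating client-side JavaScript, the honest way to improve TBT is to keep any single main-thread task under the 50ms budget by splitting long work into chunks and yielding back to the event loop between them. A minimal sketch (the helper name and chunk size are my own, not from any demo in this article):

```javascript
// Process a large array without blocking the main thread: do a small chunk of
// work, then yield via setTimeout so input events can be handled in between.
function processInChunks(items, processItem, chunkSize = 100) {
  return new Promise((resolve) => {
    let i = 0;
    function next() {
      const end = Math.min(i + chunkSize, items.length);
      for (; i < end; i++) processItem(items[i]);
      if (i < items.length) setTimeout(next, 0); // yield between chunks
      else resolve();
    }
    next();
  });
}
```

You would `await processInChunks(rows, renderRow)` instead of looping over thousands of rows in one go; each chunk becomes its own short task, so none of them exceeds the blocking threshold.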

Bottom Line: Lighthouse Is Still Just A Rough Guide

Google Lighthouse can’t detect everything that’s wrong with a particular website. Whilst Lighthouse performance scores prioritize page usability in terms of responding to user input, it still can’t detect every terrible usability or accessibility issue in 2024.

In 2019, Manuel Matuzović published an experiment where he intentionally created a terrible page that Lighthouse thought was pretty great. I hypothesized that five years later, Lighthouse might do better; but it doesn’t.

On this final demo page I put together, input events are disabled by CSS and JavaScript, making the page technically unresponsive to user input. After five seconds, JavaScript flips a switch and allows you to click the button. The page still scores 100 for both performance and accessibility.

You really can’t rely on Lighthouse as a substitute for usability testing and common sense.

Some More Silly Hacks

As with everything in life, there’s always a way to game the system. Here are some more tried and tested guaranteed hacks to make sure your Lighthouse performance score artificially knocks everyone else’s out of the park:

  • Only run Lighthouse tests using the fastest and highest-spec hardware.
  • Make sure your internet connection is the fastest it can be; relocate if you need to.
  • Never use field data, only lab data, collected using the aforementioned fastest and highest-spec hardware and super-speed internet connection.
  • Rerun the tests in the lab using different conditions and all the special code hacks I described in this post until you get the result(s) you want to impress your friends, colleagues, and random people on the internet.

Note: The best way to learn about web performance and how to optimize your websites is to do the complete opposite of everything we’ve covered in this article all of the time. And finally, to seriously level up your performance skills, use an application monitoring tool like Sentry. Think of Lighthouse as the canary and Sentry as the real-deal production-data-capturing, lean, mean, web vitals machine.

And finally-finally, here’s the link to the full demo site for educational purposes.

]]>
hello@smashingmagazine.com (Salma Alam-Naylor)
<![CDATA[Useful CSS Tips And Techniques]]> https://smashingmagazine.com/2024/06/css-tips-and-techniques/ https://smashingmagazine.com/2024/06/css-tips-and-techniques/ Fri, 07 Jun 2024 11:00:00 GMT If you’ve been in the web development game long enough, you might recall the days when CSS was utterly confusing and you had to come up with hacks and workarounds to make things work. Luckily, those days are over, and new features such as container queries, cascade layers, CSS nesting, the :has selector, grid and subgrid, and even new color spaces make CSS more powerful than ever before.

And the innovation doesn’t stop here. We also might have style queries and perhaps even state queries, along with balanced text-wrapping and CSS anchor positioning coming our way.

With all these lovely new CSS features on the horizon, in this post, we dive into the world of CSS with a few helpful techniques, a deep-dive into specificity, hanging punctuation, and self-modifying CSS variables. We hope they’ll come in handy in your work.

Cascade And Specificity Primer

Many fear the cascade and specificity in CSS. However, the concept isn’t as hard to get to grips with as one might think. To help you get more comfortable with two of the most fundamental parts of CSS, Andy Bell wrote a wonderful primer on the cascade and specificity.

The guide explains how certain CSS property types will be prioritized over others and dives deeper into specificity scoring to help you assess how likely it is that the CSS of a specific rule will apply. Andy uses practical examples to illustrate the concepts and simplifies the underlying mental model to make it easy to adopt and utilize. A power boost for your CSS skills.
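To make the scoring concrete, here is a small sketch of how competing selectors stack up (the selectors and colors are ours, for illustration only):

```css
/* Specificity is scored as a triplet (id, class, type); compare left to right. */
a { color: blue; }            /* (0,0,1) */
.nav a { color: green; }      /* (0,1,1), wins over (0,0,1) */
#site-nav a { color: red; }   /* (1,0,1), a single id outranks any number of classes */
```

For the same element, the red rule wins regardless of source order, because the id column is compared before the class column.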

Testing HTML With Modern CSS

Have you ever considered testing HTML with CSS instead of JavaScript? CSS selectors today are so powerful that it is actually possible to test for most kinds of HTML patterns using CSS alone. A proponent of the practice, Heydon Pickering summarized everything you need to know about testing HTML with CSS, whether you want to test accessibility, uncover HTML bloat, or check the general usability.

As Heydon points out, testing with CSS has quite a few benefits. Particularly if you work in the browser and prefer exploring visual regressions and inspector information over command-line logs, testing with CSS could be for you. It also shines in situations where you don’t have direct access to a client’s stack: just provide a test stylesheet, and clients can locate instances of bad patterns you have identified for them without having to onboard you to help them do so. Clever!
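A test stylesheet in this spirit might look like the following sketch (these selectors are ours, not Heydon’s actual tests, and are deliberately simplified):

```css
/* Flag images missing alternative text. */
img:not([alt]) {
  outline: 0.5em solid red;
}

/* Flag buttons with no visible text and no accessible name. */
button:empty:not([aria-label]):not([aria-labelledby]) {
  outline: 0.5em solid red;
}

/* Flag links that point nowhere. */
a:not([href]),
a[href=""],
a[href="#"] {
  outline: 0.5em solid orange;
}
```

Drop the stylesheet into any page, and the offending elements light up in the browser, no build step or test runner required.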

Self-Modifying CSS Variables

The CSS spec for custom properties does not allow a custom property to reference itself — although there are quite a few use cases where such a feature would be useful. To close the gap, Lea Verou proposed an inherit() function in 2018, which the CSSWG added to the specs in 2021. It hasn’t been edited into the spec yet, but Roman Komarov found a workaround that makes it possible to start emulating its behavior.

Roman’s approach uses container-style queries as a way to access the previous state of a custom property. It can be useful when you want to cycle through various hues without having a static list of values, to match the border-radius visually, or to nest menu lists, for example. The workaround is still strictly experimental (so do not use it in production!), but since it is likely that style queries will gain broad browser support before inherit(), it has great potential.
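To give a flavor of the trick in a much-simplified form (the class name and values here are ours, and, as noted, this is experimental code that should not ship to production), nested elements can read their parent’s custom property through a style query and derive their own value from it:

```css
/* Every element is a style-query container for its descendants by default.
   Each nested .level reads its parent's --depth and sets its own,
   effectively incrementing the value one nesting level at a time. */
.level {
  --depth: 0;
  background: oklch(90% 0.1 calc(var(--depth) * 60));
}
@container style(--depth: 0) { .level { --depth: 1; } }
@container style(--depth: 1) { .level { --depth: 2; } }
@container style(--depth: 2) { .level { --depth: 3; } }
```

The @container rules must come after the base rule so they win the cascade at equal specificity; each nesting level then gets a different hue without a static list of per-level values.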

Hanging Punctuation In CSS

hanging-punctuation is a neat little CSS property. It lets punctuation marks such as opening quotes hang outside the text box, creating nice, clean blocks of text. And while it’s currently only supported in Safari, it doesn’t hurt to include it in your code, as the property is a perfect example of progressive enhancement: it leaves things as they are in browsers that don’t support it and adds the extra bit of polish in browsers that do.

Jeremy Keith noticed an unintended side-effect of hanging-punctuation, though. When you apply it globally, it’s also applied to form fields. So, if the text in a form field starts with a quotation mark or some other piece of punctuation, it’s pushed outside the field and hidden. Jeremy shares a fix for it: Add input, textarea { hanging-punctuation: none; } to prevent your quotation marks from disappearing. A small tip that can save you a lot of headaches.
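Put together, the global opt-in plus the form-field opt-out could look like this (the particular value on html is one common choice, not the only one):

```css
/* Hang opening punctuation (and allow hanging end punctuation) site-wide… */
html {
  hanging-punctuation: first allow-end last;
}

/* …but opt form fields back out so leading quotes aren't pushed
   outside the field and hidden. */
input,
textarea {
  hanging-punctuation: none;
}
```

Browsers without support simply ignore both rules, which is what makes this safe to ship today.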

Fixing aspect-ratio Issues

The aspect-ratio property shines in fluid environments. It can handle anything from inserting a square-shaped <div> to matching the 16:9 size of a <video>, without you thinking in exact dimensions. And most of the time, it does so flawlessly. However, there are some things that can break aspect-ratio. Chris Coyier takes a closer look at three reasons why your aspect-ratio might not work as expected.

As Chris explains, one potential breakage is setting both dimensions — which might seem obvious, but it can be confusing if one of the dimensions is set from somewhere you didn’t expect. Stretching and content that forces height can also lead to unexpected results. A great overview of what to look out for when aspect-ratio breaks.
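As a quick illustration of that first breakage (class names ours):

```css
/* With one dimension set, aspect-ratio fills in the other… */
.square {
  aspect-ratio: 1;
  width: 200px;   /* height resolves to 200px */
}

/* …but with both dimensions set, the ratio is simply ignored. */
.broken {
  aspect-ratio: 1;
  width: 200px;
  height: 100px;  /* element renders 200 × 100 */
}
```

The second case is easy to hit by accident when the height comes from an inherited style or a utility class you didn’t expect.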

Masonry Layout With CSS

CSS Grid has taken layouts on the web to the next level. However, as powerful as CSS is today, not every layout that can be imagined is feasible. Masonry layout is one of those things that can’t be accomplished with CSS alone. To change that, the CSS Working Group is asking for your help.

There are currently two approaches in discussion at the CSS Working Group about how CSS should handle masonry-style layouts — and they are asking for insights from real-world developers and designers to find the best solution.

The first approach would expand CSS Grid to include masonry, and the second approach would be to introduce a masonry layout as a display: masonry display type. Jen Simmons summarized what you need to know about the ongoing debate and how you can contribute your thoughts on which direction CSS should take.

Before you come to a conclusion, also be sure to read Rachel Andrew’s post on the topic. She explains why the Chrome team has concerns about implementing a masonry layout as a part of the CSS Grid specification and clarifies what the alternate proposal enables.
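In rough terms, and with syntax that is still being debated (especially the property names in the second proposal, which we show only to convey the shape of the idea), the two approaches look something like this:

```css
/* Approach 1: masonry as an extension of CSS Grid. */
.wall {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(12rem, 1fr));
  grid-template-rows: masonry;
}

/* Approach 2: a dedicated masonry display type. */
.wall {
  display: masonry;
  masonry-template-tracks: repeat(auto-fill, minmax(12rem, 1fr));
}
```

Neither syntax is settled, which is precisely why the CSS Working Group is asking for real-world feedback.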

Boost Your CSS Skills

If you’d like to dive deeper into CSS, we’ve got your back — with a few friendly events and SmashingConfs coming up this year:

We’d be absolutely delighted to welcome you to one of our special Smashing experiences — be it online or in person!

Smashing Weekly Newsletter

With our weekly newsletter, we aim to bring you useful, practical tidbits and share some of the helpful things that folks are working on in the web industry. There are so many talented folks out there working on brilliant projects, and we’d appreciate it if you could help spread the word and give them the credit they deserve!

Also, by subscribing, there are no third-party mailings or hidden advertising, and your support really helps us pay the bills. ❤️

Interested in sponsoring? Feel free to check out our partnership options and get in touch with the team anytime — they’ll be sure to get back to you as soon as they can.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[Presenting UX Research And Design To Stakeholders: The Power Of Persuasion]]> https://smashingmagazine.com/2024/06/presenting-ux-research-design-stakeholders/ https://smashingmagazine.com/2024/06/presenting-ux-research-design-stakeholders/ Wed, 05 Jun 2024 15:00:00 GMT For UX researchers and designers, our journey doesn’t end with meticulously gathered data or well-crafted design concepts saved on our laptops or in the cloud. Our true impact lies in effectively communicating research findings and design concepts to key stakeholders and securing their buy-in for implementing our user-centered solutions. This is where persuasion and communication theory become powerful tools, empowering UX practitioners to bridge the gap between research and action.

I shared a framework for conducting UX research in my previous article on infusing communication theory and UX. In this article, I’ll focus on communication and persuasion considerations for presenting our research and design concepts to key stakeholder groups.

A Word On Persuasion: Guiding Understanding, Not Manipulation

UX professionals can strategically use persuasion techniques to turn complex research results into clear, practical recommendations that stakeholders can understand and act on. It’s crucial to remember that persuasion is about helping people understand what to do, not tricking them. When stakeholders see the value of designing with the user in mind, they become strong partners in creating products and services that truly meet user needs. We’re not trying to manipulate anyone; we’re trying to make sure our ideas get the attention they deserve in a busy world.

The Hovland-Yale Model Of Persuasion

The Hovland-Yale model, a framework for understanding how persuasion works, was developed by Carl Hovland and his team at Yale University in the 1950s. Their research was inspired by World War II propaganda, as they wanted to figure out what made some messages more convincing than others.

In the Hovland-Yale model, persuasion is understood as a process involving the independent variables of Source, Message, and Audience. These variables trigger internal mediating processes in the audience around the topic, which, if the independent variables are strong enough, can strengthen or change attitudes or behaviors. The interplay of those internal mediating processes determines whether persuasion occurs, which then shows up (or not) as the observable effect of the communication. The model proposes that if these elements are carefully crafted and applied, the intended change in attitude or behavior (the Effect) is more likely to be successful.

The diagram below helps identify the parts of persuasive communication. It shows what you can control as a presenter, how people think about the message and the impact it has. If done well, it can lead to change. I’ll focus exclusively on the independent variables in the far left side of the diagram in this article because, theoretically, this is what you, as the outside source creating a persuasive message, are in control of and, if done well, would lead to the appropriate mediating processes and desired observable effects.

Effective communication can reinforce currently held positions. You don’t always need to change minds when presenting research; much of what we find and present might align with currently held beliefs and support actions our stakeholders are already considering.

Over the years, researchers have explored the usefulness and limitations of this model in various contexts. I’ve provided a list of citations at the end of this article if you are interested in exploring the academic literature on the Hovland-Yale model. Reflecting on some of the research findings can help shape how we create and deliver our persuasive communication. Some consistent findings from academia highlight that:

  • Source credibility significantly influences the acceptance of a persuasive message. A high-credibility source is more persuasive than a low-credibility one.
  • Messages that are logically structured, clear, and relatively concise are more likely to be persuasive.
  • An audience’s attitude change is also dependent on the channel of communication. Mass media is found to be less effective in changing attitudes than face-to-face communication.
  • The audience’s initial attitude, intelligence, and self-esteem have a significant role in the persuasion process. Research suggests that individuals with high intelligence are typically more resistant to persuasion efforts, and those with moderate self-esteem are easier to persuade than those with low or high self-esteem.
  • The effect of persuasive messages tends to fade over time, especially if delivered by a non-credible source. This suggests a need to reinforce even effective messages on a regular basis to maintain an effect.

I’ll cover the impact of each of these bullets on UX research and design presentations in the relevant sections below.

It’s important to note that while the Hovland-Yale model provides valuable insight into persuasive communication, it remains a simplification of a complex process. Actual attitude change and decision-making can be influenced by a multitude of other factors not covered in this model, like emotional states, group dynamics, and more, necessitating a multi-faceted approach to persuasion. However, the model provides a manageable framework to strengthen the communication of UX research findings, with a focus on elements that are within the control of the researcher and product team. I’ll break down the process of presenting findings to various audiences in the following section.

Let’s move into applying the models to our work as UX practitioners with a focus on how the model applies to how we prepare and present our findings to various stakeholders. You can reference the diagram above as needed as we move through the Independent variables.

Applying The Hovland-Yale Model To Presenting Your UX Research Findings

Let’s break down the key parts of the Hovland-Yale model and see how we can use them when presenting our UX research and design ideas.

Source

The Hovland-Yale model stresses that where a message comes from greatly affects how believable and effective it is. Research shows that a convincing source needs to be seen as dependable, informed, and trustworthy. In UX research, this source is usually the researcher(s) and other UX team members who present findings, suggest actions, lead workshops, and share design ideas. It’s crucial for the UX team to build trust with their audience, which often includes users, stakeholders, and designers.

You can demonstrate and strengthen your credibility throughout the research process and once again when presenting your findings.

How Can You Make Yourself More Credible?

You should start building your expertise and credibility before you even finish your research. Often, stakeholders will have already formed an opinion about your work before you even walk into the room. Here are a couple of ways to boost your reputation before or at the beginning of a project:

Case Studies

A well-written case study about your past work can be a great way to show stakeholders the benefits of user-centered design. Make sure your case studies match what your stakeholders care about. Don’t just tell an interesting story; tell a story that matters to them. Understand their priorities and tailor your case study to show how your UX work has helped achieve goals like higher ROI, happier customers, or lower turnover. Share these case studies as a document before the project starts so stakeholders can review them and get a positive impression of your work.

Thought Leadership

Sharing insights and expertise that your UX team has developed is another way to build credibility. This kind of “thought leadership” can establish your team as the experts in your field. It can take many forms, like blog posts, articles in industry publications, white papers, presentations, podcasts, or videos. You can share this content on your website, social media, or directly with stakeholders.

For example, if you’re about to start a project on gathering customer feedback, share any relevant articles or guides your team has created with your stakeholders before the project kickoff. If you are about to start developing a voice of the customer program and you happen to have Victor or Dana on your team, share their article on creating a VoC with your group of stakeholders prior to the kickoff meeting. [Shameless self-promotion and a big smile emoji]

You can also build credibility and trust while discussing your research and design, both during the project and when you present your final results.

Business Goals Alignment

To really connect with stakeholders, make sure your UX goals and the company’s business goals work together. Always tie your research findings and design ideas back to the bigger picture. This means showing how your work can affect things like customer happiness, more sales, lower costs, or other important business measures. You can even work with stakeholders to figure out which measures matter most to them. When you present your designs, point out how they’ll help the company reach its goals through good UX.

Industry Benchmarks

These days, it’s easier to find data on how other companies in your industry are doing. Use this to your advantage! Compare your findings to these benchmarks or even to your competitors. This can help stakeholders feel more confident in your work. Show them how your research fits in with industry trends or how it uncovers new ways to stand out. When you talk about your designs, highlight how you’ve used industry best practices or made changes based on what you’ve learned from users.

Methodological Transparency

Be open and honest about how you did your research. This shows you know what you’re doing and that you can be trusted. For example, if you were looking into why fewer people are renewing their subscriptions to a fitness app, explain how you planned your research, who you talked to, how you analyzed the data, and any challenges you faced. This transparency helps people accept your research results and builds trust.

Increasing Credibility Through Design Concepts

Here are some specific ways to make your design concepts more believable and trustworthy to stakeholders:

Ground Yourself in Research. You’ve done the research, so use it! Make sure your design decisions are based on your findings and user data. When you present, highlight the data that supports your choices.

Go Beyond Mockups. It’s helpful for stakeholders to see your designs in action. Static mockups are a good start, but try creating interactive prototypes that show how users will move through and use your design. This is especially important if you’re creating something new that stakeholders might have trouble visualizing.

User Quotes and Testimonials. Include quotes or stories from users in your presentation. This makes the process more personal and shows that you’re focused on user needs. You can use these quotes to explain specific design choices.

Before & After Impact. Use visuals or user journey maps to show how your design solution improves the user experience. If you’ve mapped out the current user journey or documented existing problems, show how your new design fixes those problems.

Explain Your Rationale. Don’t leave stakeholders guessing about your design choices. Briefly explain why you made key decisions and how they help users or achieve business goals. You should have research and stakeholder input to back up your decisions.

Show Your Process. When presenting a more developed concept, show the work that led up to it. Don’t just share the final product. Include early sketches, wireframes, or simple prototypes to show how the design evolved and the reasoning behind your choices. This is especially helpful for executives or stakeholders who haven’t been involved in the whole process.

Be Open to Feedback and Iteration. Work together with stakeholders. Show that you’re open to their feedback and explain how their input can help you improve your designs.

Much of what I’ve covered above are also general best practices for presenting. Remember, these are just suggestions. You don’t have to use every single one to make your presentations more persuasive. Try different things, see what works best for you and your stakeholders, and have fun with it! The goal is to build trust and credibility for you and your UX team.

Message

The Hovland-Yale model, along with most other communication models, suggests that what you communicate is just as important as how you communicate it. In UX research, your message is usually your insights, data analysis, findings, and recommendations.

I’ve touched on this in the previous section because it’s hard to separate the source (who’s talking) from the message (what they’re saying). For example, building trust involves being transparent about your research methods, which is part of your message. So, some of what I’m about to say might sound familiar.

For this article, let’s define the message as your research findings and everything that goes with them (e.g., what you say in your presentation, the slides you use, other media), as well as your design concepts (how you show your design solutions, including drawings, wireframes, prototypes, and so on).

The Hovland-Yale model says it’s important to make your message easy to understand, relevant, and impactful. For example, instead of just saying,

“30% of users found the signup process difficult.”

you could say,

“30% of users struggled to sign up because the process was too complicated. This could lead to fewer renewals. Making the signup process easier could increase renewals and improve the overall experience.”

Storytelling is also a powerful way to get your message across. Weaving your findings into a narrative helps people connect with your data on a human level and remember your key points. Using real quotes or stories from users makes your presentation even more compelling.

Here are some other tips for delivering a persuasive message:

  • Practice Makes Perfect
    Rehearse your presentation. This will help you smooth out any rough spots, anticipate questions, and feel more confident.
  • Anticipate Concerns
    Think about any objections stakeholders might have and be ready to address them with data.
  • Welcome Feedback
    Encourage open discussion during your presentation. Listen to what stakeholders have to say and show that you’re willing to adapt your recommendations based on their concerns. This builds trust and makes everyone feel like they’re part of the process.
  • Follow Through is Key
    After your presentation, send a clear summary of the main points and action items. This shows you’re professional and makes it easy for stakeholders to refer back to your findings.

When presenting design concepts, it’s important to tell, not just show, what you’re proposing. Stakeholders might not have a deep understanding of UX, so just showing them screenshots might not be enough. Use user stories to walk them through the redesigned experience. This helps them understand how users will interact with your design and what benefits it will bring. Static screens show the “what,” but user stories reveal the “why” and “how.” By focusing on the user journey, you can demonstrate how your design solves problems and improves the overall experience.

For example, if you’re suggesting changes to the search bar and adding tooltips, you could say:

“Imagine a user lands on the homepage and sees the new, larger search bar. They enter their search term and get results. If they see an unfamiliar tool or a new action, they can hover over it to see a brief description.”

Here are some other ways to make your design concepts clearer and more persuasive:

  • Clear Design Language
    Use a consistent and visually appealing design language in your mockups and prototypes. This shows professionalism and attention to detail.
  • Accessibility Best Practices
    Make sure your design is accessible to everyone. This shows that you care about inclusivity and user-centered design.

One final note on the message is that research has found the likelihood of an audience’s attitude change is also dependent on the channel of communication. Mass media is found to be less effective in changing attitudes than face-to-face communication. Distributed teams and remote employees can employ several strategies to compensate for any potential impact reduction of asynchronous communication:

  • Interactive Elements
    Incorporate interactive elements into presentations, such as polls, quizzes, or clickable prototypes. This can increase engagement and make the experience more dynamic for remote viewers.
  • Video Summaries
    Create short video summaries of key findings and recommendations. This adds a personal touch and can help convey nuances that might be lost in text or static slides.
  • Virtual Q&A Sessions
    Schedule dedicated virtual Q&A sessions where stakeholders can ask questions and engage in discussions. This allows for real-time interaction and clarification, mimicking the benefits of face-to-face communication.
  • Follow-up Communication
    Actively follow up with stakeholders after they’ve reviewed the materials. Offer to discuss the content, answer questions, and gather feedback. This demonstrates a commitment to communication and can help solidify key takeaways.

Framing Your Message for Maximum Impact

The way you frame an issue can greatly influence how stakeholders see it. Framing is a persuasion technique that can help your message resonate more deeply with specific stakeholders. Essentially, you want to frame your message in a way that aligns with your stakeholders’ attitudes and values and presents your solution as the next logical step. There are many resources on how to frame messages, as this technique has been used often in public safety and public health research to encourage behavior change. This article discusses applying framing techniques for digital design.

You can also frame issues in a way that motivates your stakeholders. For example, instead of calling usability issues “problems,” I like to call them “opportunities.” This emphasizes the potential for improvement. Let’s say your research on a hospital website finds that the appointment booking process is confusing. You could frame this as an opportunity to improve patient satisfaction and maybe even reduce call center volume by creating a simpler online booking system. This way, your solution is a win-win for both patients and the hospital. Highlighting the positive outcomes of your proposed changes and using language that focuses on business benefits and user satisfaction can make a big difference.

Audience

Understanding your audience’s goals is essential before embarking on any research or design project. It serves as the foundation for tailoring content, supporting decision-making processes, ensuring clarity and focus, enhancing communication effectiveness, and establishing metrics for evaluation.

One specific aspect to consider is securing buy-in from the product and delivery teams prior to beginning any research or design. Without their investment in the outcomes and input on the process, it can be challenging to find stakeholders who see value in a project you created in a vacuum. Engaging with these teams early on helps align expectations, foster collaboration, and ensure that the research and design efforts are informed by the organization’s objectives.

Once you’ve identified your key stakeholders and secured buy-in, you should then map the decision-making process your audience goes through, including the pain points, considerations, and influencing factors.

  • How are decisions made, and who makes them?
  • Is it group consensus?
  • Are there key voices that overrule all others?
  • Is there even a decision to be made in regard to the work you will do?

Understanding the decision-making process will enable you to provide the necessary information and support at each stage.

Finally, prior to engaging in any work, set clear objectives with your key stakeholders. Your UX team needs to collaborate with the product and delivery teams to establish clear objectives for the research or design project. These objectives should align with the organization’s goals and the audience’s needs.

By understanding your audience’s goals and involving the product and delivery teams from the outset, you can create research and design outcomes that are relevant, impactful, and aligned with the organization’s objectives.

As the source of your message, it’s your job to understand who you’re talking to and how they see the issue. Different stakeholders have different interests, goals, and levels of knowledge. It’s important to tailor your communication to each of these perspectives. Adjust your language, what you emphasize, and the complexity of your message to suit your audience. Technical jargon might be fine for technical stakeholders, but it could alienate those without a technical background.

Audience Characteristics: Know Your Stakeholders

Remember, your audience’s existing opinions, intelligence, and self-esteem play a big role in how persuasive you can be. Research suggests that people with higher intelligence tend to be more resistant to persuasion, while those with moderate self-esteem are easier to persuade than those with very low or very high self-esteem. Understanding your audience is key to giving a persuasive presentation of your UX research and design concepts. Tailoring your communication to address the specific concerns and interests of your stakeholders can significantly increase the impact of your findings.

To truly know your audience, you need information about who you’ll be presenting to, and the more you know, the better. At the very least, you should identify the different groups of stakeholders in your audience. This could include designers, developers, product managers, and executives. If possible, try to learn more about your key stakeholders. You could interview them at the beginning of your process, or you could give them a short survey to gauge their attitudes and behaviors toward the area your UX team is exploring.

Then, your UX team needs to decide the following:

  • How can you best keep all stakeholders engaged and informed as the project unfolds?
  • How will your presentation or concepts appeal to different interests and roles?
  • How can you best encourage discussion and decision-making with the different stakeholders present?
  • Should you hold separate presentations because of the wide range of stakeholders you need to share your findings with?
  • How will you prioritize information?

Your answers to the previous questions will help you focus on what matters most to each stakeholder group. For example, designers might be more interested in usability issues, while executives might care more about the business impact. If you’re presenting to a mixed audience, include a mix of information and be ready to highlight what’s relevant to each group in a way that grabs their attention. Adapt your communication style to match each group’s preferences. Provide technical details for developers and emphasize user experience benefits for executives.

Example

Let’s say you did UX research for a mobile banking app, and your audience includes designers, developers, and product managers.

Designers:

  • Focus on: Design-related findings like what users prefer in the interface, navigation problems, and suggestions for the visual design.
  • How to communicate: Use visuals like heatmaps and user journey maps to show design challenges. Talk about how fixing these issues can make the overall user experience better.

Developers:

  • Focus on: Technical stuff, like performance problems, bugs, or challenges with building the app.
  • How to communicate: Share code snippets or technical details about the problems you found. Discuss possible solutions that the developers can actually build. Be realistic about how much work it will take and be ready to talk about a “minimum viable product” (MVP).

Product Managers:

  • Focus on: Findings that affect how users engage with the app, how long they keep using it, and the overall business goals.
  • How to communicate: Use numbers and data to show how UX improvements can help the business. Explain how the research and your ideas fit into the product roadmap and long-term strategy.

By tailoring your presentation to each group, you make sure your message really hits home. This makes it more likely that they’ll support your UX research findings and work together to make decisions.

The Effect (Impact)

The end goal of presenting your findings and design concepts is to get key stakeholders to take action based on what you learned from users. Make sure the impact of your research is crystal clear. Talk about how your findings relate to business goals, customer happiness, and market success (if those are relevant to your product). Suggest clear, actionable next steps in the form of design concepts and encourage feedback and collaboration from stakeholders. This builds excitement and gets people invested. Make sure to answer any questions and ask for more feedback to show that you value their input. Remember, stakeholders play a big role in the product’s future, so getting them involved increases the value of your research.

The Call to Action (CTA)

Your audience needs to know what you want them to do. End your presentation with a strong call to action (CTA). But to do this well, you need to be clear on what you want them to do and understand any limitations they might have.

For example, if you’re presenting to the CEO, tailor your CTA to their priorities. Focus on the return on investment (ROI) of user-centered design. Show how your recommendations can increase sales, improve customer satisfaction, or give the company a competitive edge. Use clear visuals and explain how user needs translate into business benefits. End with a strong, action-oriented statement, like

“Let’s set up a meeting to discuss how we can implement these user-centered design recommendations to reach your strategic goals.”

If you’re presenting to product managers and business unit leaders, focus on the business goals they care about, like increasing revenue or reducing customer churn. Explain your research findings in terms of ROI. For example, a strong CTA could be:

“Let’s try out the redesigned checkout process and aim for a 10% increase in conversion rates next quarter.”

Remember, the effects of persuasive messages can fade over time, especially if the source isn’t seen as credible. This means you need to keep reinforcing your message to maintain its impact.

Understanding Limitations and Addressing Concerns

Persuasion is about guiding understanding, not tricking people. Be upfront about any limitations your audience might have, like budget constraints or limited development resources. Anticipate their concerns and address them in your CTA. For example, you could say,

“I know implementing the entire redesign might need more resources, so let’s prioritize the high-impact changes we found in our research to improve the checkout process within our current budget.”

By considering both your desired outcome and your audience’s perspective, you can create a clear, compelling, and actionable CTA that resonates with stakeholders and drives user-centered design decisions.

Finally, remember that presenting your research findings and design concepts isn’t the end of the road. The effects of persuasive messages can fade over time. Your team should keep looking for ways to reinforce key messages and decisions as you move forward with implementing solutions. Keep your presentations and concepts in a shared folder, remind people of the reasoning behind decisions, and be flexible if there are multiple ways to achieve the desired outcome. Showing how you’ve addressed stakeholder goals and concerns in your solution will go a long way in maintaining credibility and trust for future projects.

A Tool to Track Your Alignment to the Hovland-Yale Model

You and your UX team are likely already incorporating elements of persuasion into your work. It might be helpful to track how you are doing this to reflect on what works, what doesn’t, and where there are gaps. I’ve provided a spreadsheet in Figure 3 below for you to modify and use as you might see fit. I’ve included sample data to provide an example of what type of information you might want to record. You can set up the structure of a spreadsheet like this as you think about kicking off your next project, or you can fill it in with information from a recently completed project and reflect on what you can incorporate more in the future.

Please treat the spreadsheet below as a suggestion and make additions, deletions, or changes to best meet your needs. You don’t need to be dogmatic in adhering to what I’ve covered here. Experiment, find what works best for you, and have fun.

| Project Phase | Persuasion Element | Topic | Description | Example | Notes/Reflection |
| --- | --- | --- | --- | --- | --- |
| Pre-Presentation | Audience | Stakeholder Group | Identify the specific audience segment (e.g., executives, product managers, marketing team) | Executives | |
| Pre-Presentation | Message | Message Objectives | What specific goals do you aim to achieve with each group? (e.g., garner funding, secure buy-in for specific features) | Secure funding for continued app redesign | |
| Pre-Presentation | Source | Source Credibility | How will you establish your expertise and trustworthiness to each group? (e.g., past projects, relevant data) | Highlighted successful previous UX research projects & strong user data analysis skills | |
| Pre-Presentation | Message | Message Clarity & Relevance | Tailor your presentation language and content to resonate with each audience’s interests and knowledge level | Presented a concise summary of key findings with a focus on potential ROI and revenue growth for executives | |
| Presentation & Feedback | Source | Attention Techniques | How did you grab each group’s interest? (e.g., visuals, personal anecdotes, surprising data) | Opened presentation with a dramatic statistic about mobile banking app usage | |
| Presentation & Feedback | Message | Comprehension Strategies | Did you ensure understanding of key information? (e.g., analogies, visuals, Q&A) | Used relatable real-world examples and interactive charts to explain user research findings | |
| Presentation & Feedback | Message | Emotional Appeals | Did you evoke relevant emotions to motivate action? (e.g., fear of missing out, excitement for potential) | Highlighted potential revenue growth and improved customer satisfaction with app redesign | |
| Presentation & Feedback | Message | Retention & Application | What steps did you take to solidify key takeaways and encourage action? (e.g., clear call to action, follow-up materials) | Ended with a concise call to action for funding approval and provided detailed research reports for further reference | |
| Presentation & Feedback | Audience | Stakeholder Feedback | Record their reactions, questions, and feedback during and after the presentation | Executives impressed with user insights, product managers requested specific data breakdowns | |
| Analysis & Reflection | Effect | Effective Strategies & Outcomes | Identify techniques that worked well and their impact on each group | Executives responded well to the emphasis on business impact, leading to conditional funding approval | |
| Analysis & Reflection | Feedback | Improvements for Future Presentations | Note areas for improvement in tailoring messages and engaging each stakeholder group | Consider incorporating more interactive elements for product managers and diversifying data visualizations for wider appeal | |
| Analysis & Reflection | Analysis | Quantitative Metrics | Track changes in stakeholder attitudes | Conducted a follow-up survey to measure stakeholder agreement with design recommendations before and after the presentation | Assess effectiveness of the presentation |

Figure 3: Example of spreadsheet categories to track the application of the Hovland-Yale model to your presentation of UX Research findings.

References

Foundational Works

  • Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Communication and persuasion. New Haven, CT: Yale University Press. (The cornerstone text on the Hovland-Yale model).
  • Weiner, B. J., & Hovland, C. I. (1956). Participating vs. nonparticipating persuasive presentations: A further study of the effects of audience participation. Journal of Abnormal and Social Psychology, 52(2), 105-110. (Examines the impact of audience participation in persuasive communication).
  • Kelley, H. H., & Hovland, C. I. (1958). The communication of persuasive content. Psychological Review, 65(4), 314-320. (Delves into the communication of persuasive messages and their effects).

Contemporary Applications

  • Pfau, M., & Dalton, M. J. (2008). The persuasive effects of fear appeals and positive emotion appeals on risky sexual behavior intentions. Journal of Communication, 58(2), 244-265. (Applies the Hovland-Yale model to study the effectiveness of fear appeals).
  • Chen, G., & Sun, J. (2010). The effects of source credibility and message framing on consumer online health information seeking. Journal of Interactive Advertising, 10(2), 75-88. (Analyzes the impact of source credibility and message framing, concepts within the model, on health information seeking).
  • Hornik, R., & McHale, J. L. (2009). The persuasive effects of emotional appeals: A meta-analysis of research on advertising emotions and consumer behavior. Journal of Consumer Psychology, 19(3), 394-403. (Analyzes the role of emotions in persuasion, a key aspect of the model, in advertising).
]]>
hello@smashingmagazine.com (Victor Yocco)
<![CDATA[Scaling Success: Key Insights And Practical Takeaways]]> https://smashingmagazine.com/2024/06/scaling-success-key-insights-pratical-takeaways/ https://smashingmagazine.com/2024/06/scaling-success-key-insights-pratical-takeaways/ Tue, 04 Jun 2024 12:00:00 GMT Building successful web products at scale is a multifaceted challenge that demands a combination of technical expertise, strategic decision-making, and a growth-oriented mindset. In Success at Scale, I dive into case studies from some of the web’s most renowned products, uncovering the strategies and philosophies that propelled them to the forefront of their industries.

Here you will find some of the insights I’ve gleaned from these success stories, part of an ongoing effort to build a roadmap for teams striving to achieve scalable success in the ever-evolving digital landscape.

Cultivating A Mindset For Scaling Success

The foundation of scaling success lies in fostering the right mindset within your team. The case studies in Success at Scale highlight several critical mindsets that permeate the culture of successful organizations.

User-Centricity

Successful teams prioritize the user experience above all else.

They invest in understanding their users’ needs, behaviors, and pain points and relentlessly strive to deliver value. Instagram’s performance optimization journey exemplifies this mindset, focusing on improving perceived speed and reducing user frustration, leading to significant gains in engagement and retention.

By placing the user at the center of every decision, Instagram was able to identify and prioritize the most impactful optimizations, such as preloading critical resources and leveraging adaptive loading strategies. This user-centric approach allowed them to deliver a seamless and delightful experience to their vast user base, even as their platform grew in complexity.

Data-Driven Decision Making

Scaling success relies on data, not assumptions.

Teams must embrace a data-driven approach, leveraging metrics and analytics to guide their decisions and measure impact. Shopify’s UI performance improvements showcase the power of data-driven optimization, using detailed profiling and user data to prioritize efforts and drive meaningful results.

By analyzing user interactions, identifying performance bottlenecks, and continuously monitoring key metrics, Shopify was able to make informed decisions that directly improved the user experience. This data-driven mindset allowed them to allocate resources effectively, focusing on the areas that yielded the greatest impact on performance and user satisfaction.

Continuous Improvement

Scaling is an ongoing process, not a one-time achievement.

Successful teams foster a culture of continuous improvement, constantly seeking opportunities to optimize and refine their products. Smashing Magazine’s case study on enhancing Core Web Vitals demonstrates the impact of iterative enhancements, leading to significant performance gains and improved user satisfaction.

By regularly assessing their performance metrics, identifying areas for improvement, and implementing incremental optimizations, Smashing Magazine was able to continuously elevate the user experience. This mindset of continuous improvement ensures that the product remains fast, reliable, and responsive to user needs, even as it scales in complexity and user base.

Collaboration And Inclusivity

Silos hinder scalability.

High-performing teams promote collaboration and inclusivity, ensuring that diverse perspectives are valued and leveraged. The Understood’s accessibility journey highlights the power of cross-functional collaboration, with designers, developers, and accessibility experts working together to create inclusive experiences for all users.

By fostering open communication, knowledge sharing, and a shared commitment to accessibility, The Understood was able to embed inclusive design practices throughout its development process. This collaborative and inclusive approach not only resulted in a more accessible product but also cultivated a culture of empathy and user-centricity that permeated all aspects of their work.

Making Strategic Decisions for Scalability

Beyond cultivating the right mindset, scaling success requires making strategic decisions that lay the foundation for sustainable growth.

Technology Choices

Selecting the right technologies and frameworks can significantly impact scalability. Factors like performance, maintainability, and developer experience should be carefully considered. Notion’s migration to Next.js exemplifies the importance of choosing a technology stack that aligns with long-term scalability goals.

By adopting Next.js, Notion was able to leverage its performance optimizations, such as server-side rendering and efficient code splitting, to deliver fast and responsive pages. Additionally, the developer-friendly ecosystem of Next.js and its strong community support enabled Notion’s team to focus on building features and optimizing the user experience rather than grappling with low-level infrastructure concerns. This strategic technology choice laid the foundation for Notion’s scalable and maintainable architecture.

Ship Only The Code A User Needs, When They Need It

This best practice is essential for keeping pages fast: don’t eagerly deliver JavaScript a user may not need at that moment. For example, Instagram made a concerted effort to improve the web performance of instagram.com, resulting in a nearly 50% cumulative improvement in feed page load time. A key area of focus has been shipping less JavaScript code to users, particularly on the critical rendering path.

The Instagram team found that the uncompressed size of JavaScript is more important for performance than the compressed size, as larger uncompressed bundles take more time to parse and execute on the client, especially on mobile devices. Two optimizations they implemented to reduce JS parse/execute time were inline requires (only executing code when it’s first used vs. eagerly on initial load) and serving ES2017+ code to modern browsers to avoid transpilation overhead. Inline requires improved Time-to-Interactive metrics by 12%, and the ES2017+ bundle was 5.7% smaller and 3% faster than the transpiled version.
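The inline requires pattern is easy to sketch in plain JavaScript: instead of evaluating a module eagerly at startup, the first call site that needs it pays the cost. This is an illustrative sketch, not Instagram’s code; the module name and `render` function are invented.

```javascript
// "Inline requires" sketch: defer a module's evaluation until first use.
let heavyChartModule; // cache so initialization happens at most once

function getChartModule() {
  if (!heavyChartModule) {
    // In a real bundle this line would be require('./charts'); here the
    // expensive module is simulated with a plain object.
    heavyChartModule = { render: (data) => `rendered ${data.length} points` };
  }
  return heavyChartModule;
}

// Nothing chart-related executes at startup; the parse/execute cost is
// paid only when a user actually opens a chart.
function onUserOpensChart(data) {
  return getChartModule().render(data);
}
```

The same trade-off scales up to whole bundles: dynamic `import()` and bundler code splitting move entire chunks off the critical path in a similar way.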

While good progress has been made, the Instagram team acknowledges there are still many opportunities for further optimization. Potential areas to explore could include the following:

  • Improved code-splitting, moving more logic off the critical path,
  • Optimizing scrolling performance,
  • Adapting to varying network conditions,
  • Modularizing their Redux state management.

Continued efforts will be needed to keep instagram.com performing well as new features are added and the product grows in complexity.

Accessibility Integration

Accessibility should be an integral part of the product development process, not an afterthought.

Wix’s comprehensive approach to accessibility, encompassing keyboard navigation, screen reader support, and infrastructure for future development, showcases the importance of building inclusivity into the product’s core.

By considering accessibility requirements from the initial design stages and involving accessibility experts throughout the development process, Wix was able to create a platform that empowered its users to build accessible websites. This holistic approach to accessibility not only benefited end-users but also positioned Wix as a leader in inclusive web design, attracting a wider user base and fostering a culture of empathy and inclusivity within the organization.

Developer Experience Investment

Investing in a positive developer experience is essential for attracting and retaining talent, fostering productivity, and accelerating development.

Apideck’s case study in the book highlights the impact of a great developer experience on community building and product velocity.

By providing well-documented APIs, intuitive SDKs, and comprehensive developer resources, Apideck was able to cultivate a thriving developer community. This investment in developer experience not only made it easier for developers to integrate with Apideck’s platform but also fostered a sense of collaboration and knowledge sharing within the community. As a result, Apideck was able to accelerate product development, leverage community contributions, and continuously improve its offering based on developer feedback.

Leveraging Performance Optimization Techniques

Achieving optimal performance is a critical aspect of scaling success. The case studies in Success at Scale showcase various performance optimization techniques that have proven effective.

Progressive Enhancement and Graceful Degradation

Building resilient web experiences that perform well across a range of devices and network conditions requires a progressive enhancement approach. Pinafore’s case study in Success at Scale highlights the benefits of ensuring core functionality remains accessible even in low-bandwidth or JavaScript-constrained environments.

By leveraging server-side rendering and delivering a usable experience even when JavaScript fails to load, Pinafore demonstrates the importance of progressive enhancement. This approach not only improves performance and resilience but also ensures that the application remains accessible to a wider range of users, including those with older devices or limited connectivity. By gracefully degrading functionality in constrained environments, Pinafore provides a reliable and inclusive experience for all users.
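A minimal sketch of that idea: the server renders complete, functional HTML, and client JavaScript (if it ever loads) only enhances it. The renderer below is illustrative and assumes nothing about Pinafore’s actual code; the links it produces work with no script at all.

```javascript
// Progressive enhancement sketch: the server emits a fully usable
// timeline. Plain links navigate without any client JavaScript; a
// client script may later hydrate this markup to add live updates.
function renderTimeline(posts) {
  const items = posts
    .map((post) => `<li><a href="/status/${post.id}">${post.text}</a></li>`)
    .join('');
  return `<ul class="timeline">${items}</ul>`;
}
```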

Adaptive Loading Strategies

The book’s case study on Tinder highlights the power of sophisticated adaptive loading strategies. By dynamically adjusting the content and resources delivered based on the user’s device capabilities and network conditions, Tinder ensures a seamless experience across a wide range of devices and connectivity scenarios. Tinder’s adaptive loading approach involves techniques like dynamic code splitting, conditional resource loading, and real-time network quality detection. This allows the application to optimize the delivery of critical resources, prioritize essential content, and minimize the impact of poor network conditions on the user experience.

By adapting to the user’s context, Tinder delivers a fast and responsive experience, even in challenging environments.
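One way to sketch such a decision in code, assuming a connection object shaped like the browser’s Network Information API (`navigator.connection`, whose `effectiveType` and `saveData` fields are real); the thresholds and variant names are invented, not Tinder’s actual policy:

```javascript
// Adaptive loading sketch: pick an asset variant from network and
// device signals. effectiveType/saveData mirror the Network
// Information API; the policy below is illustrative.
function chooseImageVariant(connection, deviceMemoryGb) {
  const slowNetwork =
    connection.saveData ||
    ['slow-2g', '2g'].includes(connection.effectiveType);
  const constrainedDevice =
    deviceMemoryGb !== undefined && deviceMemoryGb <= 2;

  if (slowNetwork || constrainedDevice) return 'low-res';
  if (connection.effectiveType === '3g') return 'medium-res';
  return 'high-res'; // 4g and better on capable devices
}
```

In a browser this would be fed from `navigator.connection` and `navigator.deviceMemory`, neither of which is available everywhere, so production code needs a sensible default when the signals are absent.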

Efficient Resource Management

Effective management of resources, such as images and third-party scripts, can significantly impact performance. eBay’s journey showcases the importance of optimizing image delivery, leveraging techniques like lazy loading and responsive images to reduce page weight and improve load times.

By implementing lazy loading, eBay ensures that images are only loaded when they are likely to be viewed by the user, reducing initial page load time and conserving bandwidth. Additionally, by serving appropriately sized images based on the user’s device and screen size, eBay minimizes the transfer of unnecessary data and improves the overall loading performance. These resource management optimizations, combined with other techniques like caching and CDN utilization, enable eBay to deliver a fast and efficient experience to its global user base.
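Both techniques are standard HTML rather than anything eBay-specific: `srcset`/`sizes` let the browser pick an appropriately sized file, and `loading="lazy"` defers offscreen images. The helper below is an illustrative sketch; the breakpoints and query-string URL scheme are invented.

```javascript
// Build a responsive, lazily loaded image tag. srcset lists candidate
// widths; sizes tells the browser the rendered width; loading="lazy"
// defers fetching until the image nears the viewport.
function responsiveImageTag(basePath, alt, widths = [320, 640, 1280]) {
  const srcset = widths.map((w) => `${basePath}?w=${w} ${w}w`).join(', ');
  return (
    `<img src="${basePath}?w=${widths[0]}" ` +
    `srcset="${srcset}" ` +
    `sizes="(max-width: 640px) 100vw, 640px" ` +
    `loading="lazy" alt="${alt}">`
  );
}
```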

Continuous Performance Monitoring

Regularly monitoring and analyzing performance metrics is crucial for identifying bottlenecks and opportunities for optimization. The case study on Yahoo! Japan News demonstrates the impact of continuous performance monitoring, using tools like Lighthouse and real user monitoring to identify and address performance issues proactively.

By establishing a performance monitoring infrastructure, Yahoo! Japan News gains visibility into the real-world performance experienced by their users. This data-driven approach allows them to identify performance regressions, pinpoint specific areas for improvement, and measure the impact of their optimizations. Continuous monitoring also enables Yahoo! Japan News to set performance baselines, track progress over time, and ensure that performance remains a top priority as the application evolves.
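Real user monitoring of this kind usually reduces to collecting metric samples and checking a percentile against a budget. The sketch below uses the 75th percentile (the percentile Core Web Vitals reporting uses) and a 2.5-second LCP budget, matching the commonly cited “good” threshold; the function names are invented, not Yahoo! Japan News’s tooling.

```javascript
// RUM-style budget check: compute p75 of collected samples and flag a
// regression against a budget (values in milliseconds).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

function checkLcpBudget(lcpSamplesMs, budgetMs = 2500) {
  const p75 = percentile(lcpSamplesMs, 75);
  return { p75, withinBudget: p75 <= budgetMs };
}
```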

Embracing Accessibility and Inclusive Design

Creating inclusive web experiences that cater to diverse user needs is not only an ethical imperative but also a critical factor in scaling success. The case studies in Success at Scale emphasize the importance of accessibility and inclusive design.

Comprehensive Accessibility Testing

Ensuring accessibility requires a combination of automated testing tools and manual evaluation. LinkedIn’s approach to automated accessibility testing demonstrates the value of integrating accessibility checks into the development workflow, catching potential issues early, and reducing the reliance on manual testing alone.

By leveraging tools like Deque’s axe and integrating accessibility tests into their continuous integration pipeline, LinkedIn can identify and address accessibility issues before they reach production. This proactive approach to accessibility testing not only improves the overall accessibility of the platform but also reduces the cost and effort associated with retroactive fixes. However, LinkedIn also recognizes the importance of manual testing and user feedback in uncovering complex accessibility issues that automated tools may miss. By combining automated checks with manual evaluation, LinkedIn ensures a comprehensive approach to accessibility testing.
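A CI gate over axe results can be as simple as filtering violations by impact. axe-core really does report a `violations` array whose entries carry an `impact` of `minor`, `moderate`, `serious`, or `critical`; the threshold policy below is an assumption, not LinkedIn’s actual configuration.

```javascript
// Decide whether an accessibility scan should fail the build, given
// axe-core-shaped results. Only 'serious' and 'critical' violations
// block by default; lower-impact findings are reported elsewhere.
function shouldFailBuild(axeResults, blocking = ['serious', 'critical']) {
  const blockers = axeResults.violations.filter((violation) =>
    blocking.includes(violation.impact)
  );
  return { fail: blockers.length > 0, blockers: blockers.map((v) => v.id) };
}
```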

Inclusive Design Practices

Designing with accessibility in mind from the outset leads to more inclusive and usable products. Success at Scale’s case study on Intercom, about creating an accessible messenger, highlights the importance of considering diverse user needs, such as keyboard navigation and screen reader compatibility, throughout the design process.

By embracing inclusive design principles, Intercom ensures that their messenger is usable by a wide range of users, including those with visual, motor, or cognitive impairments. This involves considering factors such as color contrast, font legibility, focus management, and clear labeling of interactive elements. By designing with empathy and understanding the diverse needs of their users, Intercom creates a messenger experience that is intuitive, accessible, and inclusive. This approach not only benefits users with disabilities but also leads to a more user-friendly and resilient product overall.

User Research And Feedback

Engaging with users with disabilities and incorporating their feedback is essential for creating truly inclusive experiences. The Understood’s journey emphasizes the value of user research and collaboration with accessibility experts to identify and address accessibility barriers effectively.

By conducting usability studies with users who have diverse abilities and working closely with accessibility consultants, The Understood gains invaluable insights into the real-world challenges faced by their users. This user-centered approach allows them to identify pain points, gather feedback on proposed solutions, and iteratively improve the accessibility of their platform.

By involving users with disabilities throughout the design and development process, The Understood ensures that their products not only meet accessibility standards but also provide a meaningful and inclusive experience for all users.

Accessibility As A Shared Responsibility

Promoting accessibility as a shared responsibility across the organization fosters a culture of inclusivity. Shopify’s case study underscores the importance of educating and empowering teams to prioritize accessibility, recognizing it as a fundamental aspect of the user experience rather than a mere technical checkbox.

By providing accessibility training, guidelines, and resources to designers, developers, and content creators, Shopify ensures that accessibility is considered at every stage of the product development lifecycle. This shared responsibility approach helps to build accessibility into the core of Shopify’s products and fosters a culture of inclusivity and empathy. By making accessibility everyone’s responsibility, Shopify not only improves the usability of their platform but also sets an example for the wider industry on the importance of inclusive design.

Fostering A Culture of Collaboration And Knowledge Sharing

Scaling success requires a culture that promotes collaboration, knowledge sharing, and continuous learning. The case studies in Success at Scale highlight the impact of effective collaboration and knowledge management practices.

Cross-Functional Collaboration

Breaking down silos and fostering cross-functional collaboration accelerates problem-solving and innovation. Airbnb’s design system journey showcases the power of collaboration between design and engineering teams, leading to a cohesive and scalable design language across web and mobile platforms.

By establishing a shared language and a set of reusable components, Airbnb’s design system enables designers and developers to work together more efficiently. Regular collaboration sessions, such as design critiques and code reviews, help to align both teams and ensure that the design system evolves in a way that meets the needs of all stakeholders. This cross-functional approach not only improves the consistency and quality of the user experience but also accelerates the development process by reducing duplication of effort and promoting code reuse.
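The “shared language” typically takes the concrete form of design tokens that every component, on every platform, reads from. A toy sketch, with token names and values invented rather than taken from Airbnb’s system:

```javascript
// Design tokens: one source of truth for visual decisions, consumed by
// components instead of hard-coded values.
const tokens = {
  color: { primary: '#FF5A5F', textOnPrimary: '#FFFFFF' },
  spacing: { sm: 8, md: 16 },
};

// A component derives its style from tokens, so a token change
// propagates consistently across the whole system.
function buttonStyle(variant = 'primary') {
  return {
    background: tokens.color[variant],
    color: tokens.color.textOnPrimary,
    padding: `${tokens.spacing.sm}px ${tokens.spacing.md}px`,
  };
}
```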

Knowledge Sharing And Documentation

Capturing and sharing knowledge across the organization is crucial for maintaining consistency and enabling the efficient onboarding of new team members. Stripe’s investment in internal frameworks and documentation exemplifies the value of creating a shared understanding and facilitating knowledge transfer.

By maintaining comprehensive documentation, code examples, and best practices, Stripe ensures that developers can quickly grasp the intricacies of their internal tools and frameworks. This documentation-driven culture not only reduces the learning curve for new hires but also promotes consistency and adherence to established patterns and practices. Regular knowledge-sharing sessions, such as tech talks and lunch-and-learns, further reinforce this culture of learning and collaboration, enabling team members to learn from each other’s experiences and stay up-to-date with the latest developments.

Communities Of Practice

Establishing communities of practice around specific domains, such as accessibility or performance, promotes knowledge sharing and continuous improvement. Shopify’s accessibility guild demonstrates the impact of creating a dedicated space for experts and advocates to collaborate, share best practices, and drive accessibility initiatives forward.

By bringing together individuals passionate about accessibility from across the organization, Shopify’s accessibility guild fosters a sense of community and collective ownership. Regular meetings, workshops, and hackathons provide opportunities for members to share their knowledge, discuss challenges, and collaborate on solutions. This community-driven approach not only accelerates the adoption of accessibility best practices but also helps to build a culture of inclusivity and empathy throughout the organization.

Leveraging Open Source And External Expertise

Collaborating with the wider developer community and leveraging open-source solutions can accelerate development and provide valuable insights. Pinafore’s journey highlights the benefits of engaging with accessibility experts and incorporating their feedback to create a more inclusive and accessible web experience.

By actively seeking input from the accessibility community and leveraging open-source accessibility tools and libraries, Pinafore was able to identify and address accessibility issues more effectively. This collaborative approach not only improved the accessibility of the application but also contributed back to the wider community by sharing their learnings and experiences. By embracing open-source collaboration and learning from external experts, teams can accelerate their own accessibility efforts and contribute to the collective knowledge of the industry.

The Path To Sustainable Success

Achieving scalable success in the web development landscape requires a multifaceted approach that encompasses the right mindset, strategic decision-making, and continuous learning. The Success at Scale book provides a comprehensive exploration of these elements, offering deep insights and practical guidance for teams at all stages of their scaling journey.

By cultivating a user-centric, data-driven, and inclusive mindset, teams can prioritize the needs of their users and make informed decisions that drive meaningful results. Adopting a culture of continuous improvement and collaboration ensures that teams are always striving to optimize and refine their products, leveraging the collective knowledge and expertise of their members.

Making strategic technology choices, such as selecting performance-oriented frameworks and investing in developer experience, lays the foundation for scalable and maintainable architectures. Implementing performance optimization techniques, such as adaptive loading, efficient resource management, and continuous monitoring, helps teams deliver fast and responsive experiences to their users.

Embracing accessibility and inclusive design practices not only ensures that products are usable by a wide range of users but also fosters a culture of empathy and user-centricity. By incorporating accessibility testing, inclusive design principles, and user feedback into the development process, teams can create products that are both technically sound and meaningfully inclusive.

Fostering a culture of collaboration, knowledge sharing, and continuous learning is essential for scaling success. By breaking down silos, promoting cross-functional collaboration, and investing in documentation and communities of practice, teams can accelerate problem-solving, drive innovation, and build a shared understanding of their products and practices.

The case studies featured in Success at Scale serve as powerful examples of how these principles and strategies can be applied in real-world contexts. By learning from the successes and challenges of industry leaders, teams can gain valuable insights and inspiration for their own scaling journeys.

As you embark on your path to scaling success, remember that it is an ongoing process of iteration, learning, and adaptation. Embrace the mindsets and strategies outlined in this article, dive deeper into the learnings from the Success at Scale book, and continually refine your approach based on the unique needs of your users and the evolving landscape of web development.

Conclusion

Scaling successful web products requires a holistic approach that combines technical excellence, strategic decision-making, and a growth-oriented mindset. By learning from the experiences of industry leaders, as showcased in the Success at Scale book, teams can gain valuable insights and practical guidance on their journey towards sustainable success.

Cultivating a user-centric, data-driven, and inclusive mindset lays the foundation for scalability. By prioritizing the needs of users, making informed decisions based on data, and fostering a culture of continuous improvement and collaboration, teams can create products that deliver meaningful value and drive long-term growth.

Making strategic decisions around technology choices, performance optimization, accessibility integration, and developer experience investment sets the stage for scalable and maintainable architectures. By leveraging proven optimization techniques, embracing inclusive design practices, and investing in the tools and processes that empower developers, teams can build products that are fast and resilient.

Through ongoing collaboration, knowledge sharing, and a commitment to learning, teams can navigate the complexities of scaling success and create products that make a lasting impact in the digital landscape.

We’re Trying Out Something New

In an effort to conserve resources here at Smashing, we’re trying something new with Success at Scale. The printed book is 304 pages, and we make an expanded PDF version available to everyone who purchases a print book. This accomplishes a few good things:

  • We will use less paper and materials because we are making a smaller printed book;
  • We’ll use fewer resources in general to print, ship, and store the books, leading to a smaller carbon footprint; and
  • Keeping the book at a more manageable size means we can continue to offer free shipping on all Smashing orders!

Smashing Books have always been printed with materials from FSC Certified forests. We are committed to finding new ways to conserve resources while still bringing you the best possible reading experience.

Community Matters ❤️

Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! ;-)

More Smashing Books & Goodies

Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Heather and Steven are two of these people. Have you checked out their books already?

Understanding Privacy

Everything you need to know to put your users first and make a better web.

Get Print + eBook

Touch Design for Mobile Interfaces

Learn how touchscreen devices really work — and how people really use them.

Get Print + eBook

Interface Design Checklists

100 practical cards for common interface design challenges.

Get Print + eBook

]]>
hello@smashingmagazine.com (Addy Osmani)
<![CDATA[Ice Cream Ahead (June 2024 Wallpapers Edition)]]> https://smashingmagazine.com/2024/05/desktop-wallpaper-calendars-june-2024/ https://smashingmagazine.com/2024/05/desktop-wallpaper-calendars-june-2024/ Fri, 31 May 2024 16:00:00 GMT There’s an artist in everyone. Some bring their ideas to life with digital tools, others capture the perfect moment with a camera or love to grab pen and paper to create little doodles or pieces of lettering. And even if you think you’re far from being an artist, well, it might just be hidden deep inside of you. So why not explore it?

For more than 13 years already, our monthly wallpapers series has been the perfect opportunity to do just that: to break out of your daily routine and get fully immersed in a creative little project. This month was no exception, of course.

In this post, you’ll find beautiful, unique, and inspiring wallpapers designed by creative folks who took on the challenge. All of them are available in versions with and without a calendar for June 2024 and can be downloaded for free. As a little bonus goodie, we also added a selection of June favorites from our archives that are just too good to be forgotten. Thank you to everyone who shared their designs with us this month! Happy June!

  • You can click on every image to see a larger preview,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.

Kitten

Designed by Design Studio from India.

Celebrate National Tropic Day

“Today, let’s dive into the lush wonders of the tropics! From breathtaking landscapes to tantalizing flavors, National Tropic Day is a celebration of all things tropical. Join us in honoring the vibrant cultures and biodiversity that make these regions so unique. Get ready to embark on a tropical adventure!” — Designed by PopArt Studio from Serbia.

All-Seeing Eye

Designed by Ricardo Gimenes from Sweden.

Grand Canyon In June

“June arrives, and with it, summer. The endless afternoons, the heat… and all those moments to enjoy and rest, because, even if we are working, in summer everything is seen with different eyes. This year, we are going to the Grand Canyon of Colorado to admire the beauty of its landscapes and enjoy its sunsets.” — Designed by Veronica Valenzuela Jimenez from Spain.

Splash

Designed by Ricardo Gimenes from Sweden.

What Is Green

“When June arrives, it’s the best moment of the year for the photosynthesis (in the north). Most of the plants are in green colors. I like to play with colors and computers. This picture comes out of my imagination. It’s like looking with an electronic microscope at a strange green structure.” — Designed by Philippe Brouard from France.

Back In My Days

Designed by Ricardo Gimenes from Sweden.

Create Your Own Path

“Nice weather has arrived! Clean the dust off your bike and explore your hometown from a different angle! Invite a friend or loved one and share the joy of cycling. Whether you decide to go for a city ride or a ride in nature, the time spent on a bicycle will make you feel free and happy. So don’t wait, take your bike and call your loved one because happiness is greater only when it is shared. Happy World Bike Day!” — Designed by PopArt Studio from Serbia.

Summer Coziness

“I’ve waited for this summer more than I waited for any other summer since I was a kid. I dream of watermelon, strawberries, and lots of colors.” — Designed by Kate Jameson from the United States.

Summer Surf

“Summer vibes…” — Designed by Antun Hirsman from Croatia.

Strawberry Fields

Designed by Nathalie Ouederni from France.

Deep Dive

“Summer rains, sunny days, and a whole month to enjoy. Dive deep inside your passions and let them guide you.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Join The Wave

“The month of warmth and nice weather is finally here. We found inspiration in the World Oceans Day which occurs on June 8th and celebrates the wave of change worldwide. Join the wave and dive in!” — Designed by PopArt Studio from Serbia.

Summer Party

Designed by Ricardo Gimenes from Sweden.

Oh, The Places You Will Go!

“In celebration of high school and college graduates ready to make their way in the world!” — Designed by Bri Loesch from the United States.

Ice Creams Away!

“Summer is taking off with some magical ice cream hot air balloons.” — Designed by Sasha Endoh from Canada.

Merry-Go-Round

Designed by Xenia Latii from Germany.

Nine Lives

“I grew up with cats around (and drawing them all the time). They are so funny… one moment they are being funny, the next they are reserved. If you have place in your life for a pet, adopt one today!” — Designed by Karen Frolo from the United States.

Solstice Sunset

“June 21 marks the longest day of the year for the Northern Hemisphere — and sunsets like these will be getting earlier and earlier after that!” — Designed by James Mitchell from the United Kingdom.

Travel Time

“June is our favorite time of the year because the keenly anticipated sunny weather inspires us to travel. Stuck at the airport, waiting for our flight but still excited about wayfaring, we often start dreaming about the new places we are going to visit. Where will you travel to this summer? Wherever you go, we wish you a pleasant journey!” — Designed by PopArt Studio from Serbia.

Bauhaus

“I created a screenprint of one of the most famous buildings from the Bauhaus architect Mies van der Rohe for you. So, enjoy the Barcelona Pavillon for your June wallpaper.” — Designed by Anne Korfmacher from Germany.

Pineapple Summer Pop

“I love creating fun and feminine illustrations and designs. I was inspired by juicy tropical pineapples to celebrate the start of summer.” — Designed by Brooke Glaser from Honolulu, Hawaii.

Melting Away

Designed by Ricardo Gimenes from Sweden.

The Kids Looking Outside

“These are my cats looking out of the window. Because it is Children’s Day in June in a lot of countries, I chose to make a wallpaper with this photo of my cats. The cats are like kids, they always want to play and love to be outside! Also, most kids love cats!” — Designed by Kevin van Geloven from the Netherlands.

Expand Your Horizons

“It’s summer! Go out, explore, expand your horizons!” — Designed by Dorvan Davoudi from Canada.

Ice Cream June

“For me, June always marks the beginning of summer! The best way to celebrate summer is of course ice cream, what else?” — Designed by Tatiana Anagnostaki from Greece.

Sunset In Jamaica

“Photo from a recent trip to Jamaica edited to give a retro look and feel.” — Designed by Tommy Digiovanni from the United States.

Flamingood Vibes Only

“I love flamingos! They give me a happy feeling that I want to share with the world.” — Designed by Melissa Bogemans from Belgium.

Knitting For Summer

“I made multiple circles overlapping with close distances. The overall drawing looks like a clothing texture, for something you could wear in coming summer. Let’s have a nice summer.” — Designed by Philippe Brouard from France.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[In Praise Of The Basics]]> https://smashingmagazine.com/2024/05/in-praise-of-the-basics/ https://smashingmagazine.com/2024/05/in-praise-of-the-basics/ Thu, 30 May 2024 15:00:00 GMT Lately, I’ve been thinking about the basics of web development. Actually, I’ve been thinking about them for some time now, at least since I started teaching beginning web development in 2020.

I’m fascinated by the basics. They’re an unsung hero, really, as there is no developer worth their salt who would be where they are without them. Yet, they often go unnoticed.

The basics exist in some sort of tension between the utmost importance and the incredibly banal.

You might even think of them as the vegetable side on your dinner plate — wholesome but perhaps bland without the right seasoning.

Who needs the basics of HTML and CSS, some say, when we have tools that abstract the way they’re written and managed? We now have site builders that require no technical knowledge. We have frameworks with enough syntactic sugar to give your development chops a case of cavities. We have libraries packed with any number of pre-established patterns that can be copy-pasted without breaking a sweat. The need to “learn” the basics of HTML and CSS is effectively null when the number of tools that exist to supplant them is enough to fill a small galaxy of stars.

Rachel Andrew wrote one of my all-time favorite posts back in 2019, equating the rise of abstractions with an increase in complexity and a profound loss of inroads for others to enter the web development field:

“We have already lost many of the entry points that we had. We don’t have the forums of parents teaching each other HTML and CSS, in order to make a family album. Those people now use Facebook or perhaps run a blog on wordpress.com or SquareSpace with a standard template. We don’t have people customising their MySpace profile or learning HTML via Neopets. We don’t have the people, usually women, entering the industry because they needed to learn HTML during that period when an organisation’s website was deemed part of the duties of the administrator.”

— Rachel Andrew, “HTML, CSS and our vanishing industry entry points”

There’s no moment more profound in my web development career than the time I changed the background color of a page from default white to some color value I can’t remember (but know for a fact it would never be dodgerblue). That, and my personal “a-ha!” moment when realizing that everything in CSS is a box. Nothing guided me with the exception of “View Source,” and I’d bet the melting Chapstick in my pocket that you’re the same if you came up around the turn of the 21st century.

Where do you go to learn HTML and CSS these days? Even now, there are few dedicated secondary education programs (or scholarships, for that matter) to consider. We didn’t have bootcamps back in the day, but you don’t have to toss a virtual stone across many pixels to find one today.

There are excellent and/or free tutorials, too. Here, I’ll link a few of ’em up for you:

Let’s not even get into the number of YouTube tutorials. But if you do, no one beats Kevin’s incredible archive of recorded gems.

Anyway, my point is that there are more resources than ever for learning web development, but still painfully few entry points to get there. The resources we have for learning the basics are great, but many are either growing stale, are quick hits without a clear learning path, or assume the learner has at least some technical knowledge. I can tell you, as someone who has hit the Publish button on thousands of front-end tutorials, that the vast majority — if not all — of them are geared toward those who are already on the career path.

It was always a bit painful when someone would email CSS-Tricks asking where to get started learning CSS because, well, you’d imagine CSS-Tricks being the perfect home for something like that, and yet, there’s nothing. It’s just the reality, even if many of us (myself included) cut our chops with sites like CSS-Tricks, Smashing Magazine, and A List Apart. We were all learning together at that time, or so it seemed.

What we need are more pathways for deep learning.

Learning Experience Design (LXD) is a real thing that I’d position somewhere between what we know as UX Design and the practice of accessibility. There’s a focus on creating delightful experiences, sure, but the real aim of LXD is to establish learning paths that universally account for different types of learners (e.g., adults and children) and learning styles (e.g., visual and experiential). According to LXD, learners have a set of needs not totally unlike those that Maslow’s hierarchy of needs identifies for all humans, and there are different models for determining those needs, perhaps none more influential than Bloom’s Taxonomy.

These are things that many front-end tutorials, bootcamps, videos, and programs are not designed for. It’s not that the resources are bad (nay, most are excellent); it’s that they are serving different learners and learning types than what a day-one beginner needs. And let’s please not rely on AI to fill the gaps in human experiences!

Like I said, I’ve been thinking about this a lot. Like, a lot a lot. In fact, I recently published an online course purely dedicated to learning the basics of front-end development, creatively named TheBasics.dev. I’d like to think it’s not just another tutorial because it’s a complete set of lessons that includes reading, demonstrations, videos, lab exercises, and assessments, i.e., a myriad of ways to learn. I’d also like to think that this is more than just another bootcamp because it is a curriculum designed with the intention of developing new knowledge through reflective practices, peer learning, and feedback.

Anyway, I’m darn proud of The Basics, even if I’m not exactly the self-promoting type, and writing about it is outside of my comfort zone. If you’re reading this, it’s very likely that you, too, work on the front end. The Basics isn’t for you exactly, though I’d argue that brushing up on fundamentals is never a bad thing, regardless of your profession, but especially in front-end development, where standards are well-documented but ever-changing as well.

The Basics is more for your clients who do not know how to update the website they paid you to make. Or the friend who’s learning but still keeps bugging you with questions about the things they’re reading. Or your mom, who still has no idea what it is you do for a living. It’s for those whom the entry points are vanishing. It’s for those who could simply sign up for a Squarespace account but want to actually understand the code it spits out so they have more control to make a site that uniquely reflects them.

If you know a person like that, I would love it if you’d share The Basics with them.

Long live the basics! Long live the “a-ha!” moments that help us all fall in love with the World Wide Web.

]]>
hello@smashingmagazine.com (Geoff Graham)
<![CDATA[Decision Trees For UI Components]]> https://smashingmagazine.com/2024/05/decision-trees-ui-components/ https://smashingmagazine.com/2024/05/decision-trees-ui-components/ Wed, 29 May 2024 18:00:00 GMT How do you know what UI component to choose? Decision trees offer a systematic approach for design teams to document their design decisions. Once we’ve decided what UI components we use and when, we can avoid never-ending discussions, confusion, and misunderstanding.

Let’s explore a few examples of decision trees for UI components and how we can get the most out of them.

This article is part of our ongoing series on design patterns. It’s also an upcoming part of the 10h-video library on Smart Interface Design Patterns 🍣 and the upcoming live UX training as well. Use code BIRDIE to save 15% off.

B2B Navigation and Help Components: Doctolib

Doctolib Design System is a very impressive design system with decision trees, B2B navigation paths, photography, PIN input, UX writing, and SMS notifications — and thorough guides on how to choose UI components.

I love how practical these decision trees are. Each shows an example of what a component looks like, but I would also add references to real-life UI examples and flows of where and how these components are used. A fantastic starting point that documents design decisions better than any guide would.

Decision Trees For UI Components: Workday

The team behind Workday’s Canvas design system created a fantastic set of design decision trees for notifications, errors and alerts, loading patterns, calls to action, truncation, and overflow — with guidelines, examples, and use cases, which can now only be retrieved from the archive:

For each decision tree, the Workday team has put together a few context-related questions to consider first when making a decision before even jumping into the decision tree. Plus, there are thorough examples for each option available, as well as a very detailed alternative text for every image.

Form Components Decision Tree: Lyft

A choice of a form component can often be daunting. When should you use radio buttons, checkboxes, or dropdowns? Runi Goswami from Lyft has shared a detailed form components decision tree that helps their team choose between form controls.

We start by exploring whether a user can select more than one option in our UI. If it’s indeed multi-select, we use toggles for short options and checkboxes for longer ones.

If only one option can be selected, then we use tabs for filtering, radios for shorter options, a switch for immediately applicable options, and a checkbox if only one option can be selected. Dropdowns are used as a last resort.
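Purely as an illustration, the branches above can be sketched as a small function. The names and flags below are my paraphrase of Lyft’s tree, not an official API:

```typescript
type FormControl = "toggle" | "checkbox" | "tabs" | "radio" | "switch" | "dropdown";

interface Question {
  multiSelect: boolean;         // can the user pick more than one option?
  shortLabels: boolean;         // are the option labels short?
  usedForFiltering?: boolean;   // single-select: does the choice filter content?
  appliesImmediately?: boolean; // single-select: does the choice take effect right away?
  singleOption?: boolean;       // single-select: is there only one option to acknowledge?
}

function chooseFormControl(q: Question): FormControl {
  if (q.multiSelect) {
    // Multi-select: toggles for short options, checkboxes for longer ones.
    return q.shortLabels ? "toggle" : "checkbox";
  }
  if (q.usedForFiltering) return "tabs";
  if (q.appliesImmediately) return "switch";
  if (q.singleOption) return "checkbox";
  // Radios for shorter options; dropdowns only as a last resort.
  return q.shortLabels ? "radio" : "dropdown";
}
```

Codifying the tree like this is, of course, just a thought experiment — the value of the real artifact is that designers can see and discuss it, not execute it.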

Choosing Onboarding Components: NewsKit

Onboarding comes in various forms and shapes. Depending on how subtle or prominent we want to highlight a particular feature, we can use popovers, badges, hints, flags, toasts, feature cards, or design a better empty state. The Newskit team has put together an Onboarding Selection Prototype in Figma.

The choice depends on whether we want to interrupt the users to display details (usually isn’t very effective), show a feature subtly during the experience (more effective), or enable discovery by highlighting a feature within the context of a task a user tries to accomplish.

The toolkit asks a designer a couple of questions about the intent of onboarding, and then suggests options that are likely to perform best — a fantastic little helper for streamlined onboarding decisions.

Design System Process Flowcharts: Nucleus

How do you decide to add a new component to a design system or rather extend an existing one? What’s the process for contributions, maintenance, and the overall design process? Some design teams codify their design decisions as design system process flowcharts, as shown below.

And here are helpful decision trees for adding new components to a design system:

Make Decision Trees Visible

What I absolutely love about the decision tree approach is not only that it beautifully visualizes design decisions but that it also serves as a documentation. It establishes shared standards across teams and includes examples to follow, with incredible value for new hires.

Of course, exceptions happen. But once you have codified the ways of working for design teams as a decision tree and made it front and center of your design work, it resolves never-ending discussions about UI decisions for good.

So whenever a debate comes up, document your decisions in a decision tree. Turn them into posters. Place them in kitchen areas and developer’s and QA workspaces. Put them in design critique rooms. Make them visible where design work happens and where code is being written.

It’s worth mentioning that every project will need its own custom trees, so please see the examples above as an idea to build upon and customize away for your needs.

Meet Smart Interface Design Patterns

If you are interested in similar insights around UX, take a look at Smart Interface Design Patterns, our 10h-video course with 100s of practical examples from real-life projects — with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview.

Meet Smart Interface Design Patterns, our video course on interface design & UX.

100 design patterns & real-life examples.
10h-video course + live UX training. Free preview.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[The Era Of Platform Primitives Is Finally Here]]> https://smashingmagazine.com/2024/05/netlify-platform-primitives/ https://smashingmagazine.com/2024/05/netlify-platform-primitives/ Tue, 28 May 2024 12:00:00 GMT This article is a sponsored by Netlify

In the past, the web ecosystem moved at a very slow pace. Developers would go years without a new language feature or working around a weird browser quirk. This pushed our technical leaders to come up with creative solutions to circumvent the platform’s shortcomings. We invented bundling, polyfills, and transformation steps to make things work everywhere with less of a hassle.

Slowly, we moved towards some sort of consensus on what we need as an ecosystem. We now have TypeScript and Vite as clear preferences—pushing the needle of what it means to build consistent experiences for the web. Application frameworks have built whole ecosystems on top of them: SolidStart, Nuxt, Remix, and Analog are examples of incredible tools built with such primitives. We can say that Vite and TypeScript are tooling primitives that empower the creation of others in diverse ecosystems.

With bundling and transformation needs somewhat defined, it was only natural that framework authors would move their gaze to the next layer they needed to abstract: the server.

Server Primitives

The UnJS folks have been consistently building agnostic tooling that can be reused in different ecosystems. Thanks to them, we now have frameworks and libraries such as H3 (a minimal Node.js server framework built with TypeScript), which enables Nitro (a whole server runtime powered by Vite and H3), which in turn enabled Vinxi (an application bundler and server runtime that abstracts Nitro and Vite).

Nitro is already used by three major frameworks: Nuxt, Analog, and SolidStart, while Vinxi is also used by SolidStart. This means that any platform that supports one of these will be able to support the others with zero additional effort.

This is not about taking a bigger slice of the cake, but about making the cake bigger for everyone.

Frameworks, platforms, developers, and users benefit from it. We bet on our ecosystem together instead of working in silos with our monolithic solutions. Empowering our developer-users to gain transferable skills and truly choose the best tool for the job with less vendor lock-in than ever before.

Serverless Rejoins Conversation

Such initiatives have probably been noticed by serverless platforms like Netlify. With Platform Primitives, frameworks can leverage agnostic solutions for common necessities such as Incremental Static Regeneration (ISR), Image Optimization, and key/value (kv) storage.

As the name implies, Netlify Platform Primitives are a group of abstractions and helpers made available at a platform level for either frameworks or developers to leverage when building their applications. This brings additional functionality simultaneously to every framework. This is a big and powerful shift because, up until now, each framework would have to create its own solutions and backport such strategies to compatibility layers within each platform.

Moreover, developers would have to wait for a feature to first land on a framework and subsequently for support to arrive in their platform of choice. Now, as long as they’re using Netlify, those primitives are available directly without any effort and time put in by the framework authors. This empowers every ecosystem in a single measure.

Serverless means server infrastructure that developers don’t need to handle themselves. It’s not a misnomer, but a form of Infrastructure as a Service.

As mentioned before, Netlify Platform Primitives are three different features:

  1. Image CDN
    A content delivery network for images. It can handle format transformation and size optimization via URL query strings.
  2. Caching
    Basic primitives for their server runtime that help manage the caching directives for browser, server, and CDN runtimes smoothly.
  3. Blobs
    A key/value (KV) storage option is automatically available to your project through their SDK.

Let’s take a quick dive into each of these features and explore how they can increase our productivity with a serverless fullstack experience.

Image CDN

Every image in a /public folder can be served through a Netlify function. This means it’s possible to access it through a /.netlify/images path. So, without adding sharp or any other image optimization package to your stack, deploying to Netlify allows us to serve our users a better format without transforming assets at build time. In a SolidStart app, in a few lines of code, we could have an Image component that transforms other formats to .webp.

import { type JSX } from "solid-js";

const SITE_URL = "https://example.com";

interface Props extends JSX.ImgHTMLAttributes<HTMLImageElement> {
  format?: "webp" | "jpeg" | "png" | "avif" | "preserve";
  quality?: number | "preserve";
}

const getQuality = (quality: Props["quality"]) => {
  if (quality === "preserve") return "";
  return `&q=${quality || "75"}`;
};

function getFormat(format: Props["format"]) {
  switch (format) {
    case "preserve":
      return "";
    case "jpeg":
      return "&fm=jpeg";
    case "png":
      return "&fm=png";
    case "avif":
      return "&fm=avif";
    case "webp":
    default:
      return "&fm=webp";
  }
}

export function Image(props: Props) {
  return (
    <img
      {...props}
      src={`${SITE_URL}/.netlify/images?url=/${props.src}${getFormat(
        props.format
      )}${getQuality(props.quality)}`}
    />
  );
}

Notice the above component is even slightly more complex than the bare essentials because we’re enforcing some default optimizations. Our getFormat method transforms images to .webp by default. It’s a broadly supported format that’s significantly smaller than the most common formats without any loss in quality. Our getQuality function reduces the image quality to 75% by default; as a rule of thumb, there isn’t any perceivable loss in quality for large images while still providing a significant size optimization.
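As a sanity check, the URL the component produces can be rebuilt with a small standalone helper. This is a hypothetical helper mirroring the same defaults, not part of Netlify’s SDK:

```typescript
const SITE_URL = "https://example.com";

// Builds the /.netlify/images URL with the same defaults as the component:
// webp format and 75% quality.
function netlifyImageUrl(src: string, format = "webp", quality = 75): string {
  return `${SITE_URL}/.netlify/images?url=/${src}&fm=${format}&q=${quality}`;
}

// netlifyImageUrl("hero.png")
// → "https://example.com/.netlify/images?url=/hero.png&fm=webp&q=75"
```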

Caching

By default, Netlify caching is quite extensive for your regular artifacts: unless there’s a new deployment or the cache is flushed manually, resources will last for 365 days. However, because server and edge functions are dynamic in nature, there’s no default caching to prevent serving stale content to end users. This means that if you have one of these functions in production, chances are there’s some caching to be leveraged to reduce processing time (and expenses).

By adding a cache-control header, you have already done 80% of the work of optimizing your resources to best serve users. Some commonly used cache-control directives:

{
  "cache-control": "public, max-age=0, stale-while-revalidate=86400"
}
  • public: Store in a shared cache.
  • max-age=0: resource is immediately stale.
  • stale-while-revalidate=86400: if the cache is stale for less than 1 day, return the cached value and revalidate it in the background.
{
  "cache-control": "public, max-age=86400, must-revalidate"
}
  • public: Store in a shared cache.
  • max-age=86400: resource is fresh for one day.
  • must-revalidate: if a request arrives when the resource is already stale, the cache must be revalidated before a response is sent to the user.

Note: For more extensive information about possible compositions of Cache-Control directives, check the MDN entry on Cache-Control.
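As a quick sketch of what this looks like in practice, a function can attach the directive to its response headers. The handler below is illustrative and not tied to any specific Netlify function signature; Response is the standard Fetch API class available in modern runtimes:

```typescript
// Illustrative handler: the response may be stored by shared caches, is
// immediately stale, and may still be served for up to a day while the
// cache revalidates it in the background.
function handler(): Response {
  return new Response(JSON.stringify({ updatedAt: new Date().toISOString() }), {
    headers: {
      "content-type": "application/json",
      "cache-control": "public, max-age=0, stale-while-revalidate=86400",
    },
  });
}
```

This particular combination is a good default for dynamic content: users always get an instant (possibly stale) response, and the cache refreshes itself behind the scenes.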

The cache is a type of key/value storage. So, once our responses are set with proper cache control, platforms have some heuristics to define what the key will be for our resource within the cache storage. The Web Platform has a second very powerful header that can dictate how our cache behaves.

The Vary response header is composed of a list of request headers that affect the validity of the cached resource (the request method and URL are always part of the cache key; there’s no need to add them). It allows platforms to take other request characteristics, such as location and language, into account when deciding whether a cached response is still a valid match for an incoming request.

The Vary response header is the foundation for a special header in Netlify’s caching primitives. The Netlify-Vary header takes a set of instructions describing which parts of the request the cache key should be based on. It is possible to tune a response’s cache key not only by the presence of a header but also by the value of that header.

  • query: vary by the value of some or all request query parameters.
  • header: vary by the value of one or more request headers.
  • language: vary by the languages from the Accept-Language header.
  • country: vary by the country inferred from a GeoIP lookup on the request IP address.
  • cookie: vary by the value of one or more request cookie keys.

This header offers fine-grained control over how your resources are cached, allowing for some creative strategies to optimize how your app performs for specific users.

Blob Storage

This is a highly available key/value store that is ideal for frequent reads and infrequent writes. It is automatically available and provisioned for any Netlify project.

It’s possible to write to a blob from your runtime or push data to a deployment-specific store. For example, this is how an action function could register a post’s number of votes in a store with SolidStart.

import { getStore } from "@netlify/blobs";
import { action } from "@solidjs/router";

export const upVote = action(async (formData: FormData) => {
  "use server";

  const postId = formData.get("id");
  const postVotes = formData.get("votes");

  if (typeof postId !== "string" || typeof postVotes !== "string") return;

  const store = getStore("posts");
  const voteSum = Number(postVotes) + 1;

  await store.set(postId, String(voteSum));

  console.log("done");
  return voteSum;
});

Final Thoughts

With high-quality primitives, we can enable library and framework creators to create thin integration layers and adapters. This way, instead of focusing on how any specific platform operates, it will be possible to focus on the actual user experience and practical use-cases for such features. Monoliths and deeply integrated tooling make sense to build platforms fast with strong vendor lock-in, but that’s not what the community needs. Betting on the web platform is a more sensible and future-friendly way.

Let me know in the comments what your take is about unbiased tooling versus opinionated setups!

]]>
hello@smashingmagazine.com (Atila Fassina)
<![CDATA[Switching It Up With HTML’s Latest Control]]> https://smashingmagazine.com/2024/05/switching-it-up-html-latest-control/ https://smashingmagazine.com/2024/05/switching-it-up-html-latest-control/ Fri, 24 May 2024 12:00:00 GMT The web is no stranger to taking HTML elements and transforming them to look, act, and feel like something completely different. A common example of this is the switch, or toggle, component. We would hide a checkbox beneath several layers of styles, define the ARIA role as “switch,” and then ship. However, this approach posed certain usability issues around indeterminate states and always felt rather icky. After all, as the saying goes, the best ARIA is no ARIA.

Well, there is new hope for a native HTML switch to catch on.

Safari Technology Preview (TP) 185 and Safari 17.4 released with an under-the-radar feature, a native HTML switch control. It evolves from the hidden-checkbox approach and aims to make the accessibility and usability of the control more consistent.

<!-- This will render a native checkbox -->
<input type="checkbox" />

<!-- Add the switch attribute to render a switch element -->
<input type="checkbox" switch />
<input type="checkbox" checked switch />

Communication is one aspect of the control’s accessibility. Earlier in 2024, there were issues where the switch would not adjust to page zoom levels properly, leading to poor or broken visibility of the control. However, at the time I am writing this, Safari looks to have resolved these issues. Zooming retains the visual cohesion of the switch.

The switch attribute seems to take accessibility needs into consideration. However, this doesn’t prevent us from using it in inaccessible and unusable ways. As mentioned, mixing the required and indeterminate properties between switches and checkboxes can cause unexpected behavior for people trying to navigate the controls. Once again, Adrian sums things up nicely:

“The switch role does not allow mixed states. Ensure your switch never gets set to a mixed state; otherwise, well, problems.”

— Adrian Roselli

Internationalization (I18N): Which Way Is On?

Beyond the accessibility of the switch control, what happens when the switch interacts with different writing modes?

When creating the switch, we had to ensure the use of logical CSS to support different writing modes and directions. This is because a switch being in its right-most position (or inline ending edge) doesn’t mean “on” in some environments. In some languages — e.g., those that are written right-to-left — the left-most position (or inline starting edge) on the switch would likely imply the “on” state.

While we should be writing logical CSS by default now, the new switch control removes that need. This is because the control will adapt to its nearest writing-mode and direction properties. This means that in left-to-right environments, the switch’s right-most position will be its “on” state, and in right-to-left environments, its left-most position will be the “on” state.

See the Pen Safari Switch Control - Styling [forked] by @DanielYuschick.

Final Thoughts

As we continue to push the web forward, it’s natural for our tools to evolve alongside us. The switch control is a welcome addition to HTML for eliminating the checkbox hacks we’ve been resorting to for years.

That said, combining the checkbox and switch into a single input, while being convenient, does raise some concerns about potential markup combinations. Despite this, I believe this can ultimately be resolved with linters or by the browsers themselves under the hood.

Ultimately, having a native approach to switch components can make the accessibility and usability of the control far more consistent — assuming it’s ever supported and adopted for widespread use.

]]>
hello@smashingmagazine.com (Daniel Yuschick)
<![CDATA[Best Practices For Naming Design Tokens, Components And Variables]]> https://smashingmagazine.com/2024/05/naming-best-practices/ https://smashingmagazine.com/2024/05/naming-best-practices/ Thu, 23 May 2024 09:00:00 GMT Naming is hard. As designers and developers, we often struggle finding the right name — for a design token, colors, UI components, HTML classes, and variables. Sometimes, the names we choose are too generic, so it’s difficult to understand what exactly is meant. And sometimes they are too specific, leaving little room for flexibility and re-use.

In this post, we want to get to the bottom of this and explore how we can make naming more straightforward. How do we choose the right names? And which naming conventions work best? Let’s take a closer look.

Inspiration For Naming

If you’re looking for some inspiration for naming HTML classes, CSS properties, or JavaScript functions, Classnames is a wonderful resource jam-packed with ideas that get you thinking outside the box.

The site provides thematically grouped lists of words perfect for naming. You’ll find terms to describe different kinds of behavior, likeness between things, order, grouping, and association, but also themed collections of words that wouldn’t instantly come to one’s mind when it comes to code, among them words from nature, art, theater, music, architecture, fashion, and publishing.

Naming Conventions

What makes a good name? Javier Cuello summarized a set of naming best practices that help you name your layers, groups and components in a consistent and scalable way.

As Javier points out, a good name has a logical structure, is short, meaningful, and known by everyone, and not related to visual properties. He shares do’s and don’ts to illustrate how to achieve that and also takes a closer look at all the fine little details you need to consider when naming sizes, colors, groups, layers, and components.

Design Tokens Naming Playbook

How do you name and manage design tokens? To enhance your design tokens naming skills, Romina Kavcic created an interactive Design Tokens Naming Playbook. It covers everything from different approaches to naming structure to creating searchable databases, running naming workshops, and automation.

The playbook also features a naming playground where you can play with names simply by dragging and dropping. For more visual examples, also be sure to check out the Figma template. It includes components for all categories, allowing you to experiment with different naming structures.

Flexible Design Token Taxonomy

How to build a flexible design token taxonomy that works across different products? That was the challenge that the team at Intuit faced. The parent company of products such as Mailchimp, Quickbooks, TurboTax, and Mint developed a flexible token system that goes beyond the brand theme to serve as the foundational system for a wide array of products.

Nate Baldwin wrote a case study in which he shares valuable insights into the making of Intuit’s design token taxonomy. It dives deeper into the pain points of the old taxonomy system, the criteria they defined for the new system, and how it was created. Lots of takeaways for building your own robust and flexible token taxonomy are guaranteed.

Naming Colors

When you’re creating a color system, you need names for all its facets and uses. Names that everyone on the team can make sense of. But how to achieve that? How do you bring logic to a subjective topic like color? Jess Satell, Staff Content Designer for Adobe’s Spectrum Design System, shares how they master the challenge.

As Jess explains, the Spectrum color nomenclature uses a combination of color family classifier (e.g., blue or gray) paired with an incremental brightness value scale (50–900) to name colors in a way that is not only logical for everyone involved but also scalable and flexible as the system grows.

Another handy little helper when it comes to naming colors is Color Parrot. The Twitter bot is capable of naming and identifying the colors in any given image. Just mention the bot in a reply, and it will respond with a color palette.

Common Names For UI Components

Looking at what other people call similar things is a great way to start when you’re struggling with naming. And what better source could there be than other design systems? Before you end up in the design system rabbit hole, Iain Bean did the research for you and created the Component Gallery.

The Component Gallery is a collection of interface components from real-world design systems. It includes plenty of examples for more than 50 UI components — from accordion to visually hidden — and also lists other names that the UI components go by. A fantastic resource — not only with regards to naming.

Variables Taxonomy Map

A wonderful example of naming guidelines for a complex multi-brand, multi-themed design system comes from the Vodafone UK Design System team. Their Variables Taxonomy Map breaks down the anatomy and categorization of a design token into a well-orchestrated system of collections.

The map illustrates four collections required to support the system and connections between tokens — from brand and primitives to semantics and pages. It builds on top of Nathan Curtis’ work on naming design tokens and enables everyone to gather insight about where a token is used and what it represents, just from its name.

If you want to explore more approaches to naming design tokens, Vitaly compiled a list of useful Figma kits and resources that are worth checking out.

Design Token Names Inventory

Romina Kavcic created a handy little resource that is bound to give your design token naming workflow a power boost. The Design Token Names Inventory spreadsheet not only makes it easy to ensure consistent naming but also syncs directly to Figma.

The spreadsheet has a simple structure with four levels to give you a bird’s-eye view of all your design tokens. You can easily add rows, themes, and modes without losing track and filter your tokens. And while the spreadsheet itself is already a great solution to keep everyone involved on the same page, it plays out its real strength in combination with the Google Spreadsheets plugin or the Kernel plugin. Once installed, the changes you make in the spreadsheet are reflected in Figma. A real timesaver!

Want To Dive Deeper?

We hope these resources will come in handy as you tackle naming. If you’d like to dive deeper into design tokens, components, and design systems, we have a few friendly online workshops and SmashingConfs coming up:

We’d be absolutely delighted to welcome you to one of our special Smashing experiences — be it online or in person!

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[Modern CSS Layouts: You Might Not Need A Framework For That]]> https://smashingmagazine.com/2024/05/modern-css-layouts-no-framework-needed/ https://smashingmagazine.com/2024/05/modern-css-layouts-no-framework-needed/ Wed, 22 May 2024 15:00:00 GMT Establishing layouts in CSS is something that we, as developers, often delegate to whatever framework we’re most comfortable using. And even though it’s possible to configure a framework to get just what we need out of it, how often have you integrated an entire CSS library simply for its layout features? I’m sure many of us have done it at some point, dating back to the days of 960.gs, Bootstrap, Susy, and Foundation.

Modern CSS features have significantly cut the need to reach for a framework simply for its layout. Yet, I continue to see it happen. Or, I empathize with many of my colleagues who find themselves re-creating the same Grid or Flexbox layout over and over again.

In this article, we will gain greater control over web layouts. Specifically, we will create four CSS classes that you will be able to take and use immediately on just about any project or place where you need a particular layout that can be configured to your needs.

While the concepts we cover are key, the real thing I want you to take away from this is the confidence to use CSS for those things we tend to avoid doing ourselves. Layouts used to be a challenge on the same level of styling form controls. Certain creative layouts may still be difficult to pull off, but the way CSS is designed today solves the burdens of the established layout patterns we’ve been outsourcing and re-creating for many years.

What We’re Making

We’re going to establish four CSS classes, each with a different layout approach. The idea is that if you need, say, a fluid layout based on Flexbox, you have it ready. The same goes for the three other classes we’re making.

And what exactly are these classes? Two of them are Flexbox layouts, and the other two are Grid layouts, each for a specific purpose. We’ll even extend the Grid layouts to leverage CSS Subgrid for when that’s needed.

Within those two groups of Flexbox and Grid layouts are two utility classes: one that auto-fills the available space — we’re calling these “fluid” layouts — and another where we have greater control over the columns and rows — we’re calling these “repeating” layouts.

Finally, we’ll integrate CSS Container Queries so that these layouts respond to their own size for responsive behavior rather than the size of the viewport. Where we’ll start, though, is organizing our work into Cascade Layers, which further allow you to control the level of specificity and prevent style conflicts with your own CSS.

Setup: Cascade Layers & CSS Variables

A technique that I’ve used a few times is to define Cascade Layers at the start of a stylesheet. I like this idea not only because it keeps styles neat and organized but also because we can influence the specificity of the styles in each layer by organizing the layers in a specific order. All of this makes the utility classes we’re making easier to maintain and integrate into your own work without running into specificity battles.

I think the following three layers are enough for this work:

@layer reset, theme, layout;

Notice the order because it really, really matters. The reset layer comes first, making it the least specific layer of the bunch. The layout layer comes in at the end, making it the most specific set of styles, giving them higher priority than the styles in the other two layers. If we add an unlayered style, that one would be added last and thus have the highest specificity.

Related: “Getting Started With Cascade Layers” by Stephanie Eckles.

Let’s briefly cover how we’ll use each layer in our work.

Reset Layer

The reset layer will contain styles for any user agent styles we want to “reset”. You can add your own resets here, or if you already have a reset in your project, you can safely move on without this particular layer. However, do remember that un-layered styles will be read last, so wrap them in this layer if needed.

I’m just going to drop in the popular box-sizing declaration that ensures all elements are sized consistently by the border-box in accordance with the CSS Box Model.

@layer reset {
  *,
  *::before,
  *::after {
    box-sizing: border-box;
  }

  body {
    margin: 0;
  }
}

Theme Layer

This layer provides variables scoped to the :root element. I like the idea of scoping variables this high up the chain because layout containers — like the utility classes we’re creating — are often wrappers around lots of other elements, and a global scope ensures that the variables are available anywhere we need them. That said, it is possible to scope these locally to another element if you need to.

Now, whatever makes for “good” default values for the variables will absolutely depend on the project. I’m going to set these with particular values, but do not assume for a moment that you have to stick with them — this is very much a configurable system that you can adapt to your needs.

Here are the only three variables we need for all four layouts:

@layer theme {
  :root {
    --layout-fluid-min: 35ch;
    --layout-default-repeat: 3;
    --layout-default-gap: 3vmax;
  }
}

In order, these map to the following:

Notice: The variables are prefixed with layout-, which I’m using as an identifier for layout-specific values. This is my personal preference for structuring this work, but please choose a naming convention that fits your mental model — naming things can be hard!

Layout Layer

This layer will hold our utility class rulesets, which is where all the magic happens. For the grid, we will include a fifth class specifically for using CSS Subgrid within a grid container for those possible use cases.

@layer layout {  
  .repeating-grid {}
  .repeating-flex {}
  .fluid-grid {}
  .fluid-flex {}

  .subgrid-rows {}
}

Now that all our layers are organized, variables are set, and rulesets are defined, we can begin working on the layouts themselves. We will start with the “repeating” layouts, one based on CSS Grid and the other using Flexbox.

Repeating Grid And Flex Layouts

I think it’s a good idea to start with the “simplest” layout and scale up the complexity from there. So, we’ll tackle the “Repeating Grid” layout first as an introduction to the overarching technique we will be using for the other layouts.

Repeating Grid

If we head into the @layout layer, that’s where we’ll find the .repeating-grid ruleset, where we’ll write the styles for this specific layout. Essentially, we are setting this up as a grid container and applying the variables we created to it to establish layout columns and spacing between them.

.repeating-grid {
  display: grid;
  grid-template-columns: repeat(var(--layout-default-repeat), 1fr);
  gap: var(--layout-default-gap);
}

It’s not too complicated so far, right? We now have a grid container with three equally sized columns that take up one fraction (1fr) of the available space with a gap between them.

This is all fine and dandy, but we do want to take this a step further and turn this into a system where you can configure the number of columns and the size of the gap. I’m going to introduce two new variables scoped to this grid:

  • --_grid-repeat: The number of grid columns.
  • --_repeating-grid-gap: The amount of space between grid items.

Did you notice that I’ve prefixed these variables with an underscore? This was actually a JavaScript convention to specify variables that are “private” — or locally-scoped — before we had const and let to help with that. Feel free to rename these however you see fit, but I wanted to note that up-front in case you’re wondering why the underscore is there.

.repeating-grid {
  --_grid-repeat: var(--grid-repeat, var(--layout-default-repeat));
  --_repeating-grid-gap: var(--grid-gap, var(--layout-default-gap));

  display: grid;
  grid-template-columns: repeat(var(--layout-default-repeat), 1fr);
  gap: var(--layout-default-gap);
}

Notice: These variables are set to the variables in the @theme layer. I like the idea of assigning a global variable to a locally-scoped variable. This way, we get to leverage the default values we set in @theme but can easily override them without interfering anywhere else the global variables are used.

Now let’s put those variables to use on the style rules from before in the same .repeating-grid ruleset:

.repeating-grid {
  --_grid-repeat: var(--grid-repeat, var(--layout-default-repeat));
  --_repeating-grid-gap: var(--grid-gap, var(--layout-default-gap));

  display: grid;
  grid-template-columns: repeat(var(--_grid-repeat), 1fr);
  gap: var(--_repeating-grid-gap);
}

What happens from here when we apply the .repeating-grid to an element in HTML? Let’s imagine that we are working with the following simplified markup:

<section class="repeating-grid">
  <div></div>
  <div></div>
  <div></div>
</section>

If we were to apply a background-color and height to those divs, we would get a nice set of boxes that are placed into three equally-sized columns, where any divs that do not fit on the first row automatically wrap to the next row.

Repeating Flex

Time to put the process we established with the Repeating Grid layout to use in this Repeating Flex layout. This time, we jump straight to defining the private variables on the .repeating-flex ruleset in the @layout layer since we already know what we’re doing.

.repeating-flex {
  --_flex-repeat: var(--flex-repeat, var(--layout-default-repeat));
  --_repeating-flex-gap: var(--flex-gap, var(--layout-default-gap));
}

Again, we have two locally-scoped variables used to override the default values assigned to the globally-scoped variables. Now, we apply them to the style declarations.

.repeating-flex {
  --_flex-repeat: var(--flex-repeat, var(--layout-default-repeat));
  --_repeating-flex-gap: var(--flex-gap, var(--layout-default-gap));

  display: flex;
  flex-wrap: wrap;
  gap: var(--_repeating-flex-gap);
}

We’re only using one of the variables to set the gap size between flex items at the moment, but that will change in a bit. For now, the important thing to note is that we are using the flex-wrap property to tell Flexbox that it’s OK to let additional items in the layout wrap into multiple rows rather than trying to pack everything in a single row.

But once we do that, we also have to configure how the flex items shrink or expand based on whatever amount of available space is remaining. Let’s nest those styles inside the parent ruleset:

.repeating-flex {
  --_flex-repeat: var(--flex-repeat, var(--layout-default-repeat));
  --_repeating-flex-gap: var(--flex-gap, var(--layout-default-gap));

  display: flex;
  flex-wrap: wrap;
  gap: var(--_repeating-flex-gap);

  > * {
    flex: 1 1 calc((100% / var(--_flex-repeat)) - var(--_gap-repeater-calc));
  }
}

If you’re wondering why I’m using the universal selector (*), it’s because we can’t assume that the layout items will always be divs. Perhaps they are <article> elements, <section>s, or something else entirely. The child combinator (>) ensures that we’re only selecting elements that are direct children of the utility class to prevent leakage into other ancestor styles.

The flex shorthand property is one of those that’s been around for many years now but still seems to mystify many of us. Before we unpack it, did you also notice that we have a new locally-scoped --_gap-repeater-calc variable that needs to be defined? Let’s do this:

.repeating-flex {
  --_flex-repeat: var(--flex-repeat, var(--layout-default-repeat));
  --_repeating-flex-gap: var(--flex-gap, var(--layout-default-gap));

  /* New variables */
  --_gap-count: calc(var(--_flex-repeat) - 1);
  --_gap-repeater-calc: calc(
    var(--_repeating-flex-gap) / var(--_flex-repeat) * var(--_gap-count)
  );

  display: flex;
  flex-wrap: wrap;
  gap: var(--_repeating-flex-gap);

  > * {
    flex: 1 1 calc((100% / var(--_flex-repeat)) - var(--_gap-repeater-calc));
  }
}

Whoa, we actually created a second variable, --_gap-count, that --_gap-repeater-calc uses to properly calculate the third flex value, which corresponds to the flex-basis property, i.e., the “ideal” size we want the flex items to be.

If we take out the variable abstractions from our code above, then this is what we’re looking at:

.repeating-flex {
  display: flex;
  flex-wrap: wrap;
  gap: 3vmax;

  > * {
    flex: 1 1 calc((100% / 3) - calc(3vmax / 3 * 2));
  }
}

Hopefully, this will help you see what sort of math the browser has to do to size the flexible items in the layout. Of course, those values change if the variables’ values change. But, in short, elements that are direct children of the .repeating-flex utility class are allowed to grow (flex-grow: 1) and shrink (flex-shrink: 1) based on the amount of available space while we inform the browser that the initial size (i.e., flex-basis) of each flex item is equal to some calc()-ulated value.

Because we had to introduce a couple of new variables to get here, I’d like to at least explain what they do:

  • --_gap-count: This stores the number of gaps between layout items by subtracting 1 from --_flex-repeat. There’s one less gap in the number of items because there’s no gap before the first item or after the last item.
  • --_gap-repeater-calc: This calculates the total gap size based on the individual item’s gap size and the total number of gaps between items.

From there, we calculate the total gap size more efficiently with the following formula:

calc(var(--_repeating-flex-gap) / var(--_flex-repeat) * var(--_gap-count))

Let’s break that down further because it’s an inception of variables referencing other variables. In this example, we already provided our repeat-counting private variable, which falls back to the default repeater by setting the --layout-default-repeat variable.

This sets a gap, but we’re not done yet because, with flexible containers, we need to define the flex behavior of the container’s direct children so that they grow (flex-grow: 1), shrink (flex-shrink: 1), and start from a flex-basis equal to their share of the container width (100% divided by the repeat count) minus their share of the total gap space.

Next, we divide the individual gap size (--_repeating-flex-gap) by the number of repetitions (--_flex-repeat) to distribute the total gap space equally across the items. Then, we multiply that value by the number of gaps, which the --_gap-count variable holds as the repeat count minus one.
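The arithmetic is easy to sanity-check with concrete numbers. This small sketch reproduces the --_gap-repeater-calc formula using a hypothetical 24px gap and three columns:

```typescript
// Reproduces the --_gap-repeater-calc formula with concrete numbers:
// gap / repeat * (repeat - 1)
function gapRepeaterCalc(gapPx: number, repeat: number): number {
  // One fewer gap than items: no gap before the first or after the last item.
  const gapCount = repeat - 1;
  return (gapPx / repeat) * gapCount;
}

// With 3 columns and a 24px gap, each item gives up 16px of width,
// so the three items together absorb the 2 × 24px = 48px of gaps.
console.log(gapRepeaterCalc(24, 3)); // 16
```

In other words, each item’s flex-basis shrinks just enough that the columns plus their gaps add up to exactly 100% of the container.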

And that concludes our repeating grids! Pretty fun, or at least interesting, right? I like a bit of math.

Before we move to the final two layout utility classes we’re making, you might be wondering why we want so many abstractions of the same variable, as we start with one globally-scoped variable referenced by a locally-scoped variable which, in turn, can be referenced and overridden again by yet another variable that is locally scoped to another ruleset. We could simply work with the global variable the whole time, but I’ve taken us through the extra steps of abstraction.

I like it this way because of the following:

  1. I can peek at the HTML and instantly see which layout approach is in use: .repeating-grid or .repeating-flex.
  2. It maintains a certain separation of concerns that keeps styles in order without running into specificity conflicts.

See how clear and understandable the markup is:

<section class="repeating-flex footer-usps">
  <div></div>
  <div></div>
  <div></div>
</section>

The corresponding CSS is likely to be a slim ruleset for the semantic .footer-usps class that simply updates variable values:

.footer-usps {
  --flex-repeat: 3;
  --flex-gap: 2rem;
}

This gives me all of the context I need: the type of layout, what it is used for, and where to find the variables. I think that’s handy, but you certainly could get by without the added abstractions if you’re looking to streamline things a bit.

Fluid Grid And Flex Layouts

All the repeating we’ve done until now is fun, and we can manipulate the number of repeats with container queries and media queries. But rather than repeating columns manually, let’s make the browser do the work for us with fluid layouts that automatically fill whatever empty space is available in the layout container. We may sacrifice a small amount of control with these two utilities, but we get to leverage the browser’s ability to “intelligently” place layout items with a few CSS hints.

Fluid Grid

Once again, we’re starting with the variables and working our way to the calculations and style rules. Specifically, we’re defining a variable called --_fluid-grid-min that manages a column’s minimum width.

Let’s take a rather trivial example and say we want a grid column that’s at least 400px wide with a 20px gap. In this situation, we’re essentially working with a two-column grid once the container is at least 820px wide (two 400px columns plus one 20px gap). If the container is narrower than 820px, the single column stretches to the container’s full width.

If we want to go for a three-column grid instead, the container should be about 1240px wide. It’s all about balancing the minimum column size against the gap.
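These breakpoints follow from a simple inequality: n columns fit once n * min-width + (n - 1) * gap is no wider than the container. A quick sketch with the pixel values from the example above (columnsThatFit is our own illustrative helper, not a CSS API):

```typescript
// How many fluid columns fit in a container, given a minimum column
// width and a gap? n columns need n*min + (n-1)*gap of horizontal space.
function columnsThatFit(containerPx: number, minPx: number, gapPx: number): number {
  let n = 1; // there is always at least one column
  while ((n + 1) * minPx + n * gapPx <= containerPx) n++;
  return n;
}

console.log(columnsThatFit(819, 400, 20));  // 1
console.log(columnsThatFit(820, 400, 20));  // 2
console.log(columnsThatFit(1240, 400, 20)); // 3
```

This is exactly the calculation the browser performs for us when we hand repeat() the auto-fit keyword in the next section.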

.fluid-grid {
  --_fluid-grid-min: var(--fluid-grid-min, var(--layout-fluid-min));
  --_fluid-grid-gap: var(--grid-gap, var(--layout-default-gap));
}

That establishes the variables we need to calculate and set styles on the .fluid-grid layout. This is the full code we are unpacking:

 .fluid-grid {
  --_fluid-grid-min: var(--fluid-grid-min, var(--layout-fluid-min));
  --_fluid-grid-gap: var(--grid-gap, var(--layout-default-gap));

  display: grid;
  grid-template-columns: repeat(
    auto-fit,
    minmax(min(var(--_fluid-grid-min), 100%), 1fr)
  );
  gap: var(--_fluid-grid-gap);
}

The display is set to grid, and the gap between items is based on the --fluid-grid-gap variable. The magic is taking place in the grid-template-columns declaration.

This grid uses the repeat() function just as the .repeating-grid utility does. By declaring auto-fit in the function, the browser automatically packs in as many columns as it possibly can in the amount of available space in the layout container. Any columns that can’t fit on a line simply wrap to the next line and occupy the full space that is available there.

Then there’s the minmax() function for setting the minimum and maximum width of the columns. What’s special here is that we’re nesting yet another function, min(), within minmax() (which, remember, is nested in the repeat() function). This is a bit of extra logic that sets each column’s minimum width to the smaller of --_fluid-grid-min and 100%, where 100% acts as a safety net for cases when --_fluid-grid-min is wider than the container itself. In other words, a column can never be wider than the grid container, which prevents horizontal overflow on narrow screens.

The “max” half of minmax() is set to 1fr to ensure that each column grows proportionally and maintains equally sized columns.

See the Pen Fluid grid [forked] by utilitybend.

That’s it for the Fluid Grid layout! That said, please do take note that this is a strong grid, particularly when it is combined with relative units, e.g., ch, as it produces a grid that scales from a single column to multiple columns based purely on the size of the content.

Fluid Flex

We pretty much get to reuse all of the code we wrote for the Repeating Flex layout for the Fluid Flex layout, except that we set the flex-basis of each column to its minimum size rather than deriving it from the number of columns.

.fluid-flex {
  --_fluid-flex-min: var(--fluid-flex-min, var(--layout-fluid-min));
  --_fluid-flex-gap: var(--flex-gap, var(--layout-default-gap));

  display: flex;
  flex-wrap: wrap;
  gap: var(--_fluid-flex-gap);

  > * {
    flex: 1 1 var(--_fluid-flex-min);
  }
}
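Here’s a hypothetical usage sketch. The inline --fluid-flex-min override and the child markup are made up for illustration, but the class and variable come straight from the utility above:

```html
<!-- Each item's flex-basis starts at 25ch (overridden inline here),
     grows to fill leftover space, and wraps when it can't fit. -->
<ul class="fluid-flex" style="--fluid-flex-min: 25ch;">
  <li>Card one</li>
  <li>Card two</li>
  <li>Card three</li>
</ul>
```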

That completes the fourth and final layout utility — but there’s one bonus class we can create to use together with the Repeating Grid and Fluid Grid utilities for even more control over each layout.

Optional: Subgrid Utility

Subgrid is handy because it turns any grid item into a grid container of its own that shares the parent container’s track sizing to keep the two containers aligned without having to redefine tracks by hand. It’s got full browser support and makes our layout system just that much more robust. That’s why we can set it up as a utility to use with the Repeating Grid and Fluid Grid layouts if we need any of the layout items to be grid containers for laying out any child elements they contain.

Here we go:

.subgrid-rows {
  > * {
    display: grid;
    gap: var(--subgrid-gap, 0);
    grid-row: auto / span var(--subgrid-rows, 4);
    grid-template-rows: subgrid;
  }
}

We have two new variables, of course:

  • --subgrid-gap: The vertical gap between grid items.
  • --subgrid-rows: The number of grid rows, which defaults to 4.

We have a bit of a challenge: How do we control the subgrid items in the rows? I see two possible methods.

Method 1: Inline Styles

We already have a variable that can technically be used directly in the HTML as an inline style:

<section class="fluid-grid subgrid-rows" style="--subgrid-rows: 4;">
  <!-- items -->
</section>

This works like a charm since the variable tells the subgrid how many rows to span.

Method 2: Using The :has() Pseudo-Class

This approach leads to verbose CSS, but sacrificing brevity allows us to automate the layout so it handles practically anything we throw at it without having to update an inline style in the markup.

Check this out:

.subgrid-rows {
  &:has(> :nth-child(1):last-child) { --subgrid-rows: 1; }
  &:has(> :nth-child(2):last-child) { --subgrid-rows: 2; }
  &:has(> :nth-child(3):last-child) { --subgrid-rows: 3; }
  &:has(> :nth-child(4):last-child) { --subgrid-rows: 4; }
  &:has(> :nth-child(5):last-child) { --subgrid-rows: 5; }
  /* etc. */

  > * {
    display: grid;
    gap: var(--subgrid-gap, 0);
    grid-row: auto / span var(--subgrid-rows, 5);
    grid-template-rows: subgrid;
  }
}

The :has() selector checks whether the container’s first, second, third (and so on) child is also its last child, which effectively counts the number of items. For example, the second declaration:

&:has(> :nth-child(2):last-child) { --subgrid-rows: 2; }

…is pretty much saying, “If this is the second subgrid item and it happens to be the last item in the container, then set the number of rows to 2.”

Whether this is too heavy-handed, I don’t know; but I love that we’re able to do it in CSS.

The final missing piece is to declare a container on our children. Let’s give the columns a general class name, .grid-item, that we can override if we need to while setting each one as a container we can query for the sake of updating its layout when it is a certain size (as opposed to responding to the viewport’s size in a media query).

:is(.fluid-grid:not(.subgrid-rows),
    .repeating-grid:not(.subgrid-rows),
    .repeating-flex,
    .fluid-flex) {
  > * {
    container: var(--grid-item-container, grid-item) / inline-size;
  }
}

That’s a wild-looking selector, but the verbosity is certainly kept to a minimum thanks to the :is() pseudo-class, which saves us from having to write this as a larger chain selector. It essentially selects the direct children of the other utilities without leaking into .subgrid-rows and inadvertently selecting its direct children.

The container property is a shorthand that combines container-name and container-type into a single declaration separated by a forward slash (/). The name of the container is set to one of our variables, and the type is always its inline-size (i.e., width in a horizontal writing mode).

The container-type property establishes size containment, and that containment conflicts with grid-template-rows: subgrid because a subgrid item must take its track sizing from its parent grid rather than being sized independently. This means we’re unable to combine it with the grid-template-rows: subgrid value, which is why we needed to write a more complex selector to exclude those instances.
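With that container in place, a container query can target it by name. This is only a sketch; the .card selector and the 30ch threshold are hypothetical, but the grid-item container name comes from the utility itself:

```css
/* When an individual grid item (the container named `grid-item`)
   is at least 30ch wide, switch its card to a side-by-side layout.
   The query responds to the item's width, not the viewport's. */
@container grid-item (min-width: 30ch) {
  .card {
    display: flex;
    gap: 1rem;
  }
}
```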

Demo

Check out the following demo to see how everything comes together.

See the Pen Grid system playground [forked] by utilitybend.

The demo pulls in styles from another pen that contains the full CSS for everything we made together in this article. So, if you replace the .fluid-flex class name on the parent container in the HTML with another one of the layout utilities, the layout updates accordingly, allowing you to compare them.

Those classes are the following:

  • .repeating-grid,
  • .repeating-flex,
  • .fluid-grid,
  • .fluid-flex.

And, of course, you have the option of turning any grid items into grid containers using the optional .subgrid-rows class in combination with the .repeating-grid and .fluid-grid utilities.

Conclusion: Write Once And Repurpose

This was quite a journey, wasn’t it? It might seem like a lot of information, but we made something that we only need to write once and can use practically anywhere we need a certain type of layout using modern CSS approaches. I strongly believe these utilities can not only help you in a bunch of your work but also cut any reliance on CSS frameworks that you may be using simply for their layout configurations.

This is a combination of many techniques I’ve seen, one of them being a presentation Stephanie Eckles gave at CSS Day 2023. I love it when people handcraft modern CSS solutions for things we used to work around. Stephanie’s demonstration was clean from the start, which is refreshing as so many other areas of web development are becoming ever more complex.

After learning a bunch from CSS Day 2023, I played with Subgrid on my own and published different ideas from my experiments. That’s all it took for me to realize how extensible modern CSS layout approaches are, and it inspired me to create a set of utilities I could rely on, perhaps for a long time.

By no means am I trying to convince you or anyone else that these utilities are perfect and should be used everywhere or even that they’re better than <framework-du-jour>. One thing that I do know for certain is that by experimenting with the ideas we covered in this article, you will get a solid feel of how CSS is capable of making layout work much more convenient and robust than ever.

Create something out of this, and share it in the comments if you’re willing — I’m looking forward to seeing some fresh ideas!

]]>
hello@smashingmagazine.com (Brecht De Ruyte)
<![CDATA[Hidden vs. Disabled In UX]]> https://smashingmagazine.com/2024/05/hidden-vs-disabled-ux/ https://smashingmagazine.com/2024/05/hidden-vs-disabled-ux/ Tue, 21 May 2024 08:20:00 GMT Both hiding and disabling features can be utterly confusing to users. And for both, we need very, very good reasons. Let’s take a closer look at what we need to consider when it comes to hiding and disabling — and possible alternatives that help enhance the UX.

This article is part of our ongoing series on design patterns. It’s also an upcoming part of the 10h-video library on Smart Interface Design Patterns 🍣 and the upcoming live UX training. Use code BIRDIE to save 15% off.

Show What’s Needed, Declutter The Rest

You’ve probably been there before: Should you hide or disable a feature? When we hide a feature, we risk hurting discoverability. When we disable it without any explanation, we risk that users get frustrated. So, what’s the best way to design for those instances when some options might be irrelevant or unavailable to users?

As a rule of thumb, disable if you want the user to know a feature exists but is unavailable. Hide if the value shown is currently irrelevant and can’t be used. But never hide buttons or key filters by default as users expect them to persist.

Unlike hidden features, disabled features can help users learn the UI, e.g., to understand the benefits of an upgrade. So, instead of removing unavailable options or buttons, consider disabling them and allowing the user to “Hide all unavailable options.” Be sure to explain why a feature is disabled and also how to re-enable it.
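As a rough markup sketch of that advice (the button label, hint text, and IDs are made up), aria-disabled keeps the control focusable so assistive technology users can still discover it and hear why it’s unavailable, unlike the disabled attribute, which removes it from the tab order:

```html
<button aria-disabled="true" aria-describedby="export-hint">
  Export report
</button>
<p id="export-hint">
  Exporting is part of the Pro plan. Upgrade to enable this feature.
</p>
```

Note that aria-disabled only communicates the state; the click handler still needs to ignore activation.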

Another thing to watch out for: When we allow users to switch between showing and hiding a feature, we also need to ensure the switch doesn’t cause any layout shifts.

For both hiding and disabling, we need very thorough considerations of available alternatives, e.g., enabled buttons, read-only state, better empty states, hide/reveal accordions, error messages, and customization. We need to show what’s needed and de-clutter the rest.

Whenever possible, I try to keep buttons and features in their default state — enabled, accessible, and legible. When a user interacts with that feature, we can explain why they can’t use it, how to enable it, and how to keep it enabled. Possible exceptions are confirmation codes and loading/processing states.

Hiding vs. Disabling Roadmap

As Sam Solomon suggests, if you’re unsure whether hiding or disabling is the best option for your use case, ask yourself the following question: “Will a given user ever be able to interact with this element?” Depending on your answer, follow the steps below.

✅ Yes

Disable it (as disabled buttons or read-only state).
↳ For temporary restrictions or filter incompatibility.
↳ When a value or status is relevant but not editable.
↳ When an action isn’t available yet (e.g., “Export in progress...”).

🚫 No

Hide it (remove from a toolbar, collapse in accordion).
↳ E.g., due to permissions, access controls, safety, and security.
↳ For inaccessible features: e.g., admin buttons, overrides.
↳ Hide such controls by default and reveal them once a condition is met.

Key Takeaways

  • Hiding important features hurts their discoverability.
  • Disabling features is frustrating without an explanation.
  • But some options might be irrelevant/unavailable to users.
  • Users might expect a feature to exist but won’t find it.
  • We need to show what’s needed and de-clutter the rest.
  • Avoid disruptive layout shifts as you show and hide features.
  • Don’t remove unavailable options or buttons automatically.
  • Instead, disable them and allow users to “Hide all unavailable options.”
  • Allow users to hide sections with a lot of disabled functionality.
  • Explain why a feature is disabled and how to re-enable it.

Hidden vs. Disabled In Design Systems

The design systems below provide useful real-world examples of how products design their hidden and disabled states.

Useful Resources

Meet Smart Interface Design Patterns

If you are interested in similar insights around UX, take a look at Smart Interface Design Patterns, our 10h-video course with 100s of practical examples from real-life projects — with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview.

Meet Smart Interface Design Patterns, our video course on interface design & UX.

100 design patterns & real-life examples.
10h-video course + live UX training. Free preview.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Building A User Segmentation Matrix To Foster Cross-Org Alignment]]> https://smashingmagazine.com/2024/05/building-user-segmentation-matrix-foster-cross-org-alignment/ https://smashingmagazine.com/2024/05/building-user-segmentation-matrix-foster-cross-org-alignment/ Fri, 17 May 2024 17:00:00 GMT Do you recognize this situation? The marketing and business teams talk about their customers, and each team thinks they have the same understanding of the problem and what needs to be done. Then, they’re including the Product and UX team in the conversation around how to best serve a particular customer group and where to invest in development and marketing efforts. They’ve done their initial ideation and are trying to prioritize, but this turns into a long discussion with the different teams favoring different areas to focus on. Suddenly, an executive highlights that instead of this customer segment, there should be a much higher focus on an entirely different segment — and the whole discussion starts again.

This situation often arises when there is no joined-up understanding of the different customer segments a company is serving historically and strategically. And there is no shared understanding beyond using the same high-level terms. To reach this understanding, you need to dig deeper into segment definitions, goals, pain points, and jobs-to-be-done (JTBD) so as to enable the organization to make evidence-based decisions instead of having to rely on top-down prioritization.

The hardest part about doing the right thing for your users or customers (please note I’m aware these terms aren’t technically the same, but I’m using them interchangeably in this article so as to be useful to a wider audience) often starts inside your own company: getting different teams with diverging goals and priorities to agree on where to focus and why.

But how do you get there — thinking user-first AND ensuring teams are aligned and have a shared mental model of primary and secondary customer segments?

Personas vs Segments

To explore that further, let’s take a brief look at the most commonly applied techniques to better understand customers and communicate this knowledge within organizations.

Two frequently employed tools are user personas and user segmentation.

Product/UX (or non-demographic) personas aim to represent the characteristics and needs of a certain type of customer, as well as their motivations and experience. The aim is to illustrate an ideal customer and allow teams to empathize and solve different use cases. Marketing (or demographic) personas, on the other hand, traditionally focus on age, socio-demographics, education, and geography but usually don’t include needs, motivations, or other contexts. So they’re good for targeting but not great for identifying new potential solutions or helping teams prioritize.

In contrast to personas, user segments illustrate groups of customers with shared needs, characteristics, and actions. They are relatively high-level classifications, deliberately looking at a whole group of needs without telling a detailed story. The aim is to gain a broader overview of the wider market’s wants and needs.

Tony Ulwick, creator of the “jobs-to-be-done” framework, for example, creates outcome-based segmentations, which are quite similar to what this article is proposing. Other types of segmentations include geographic, psychographic, demographic, or needs-based segmentations. What all segmentations, including the user segmentation matrix, have in common is that the segments are different from each other but don‘t need to be mutually exclusive.

As Simon Penny points out, personas and segments are tools for different purposes. While customer segments help us understand a marketplace or customer base, personas help us to understand more about the lived experience of a particular group of customers within that marketplace.

Both personas and segmentations have their applications, but this article argues that using a matrix will help you prioritize between the different segments. In addition, the key aspect here is the co-creation process that fosters understanding across departments and allows for more transparent decision-making. Instead of focusing only on the outcome, the process of getting there is what matters for alignment and collaboration across teams. Let’s dig deeper into how to achieve that.

User Segmentation Matrix: 101

At its core, the idea of the user segmentation matrix is meant to create a shared mental model across teams and departments of an organization to enable better decision-making and collaboration.

And it does that by visualizing the relevance and differences between a company’s customer segments. Crucially, input into the matrix comes from across teams as the process of co-creation plays an essential part in getting to a shared understanding of the different segments and their relevance to the overall business challenge.

Additionally, this kind of matrix follows the principle of “just enough, not too much” to create meaning without going too deep into details or leading to confusion. It is about pulling together key elements from existing tools and methods, such as User Journeys or Jobs-to-be-done, and visualizing them in one place.

For a high-level first overview, see the matrix scaffolding below.

Case Study: Getting To A Shared Mental Model Across Teams

Let’s look at the problem through a case study and see how building a user segmentation matrix helped a global data products organization gain a much clearer view of its customers and priorities.

Here is some context. The organization was partly driven by NGO principles like societal impact and partly by economic concerns like revenue and efficiencies. Its primary source of revenue was raw data and data products, and it was operating in a B2B setting. Despite operating for several decades already, its maturity level in terms of user experience and product knowledge was low, while the amount of different data outputs and services was high, with a whole bouquet of bespoke solutions for individual clients. The number of bespoke solutions that had to be maintained had grown organically over time, surpassed the “featuritis” stage, and become utterly unsustainable.

And you probably guessed it: The business focus had traditionally been “What can we offer and sell?” instead of “What are our customers trying to solve?”

That means there were essentially two problems to figure out:

  1. Help executives and department leaders from Marketing through Sales, Business, and Data Science see the value of customer-first product thinking.
  2. Establish a shared mental model of the key customer segments to start prioritizing with focus and reduce the completely overgrown service offering.

For full disclosure, here’s a bit about my role in this context: I was there in a fractional product leader role at first, after running a discovery workshop, which then developed into product strategy work and eventually a full evaluation of the product portfolio according to user & business value.

Approach

So how did we get to that outcome? Basically, we spent an afternoon filling out a table with different customer segments, presented it to a couple of stakeholders, and everyone was happy — THE END. You can stop reading…

Or not, because from just a few initial conversations and trying to find out if there were any existing personas, user insights, or other customer data, it became clear that there was no shared mental model of the organization’s customer segments.

At the same time, the Business and Account management teams, especially, had a lot of contact with new and existing customers and knew the market and competition well. And the Marketing department had started on personas. However, they were not widely used and weren’t able to act as that shared mental model across different departments.

So, instead of thinking customer-first the organization was operating “inside-out first,” based on the services they offered. With the user segmentation matrix, we wanted to change this perspective and align all teams around one shared canvas to create transparency around user and business priorities.

But How To Proceed Quickly While Taking People Along On The Journey?

Here’s the approach we took:

1. Gather All Existing Research

First, we gathered all user insights, customer feedback, and data from different parts of the organization and mapped them out on a big board (see below). Initially, we really tried to map out all existing documentation, including links to in-house documents and all previous attempts at separating different user groups, analytics data, revenue figures, and so on.

The key here was to speak to people in different departments to understand how they were currently thinking about their customers and to include the terms and documentation they thought most relevant without giving them a predefined framework. We used the dimensions of the matrix as a conversation guide, e.g., asking about their definitions for key user groups and what makes them distinctly different from others.

2. Start The Draft Scaffolding

Secondly, we created the draft matrix with assumed segments and some core elements that have proven useful in different UX techniques.

In this step, we started to make sense of all the information we had collected and gave the segments “draft labels” and “draft definitions” based on input from the teams, but creating this first draft version within the small working group. The aim was to reduce complexity, settle on simple labels, and introduce primary vs secondary groups based on the input we received.

We then made sure to run this summarized draft version past the stakeholders for feedback and amends, always calling out the DRAFT status to ensure we had buy-in across teams before removing that label. In addition to interviews, we also provided direct access to the workboard for stakeholders to contribute asynchronously and in their own time and to give them the option to discuss with their own teams.

3. Refine

In the next step, we went through several rounds of “joint sense-making” with stakeholders from across different departments. At this stage, we started coloring in the scaffolding version of the matrix with more and more detail. We also asked stakeholders to review the matrix as a whole and comment on it to make sure the different business areas were on board and to see the different priorities between, e.g., primary and secondary user groups due to segment size, pain points, or revenue numbers.

4. Prompt

We then prompted specifically for insights around segment definitions, pain points, goals, jobs-to-be-done, and defining differences from other segments. Once the different labels and the sorting into primary versus secondary groups were clear, we tried to make sure that we had similar types of information per segment so that it would be easy to compare different aspects across the matrix.

5. Communicate

Finally, we made sure the core structure reached different levels of leadership. While we made sure to include senior stakeholders in the process throughout, this step was essential prior to circulating the matrix widely across the organization.

However, due to the previous steps we had gone through, at this point we were able to assure senior leadership that their teams had contributed and reviewed several times, so getting that final alignment was easy.

We did this in a team of two external consultants and three in-house colleagues, who conducted the interviews and information gathering exercises in tandem with us. Due to the size and global nature of the organization and various different time zones to manage, it took around 3 weeks of effort, but 3 months in time due to summer holidays and alignment activities. So we did this next to other work, which allowed us to be deeply plugged into the organization and avoid blind spots due to having both internal and external perspectives.

Building on in-house advocates with deep organizational knowledge and subject-matter expertise was a key factor and helped bring the organization along much better than purely external consultants could have done.

User Segmentation Matrix: Key Ingredients

So, what are the dimensions we included in this mapping out of primary and secondary user segments?

The dimensions we used were the following:

  1. Segment definition
    Who is this group?
    Define it in a simple, straightforward way so everyone understands — NO acronyms or abbreviations. Further information to include that’s useful if you have it: the size of the segment and associated revenue.
  2. Their main goals
    What are their main goals?
    Thinking outside-in and from this user group's perspective, these would be at a higher level than the specific JTBD field: big picture and longer term.
  3. What are their “Jobs-to-be-done”?
    Define the key things this group needs in order to get their own work done (whether that’s currently available in your service or not; if you don’t know this, it’s time for some discovery). Please note this is not a full JTBD mapping, but instead seeks to call out exemplary practical tasks.
  4. How are they different from other segments?
    Segments should be clearly different in their needs. If they’re too similar, they might not be a separate group.
  5. Main pain points
    What are the pain points for each segment? What issues are they currently experiencing with your service/product? Note the recurring themes.
  6. Key contacts in the organization
    Who are the best people holding knowledge about this user segment?
    Usually, these would be the interview partners who contributed to the matrix, and it helps to not worry too much about ownership or levels here; it could be from any department, and often, the Business or Product org are good starting points.

This is an example of a user segmentation matrix:

Outcomes & Learning

What we found in this work is that seeing all user segments mapped out next to each other helped focus the conversation and create a shared mental model that switched the organization’s perspective to outside-in and customer-first.

Establishing the different user segment names and defining primary versus secondary segments created transparency, focus, and a shared understanding of priorities.

Building this matrix based on stakeholder interviews and existing user insights while keeping the labeling in DRAFT mode, we encouraged feedback and amends and helped everyone feel part of the process. So, rather than being a one-time set visualization, the key to creating value with this matrix is to encourage conversation and feedback loops between teams and departments.

In our case, we made sure that every stakeholder (at different levels within the organization, including several people from the executive team) had seen this matrix at least twice and had the chance to input. Once we then got to the final version, we were sure that we had an agreement on the terminology, issues, and priorities.

Below is the real case study example (with anonymized inputs):

Takeaways And What To Watch Out For

So what did this approach help us achieve?

  1. It created transparency and helped the Sales and Business teams understand how their asks would roughly be prioritized — seeing the other customer segments in comparison (especially knowing the difference between primary vs secondary segments).
  2. It shifted the thinking to customer-first by providing an overview for the executive team (and everyone else) to start thinking about customers rather than business units and see new opportunities more clearly.
  3. It highlighted the need to gather more customer insights and better performance data, such as revenue per segment, more detailed user tracking, and so on.

In terms of the challenges we faced when conducting and planning this work, there are a few things to watch out for:

We found that due to the size and global nature of the organization, it took several rounds of feedback to align with all stakeholders on the draft versions. So, the larger the size of your organization, the more buffer time to include (or the ability to change interview partners at short notice).

If you’re planning to do this in a startup or mid-sized organization, especially if they’ve got the relevant information available, you might need far less time, although it will still make sense to carefully select the contributors.

Having in-house advocates who actively contributed to the work and conducted interviews was a real benefit for alignment and getting buy-in across the organization, especially when things started getting political.

Gathering information from Marketing, Product, Business, Sales and Leadership and sticking with their terms and definitions initially was crucial, so everyone felt their inputs were heard and saw it reflected, even if amended, in the overall matrix.

And finally, a challenge that’s not to be underestimated is the selection of those asked to input — where it’s a tightrope walk between speed and inclusion.

We found that a “snowball system” worked well, where we initially worked with the C-level sponsor to define the crucial counterparts at the leadership level and had them name 3-4 leads looking after different parts of their organization. These leaders were asked for their input and their team’s input in interviews and through asynchronous access to the joint workboard.

What’s In It For You?

To summarize, the key benefits of creating a user segmentation matrix in your organization are the following:

  • Thinking outside-in and user-first.
    Instead of thinking this is what you offer, your organization starts to think about solving real customer problems — the matrix is your GPS view of your market (but like any GPS system, don’t forget to update it occasionally).
  • Clarity and a shared mental model.
    Everyone is starting to use the same language, and there’s more clarity about what you offer per customer segment. So, from Sales through to Business and Product, you’re speaking to users and their needs instead of talking about products and services (or even worse, your in-house org structure). Shared clarity drastically reduces meeting and decision time and allows you to do more impactful work.
  • Focus, and more show than tell.
    Having a matrix helps differentiate between primary, secondary, and other customer segments and visualizes these differences for everyone.

When Not To Use It

If you already have a clearly defined set of customer segments that your organization is in agreement on and working towards — good for you; you won’t need this and can rely on your existing data.

Another case where you will likely not need this full overview is when you’re dealing with a very specific customer segment, and there is good alignment between the teams serving this group in terms of focus, priorities, and goals.

Organizations that will see the highest value in this exercise are those who are not yet thinking outside-in and customer-first and who still have a traditional approach, starting from their own services and dealing with conflicting priorities between departments.

Next Steps

And now? You’ve got your beautiful and fully aligned customer segmentation matrix ready. What’s next? In all honesty, this work is never done, and this is just the beginning.

If you have been struggling with creating an outside-in perspective in your organization, the key is to make sure that it gets communicated far and wide.

For example, make sure to get your executive sponsors to talk about it in their rounds, do a road show, or hold open office hours where you can present it to anyone interested and give them a chance to ask questions. Or even better, present it at the next company all-hands, with the suggestion to start building up an insights library per customer segment.

If this was really just the starting point to becoming more product-led, then the next logical step is to assess and evaluate the current product portfolio. The aim is to get clarity around which services or products are relevant for which customers. Especially in product portfolios plagued by “featuritis,” it makes sense to do a full audit, evaluate both user and business value, and clean out your product closet.

If you’ve seen gaps and blind spots in your matrix, another next step would be to do some deep dives, customer interviews, and discovery work to fill those. And as you continue on that journey towards more customer-centricity, other tools from the UX and product tool kit, like mapping out user journeys and establishing a good tracking system and KPIs, will be helpful so you can start measuring customer satisfaction and continue to test and learn.

Like a good map, the segmentation matrix helps you navigate and creates a shared understanding across departments. And this is its primary purpose: getting clarity and focus across teams to enable better decision-making. The process of co-creating a living document that visualizes customer segments is at least as important here as the final outcome.

]]>
hello@smashingmagazine.com (Talke Hoppmann-Walton)
<![CDATA[Beyond CSS Media Queries]]> https://smashingmagazine.com/2024/05/beyond-css-media-queries/ https://smashingmagazine.com/2024/05/beyond-css-media-queries/ Thu, 16 May 2024 15:00:00 GMT Media queries have been around almost as long as CSS itself — and with no flex, no grid, no responsive units, and no math functions, media queries were the most pragmatic choice available to make a somewhat responsive website.

In the early 2010s, with the proliferation of mobile devices and the timely publication of Ethan Marcotte’s classic article “Responsive Web Design”, media queries became much needed for crafting layouts that could morph across screens and devices. Even when the CSS Flexbox and Grid specifications rolled out, media queries for resizing never left.

While data on the actual usage of media queries is elusive, the fact that they have grown over time with additional features that go well beyond the viewport and into things like user preferences continues to make them a bellwether ingredient for responsive design.

Today, there are more options and tools in CSS for establishing layouts that allow page elements to adapt to many different conditions besides the size of the viewport. Some are more widely used — Flexbox and Grid for certain — but also things like responsive length units and, most notably, container queries, a concept we will come back to in a bit.
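As a quick sketch of what those breakpoint-free options look like (the selectors and values here are illustrative, not from any particular site), fluid units and math functions can handle a lot of resizing entirely on their own:

```css
/* Fluid type: scales with the viewport but stays clamped
   between a minimum and a maximum size. */
h1 {
  font-size: clamp(1.5rem, 1rem + 2.5vw, 3rem);
}

/* A grid that wraps its columns as space runs out,
   with no media query in sight. */
.gallery {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(min(100%, 20rem), 1fr));
  gap: 1rem;
}
```

The inner `min(100%, 20rem)` guards against horizontal overflow on screens narrower than the preferred column width.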

But media queries are still often the de facto tool that developers reach for. Maybe it’s muscle memory, inconsistent browser support, or that we’re stuck in our ways, but adoption of the modern approaches we have for responsive interfaces seems slow to take off.

To be clear, I am all for media queries. They play a significant role in the work we do above and beyond watching the viewport size to make better and more accessible user experiences based on a user’s OS preferences, the type of input device they’re using, and more.

But should media queries continue to be the gold standard for responsive layouts? As always, it depends, but it is undeniable that media queries have evolved toward accessibility solutions, making space for other CSS features to take responsibility for responsiveness.

The Problem With Media Queries

Media queries seemed like a great solution for most responsive-related problems, but as the web has grown towards bigger and more complex layouts, their limits are more apparent than ever.

Problem #1: They Are Viewport-Focused

When writing media query breakpoints where we want the layout to adapt, we only have access to the viewport’s properties, like width or orientation. Sometimes, all we need is to tweak a font size, and the viewport is our best bud for that, but most times, context is important.

Components on a page share space with others and are positioned relative to each other according to normal document flow. If all we have access to is the viewport width, knowing exactly where to establish a particular breakpoint becomes a task of compromises where some components will respond well to the adapted layout while others will need additional adjustments at that specific breakpoint.

So, there we are, resizing our browser and looking for the correct breakpoint where our content becomes too squished.

The following example probably has the worst CSS you will see in a while, but it helps to understand one of the problems with media queries.

That same layout on mobile simply does not work. Tables have their own set of responsive challenges as it is, and while there is no shortage of solutions, we may be able to consider another layout using modern techniques that are way less engineered.

We are doing much more than simply changing the width or height of elements! Border colors, element visibility, and flex directions need to be changed, and it can only be done through a media query, right? Well, even in cases where we have to completely switch a layout depending on the viewport size, we can often achieve it better with container queries.

Again, Problem #1 of media queries is that they only consider the viewport size when making decisions and are completely ignorant of an element’s surrounding context.

That may not be a big concern if all we’re talking about is a series of elements that are allowed to take up the full page width because the full page width is very much related to the viewport size, making media queries a perfectly fine choice for making adjustments.

See the Pen Responsive Cards Using Media Queries [forked] by Monknow.

But say we want to display those same elements as part of a multi-column layout where they are included in a narrow column as an <aside> next to a larger column containing a <main> element. Now we’re in trouble.

A more traditional solution is to write a series of media queries depending on where the element is used and where its content breaks. But media queries completely miss the relationship between the <main> and <aside> elements, which is a big deal since the size of one influences the size of the other according to normal document flow.

See the Pen Responsive Cards Using Media Queries Inside Container [forked] by Monknow.

The .cards element is in the context of the <aside> element and is squished as a result of being in a narrow column. What would be great is to change the layout of each .card component when the .cards element that contains them reaches a certain size rather than when the viewport is a certain size.

That’s where container queries come into play, allowing us to conditionally apply styles based on an element’s size. We register an element as a “container,” which, in our current example, is the unordered list containing the series of .card components. We’re essentially giving the parent selector a great deal of power to influence the current layout.

.cards {
  container-name: cards;
}

Container queries monitor an element by its size, and we need to tell the browser exactly how to interpret that size by giving .cards a container-type, which can be the container’s size (i.e., in the block and inline directions) or its inline-size (i.e., the total length in the inline direction). There’s a normal value that removes sizing considerations but allows the element to be queried by its styles.

.cards {
  container-name: cards;
  container-type: inline-size;
}

We can simplify things down a bit using the container shorthand property.

.cards {
  container: cards / inline-size;
}

Now, we can adjust the layout of the .card components when the .cards container is a certain inline size. Container queries use the same syntax as media queries but use the @container at-rule instead of @media.

.cards {
  container: cards / inline-size;
}

@container cards (width < 700px) {
  .cards li {
    flex-flow: column;
  }
}

Now, each .card is a flexible container that flows in the column direction when the width of the .cards container is less than 700px. Any wider than that, and the cards are laid out in the row direction instead.

See the Pen Responsive Cards Using Container Queries [forked] by Monknow.

Style queries are a cousin to container queries in the sense that we can query the container’s styles and conditionally apply style changes to its children, say, changing a child element’s color to white when the container’s background-color is set to a dark color. We’re still in the early days, and browser support for style queries is still evolving.
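At the time of writing, implementations limit style queries to CSS custom properties, so a rough sketch (the class names and the `--theme` property are hypothetical, made up for illustration) might look like this:

```css
/* Set a custom property on the container. */
.card-wrapper {
  --theme: dark;
}

/* Applies when the nearest container's --theme computes to "dark". */
@container style(--theme: dark) {
  .card {
    color: white;
    background-color: #333;
  }
}
```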

I hope this gives you a sense of how amazing it is that we have this context-aware way of establishing responsive layouts. Containers are a completely new idea in CSS (although we’ve used the term synonymously with “parent element” for ages), and an elegant one at that.

So, Are Media Queries Useless?

NO! While media queries have been the go-to solution for responsive design, their limitations are glaringly obvious now that we have more robust tools in CSS designed to address them.

That doesn’t make media queries obsolete — merely a different tool that’s part of a larger toolset for building responsive interfaces. Besides, media queries still address vital accessibility concerns thanks to their ability to recognize a user’s visual and motion preferences — among other settings — at the operating system level.
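For instance, preference-based media queries like these (the values are illustrative) respond to operating-system-level settings rather than the viewport:

```css
/* Tone down animation for users who request reduced motion. */
@media (prefers-reduced-motion: reduce) {
  * {
    animation-duration: 0.01ms !important;
    transition-duration: 0.01ms !important;
  }
}

/* Swap to a dark palette when the OS is in dark mode. */
@media (prefers-color-scheme: dark) {
  body {
    background-color: #111;
    color: #eee;
  }
}

/* Enlarge tap targets for coarse pointers, e.g., touchscreens. */
@media (pointer: coarse) {
  button {
    min-height: 44px;
  }
}
```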

So, yes, keep using media queries! But maybe reach for them sparingly since CSS has a lot more to offer us.

]]>
hello@smashingmagazine.com (Juan Diego Rodríguez)
<![CDATA[Transforming The Relationship Between Designers And Developers]]> https://smashingmagazine.com/2024/05/transforming-relationship-between-designers-developers/ https://smashingmagazine.com/2024/05/transforming-relationship-between-designers-developers/ Wed, 15 May 2024 10:00:00 GMT In the forever-shifting landscape of design and technology, some rare artifacts surprisingly never change.

Throughout the last two decades, we have witnessed the astonishing evolution of creative tooling, methodologies, and working practices. However, after all of this advancement, we still have clients asking to make the logo bigger, designers despairing as their creations are built with not quite the exact amount of bottom-margin, and developers going crazy about last-minute design changes.

Quite frankly, I’ve had enough. So join me in a parenting-style-hands-on-hips pose of disdain, roll up your sleeves, and let’s fix this mess together, once and for all!

Why Is This Still An Important Topic?

Ultimately, the quality of your designer-developer relations will have a vital impact on the quality of your product. In turn, this will impact customer experience (be it internal or external).

Customer experience is everything, and these days the smallest of chinks can create an even bigger dent in the business itself.

It may not even be an obvious or noticeable issue. Over time, those moments of misunderstanding in your team could result in a series of micro-inconsistencies that are felt by the customer yet sneak underneath the radar of quality assurance.

Perhaps you’ll catch these things during user research, but in this scenario, you’d be playing catch-up instead of advancing forward.

To cut a long story short, it could be slowing you down in the race against your competitors and costing you more money in the process.

So, with that in mind, let’s get stuck into the techniques that can steer us in the right direction and inspire everyone on the team to deliver the slickest of user experiences together.

Working Culture

In my opinion, process improvements may only get you so far. The working culture in your organization will heavily influence the output of your digital teams. Whilst the subject of culture is incredibly vast, there are a few key elements that I think are hugely important to foster a greater level of collaboration between design and developers:

  1. Alignment on the goals of the project and/or business.
  2. Encouraging a more “robotic” attitude to feedback. Of course, you can be passionate about what you do, but when it comes to feedback, I always try to encourage people to respond with logic before emotion.
  3. Communication: Ultimately, you have to trust people to be proactive. You can have a great process, but the gaps and edge cases will still slip through the net unless you have people who are open and ready to prod each other when issues arise.

This may seem like common sense to many of us, but many organizations (big ones, too!) still operate without this crucial foundation to motivate and support their teams.

However, it is essential to be honest with yourself and consider the role you play within your team. Even if you think you have already fulfilled these criteria, I’d encourage you to investigate this further to ensure everyone feels the same. It can be as simple as having a one-to-one discussion with each member of the team, or you could even send out short questionnaires to gauge your workplace’s suitability for optimal designer-developer collaboration.

You might be surprised by what you hear back from people. Treat any criticism as gold dust. It’s an opportunity to improve.

Once you’ve created this foundation within your organization, it’s important to maintain and protect it. Keep reviewing it regularly, and make sure that anyone joining the team will be able to fit in. This leads us nicely on to…

Hiring

If you’re scaling your team, maintaining quality can always be a challenge as you grow. Despite the challenges, it’s important to continue hiring people who have a positive and empathetic attitude to ensure you can maintain this foundation within your workplace.

In order to gauge this, I like to include the following interview questions.

Developer

Begin by showing a sample screenshot of your product or a specially crafted concept design:

“You’ve just built X, and the designer wants to change Y. How do you respond?”

Follow up:

“The designer and PM reject your suggestion because of ___. How do you respond?”

Designer

Begin by showing a sample screenshot of your product or a specially crafted concept design:

“The developer says, ‘We can’t build X quickly; can we do Y instead to deliver faster?’ How do you react?”

Follow up:

“The product owner says they are then disappointed with the design. How do you react?”

I recommend asking these kinds of questions in the middle or towards the end of the interview so you have already built rapport. If the candidate is at ease, they are more likely to let slip any negative attitudes that lurk beneath the surface.

I’ve asked interview questions like these to many designers and developers, and every so often, they will openly criticize and stereotype each other with a smile on their faces. I’ve even seen some candidates become visibly frustrated as they recount real-life scenarios from their own experiences.

How you score this is more difficult. Ultimately, skills and work ethic are the most important things, so concerning answers to these questions may not necessarily lead to an outright rejection but perhaps flag something you may need to work on with the candidate if they do later join your team.

Hopefully, in most cases, the stronger candidates you speak to will naturally provide balanced and conscientious responses to these tests of character!

Process

We talked a bit about hiring, but I’d imagine many people who need this article are more likely to be in the midst of a designer-developer flame-war as opposed to trying to prevent one in the future!

So, what can we do process-wise to keep things flowing?

Provided that there is plenty of early and ongoing collaboration in your workflow, there is no absolute right or wrong answer. It’s about what fits your team and your product best. Ultimately, you need to discard the silos of the past and start working together as a team early on.

  • Developers would typically be the last people to get involved, but they should be involved from the start to guide technical feasibility and provide their own ideas.
  • Designers are often more involved in the beginning but can often drift away before the end of a release. However, we need to keep them onboard and get them to play with the product so we can keep making it even better!

It’s important to be open-minded about the solutions. Alas, I have even worked in organizations where different teams have different approaches. Bearing that in mind, here are some good places to start in terms of exploring what might work for your workplace.

Scoping

When new features are on the horizon, getting everyone involved in these discussions is crucial.

Sometimes, it can be difficult for developers to detach from the current sprint and think ahead, but it’s important that we have their guidance, and it is ultimately going to save them (and the whole team) time further down the line.

Scoping can appear in many different forms across the spectrum of agile methodologies out there. It’s not my intention to cover any of these and discuss all the positives and negatives of each (that’d make this into a book, and not one that anyone would like to read!); in fact, I am deliberately not mentioning any of them. This article is ultimately about people, and the people we need at this early stage are not just the stakeholders and a product manager. We need designers and developers shaping these early discussions for the following reasons:

  • They will bring their own ideas.
  • They will visualize the idea very quickly and assess its feasibility.
  • They will connect the concept with other parts of the domain.
  • They will also (albeit rarely!) prevent an impossible dream or daft idea from growing on the face of the business like a festering wart.

Another Perspective On Scoping: SquaredUp

In order to take a deeper dive into the subject of scoping, I spoke to Dave Clarke, product manager at SquaredUp.

“Developers are looped in during the design stage, and we’ll test interactive mockups with the engineering team as well as other internal stakeholders before going out to external audiences for feedback. This means that when a feature is ready to be built by an engineer, they’re already really familiar with what we’re building”

— Dave Clarke

Back in late 2018, I met the SquaredUp team at an open day in their UK hub in Maidenhead. I was impressed by the quality of their product, considering it was a very technical audience. It looked beautiful, and you could tell that they went the extra mile in terms of collaboration. Not only do they involve developers in the design phase, but they get them involved even earlier than that.

“We send engineers to events so they can talk to customers and hear their pain points first-hand. This helps foster a real appreciation and understanding of the ‘user’ and ensures designers/developers/PMs are all coming at a problem with a solid understanding of the issue from the user’s perspective.”

— Dave Clarke

This brings us back again to that all-important foundation. Alignment on goals is key, and what better way to reinforce that message than by getting everyone involved in hearing directly from the end users of your product?

Design Presentations

Once the wheels are in motion on the big new thing, many teams like to have the designer present their work for forthcoming iteration(s) to the team. This allows everyone to have a say and get excited about what is coming up.

Once again, there are many organizations that would simply agree on the design between stakeholders and designers alone. From the developer perspective, this is incredibly frustrating. Not only will it result in a lower-quality output, but it will also make developers feel as though their opinion doesn’t matter.

With my developer hat on, though, I absolutely love these kinds of sessions. They allow us to question the details, suggest alternatives, and consider how we slice stuff up into smaller bundles of value that can be released faster.

With my design hat on, it caters to my need to think about the bigger picture. It’s not always practical to design iteratively, but in these sessions, we can all get together and appreciate the end-to-end experience.

Typically, we allow the designer time to talk through everything, allowing for questions throughout, and give everyone a chance to dive in and bring their ideas to the table. However, do what works for your team. If you have a designer who wants to present, take all questions at the end and then make changes afterward, do that. If you have one who likes handling lots of questions throughout and makes changes live, go with that.

Perhaps even give it your own identity, too. In my current workplace, one of the squads calls it Design Time and in our squad, we decided to open the name to a poll, and thus (with one cheeky addition to the poll from a colleague) the Itty Bitty Refinement Committee was born!

Managing Conflict

However, these kinds of sessions do have the potential to get sidetracked. So, as with any meeting, it is essential to have a clear agenda and ensure that good facilitation prevents things from going off-piste. If there are conflicts, I always try to find resolutions by considering where we might find the answers. For example,

  • Can we look at our analytics?
  • Which option is a better fit for our company goals?
  • Could we do an A/B test to see what is more effective?

When people bring ideas to the table, it’s always important to acknowledge them positively and seek further exploration. Sometimes, we can agree on an approach quickly, and on other occasions, we can defer the discussion to a later refinement session.

Sharing Responsibilities

In my opinion, there is also a gray area between designers and developers, where it often isn’t clear who holds responsibility. This is a big risk because, in many organizations, essential aspects can be completely forgotten.

From my past experience, there are two key areas where I see this happening often. So this may not be exhaustive, but I encourage you to think about these and then ask yourself: Is there anything else — specific to my organization — that could have fallen into this void between our designers and developers?

See if you can identify these risks and agree on a way of working together to ensure they are tackled effectively.

Animations

Nowadays, many dev teams are working on JavaScript-heavy applications, and most of us will have the power of CSS transitions at our disposal. Yet, I frequently land on new projects where they aren’t being leveraged to enhance the customer experience.

Animations can be time-consuming to create in many design tools. In particular, I often find that loading states are fiddly to prototype.

In my recent work at Floww, I collaborated with designer Hidemi Wenn on an animated progress bar. For the first version, Hidemi had begun with an idea crafted in After Effects. I replicated this in a CodePen and suggested adding some bubbles to highlight the changes in the numbers.

Note: Of course, CodePen is just one example of this. There are many other tools out there, such as Storybook, that can also allow us to build and collaborate on ideas quickly.

See the Pen Bar Chart of Destiny [forked] by Chris Day.

This allowed Hidemi to see her creation working in the browser early — before it had been fully implemented into the product — and we then collaborated further to make more enhancements.

“Working together like this was awesome! We could easily bounce around ideas, and tweaking the animation was a breeze.”

— Hidemi Wenn, Product Designer at Floww

Pairing is often between developers, but why not jump on a call and pair with a designer whilst you write the CSS? This gives them full transparency, and you can collaborate together.
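To give a flavor of what such a pairing session might produce, here is a minimal sketch of a progress bar transition; the class names and colors are hypothetical and not taken from the actual Floww implementation:

```css
/* The track: a rounded container that clips the bar inside it. */
.progress {
  width: 100%;
  height: 0.75rem;
  border-radius: 999px;
  background-color: #eee;
  overflow: hidden;
}

/* The bar: its width is set elsewhere (e.g., via inline style or JS),
   and the transition animates every change smoothly. */
.progress__bar {
  height: 100%;
  background-color: #4a90d9;
  transition: width 600ms ease-in-out;
}
```

With the transition in place, updating the bar’s width from script produces a smooth fill instead of a jump, which the designer can see and tweak live in the browser.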

Nowadays, we have amazing tools at our disposal to collaborate, and yet still, so many designers and developers elect to operate in silos.

Accessibility

One of the first things I do when joining any existing digital project is to spin up Wave (an accessibility testing tool) and subsequently slump into my seat in despair.

Accessibility is something that always suffers as a result of a designer/developer standoff. Some might say it’s the realm of design, while others would argue it’s quite a technical thing and, therefore, lives in dev land. The truth is it is a shared responsibility.

Take something like :focus, for example. Whenever I review code, this is something I always check, and I often discover it’s missing. Ask the developer, and they’ll say, “We didn’t have designs for it.” Well, perhaps ask the designer to create them, just as I’d expect the designer to query an unimplemented state they had designed for.
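As a small example of what those missing designs might translate to (the colors are illustrative), :focus-visible keeps the focus ring for keyboard users without flashing it on every mouse click:

```css
/* A visible, offset focus ring for keyboard navigation.
   Never remove outlines without providing a replacement. */
button:focus-visible,
a:focus-visible {
  outline: 2px solid #4a90d9;
  outline-offset: 2px;
}
```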

We should scrutinize each other’s work and continue to channel our inner robot to respond with logic when it comes to constructive criticism. Keep encouraging everyone to embrace feedback because that is the gold dust that makes our product shine brighter.

During Implementation

Having steered our way together through the implementation of our features, at some point, we begin to approach the time to release our features into the wild. We are on the final stretch, and thus, it’s time for developers to stage a reverse-design presentation!

Whilst mentoring developers on this subject, I always remind them not to take the feedback personally.

Likewise, I ask designers to never hold back. Be persnickety (in a kind way!) and ensure all your concerns are addressed.

It’s only natural for a developer to behave defensively in these scenarios. As a result, designers may hold back on some of the feedback they provide in order to prevent upsetting the developer.

Developers are often very vocal, and if you are tasked with delivering a barrage of design feedback to them, it can appear daunting and make designers fearful of a backlash.

Prevent the silo. Perhaps have a third party, such as the product owner/manager, attend the meetings. They can defuse any situation by referring us all back to the business value.

I’ve also witnessed rare cases where the developer has nodded and agreed with all the feedback and then just hasn’t implemented any of it afterward! So, make sure it’s all captured in whatever project management tools you use so you can follow up on the status. Sometimes, it’s easy to forget to do this when the changes are so small, so often (in my current team), we might create a single ticket on our board to implement all the feedback changes as opposed to creating a work item for each.

Another common issue I’ve found is that I’ve met many designers who don’t actually ever test out the products that they design. For me, they are missing out on the opportunity to further hone their work, and to learn.

If you’re a designer, ensure that you can log in to the app/website. Get a test account from someone, and try to break stuff!

Once all the feedback is in, we can create more work items to give our product those magical finishing touches and ship our masterpiece to the world.

Design Systems

Having mentioned focus states earlier on, you were probably already thinking about design systems before this heading came along! Of course, the design system plays a key role in helping us maintain that consistency and ensuring accessibility concerns are baked into our library of beautiful components.

There are many, many articles about design systems out there already, but here I am going to consider them only in the context of the working relationship.

As the design system encourages reuse, it encourages us to think about other teams in our organization and be more mindful.

If the basic building blocks are covered, we can focus on solving more complex challenges together. I think this is also a really important value to get your teams on board with.

Design systems can also cause friction. Not everyone will get on board with it. Some designers will feel as though it restricts their creativity. Some developers will be frustrated at having to update the design system instead of cracking on with their own features.

In my opinion, these attitudes will not only slow you down but could harm the working culture of your business. Nowadays, I’d say it’s absolutely crucial for any product team (big or small) to have a design system and have the majority of your team buying into it.

I’ve been present at organizations where the design system is neglected, and in these cases, it actually ends up worse than not having one at all. You really need the majority of your team to be committed to it; otherwise, some people will go off-piste and keep reinventing the wheel (probably without those focus states!).

Another Perspective On Design Systems: GOV.UK

The GDS (Government Digital Service) of the UK has built a design system that serves a vast spectrum of different services and tech stacks. An enormous challenge, which is almost certain to be of interest in our quest for knowledge! So, I got in touch with product designer Ed Horsford who has worked on a series of government services that make use of this.

“GDS provides the GOV.UK Prototype Kit, so as a designer, I can create something in the kit, make full use of the functionality of the design system, and point developers towards the prototype.”

— Edward Horsford

Whilst many other organizations are now making use of tools such as Figma’s excellent Dev Mode feature to streamline design handover, this still requires naming conventions to be lined up between the codebase and the Figma component library. What’s impressive about GDS’ approach here is that the provision of their own prototyping tool makes it absolutely clear to developers which components need to be used. However, the availability of a great design system tooling doesn’t always guarantee a smooth outcome, as Ed explains:

“It can be a bit of a mind-shift for developers new to the UK government or using design systems in general — they may default to hand coding the HTML and CSS to match a design, rather than using the components from the design system to match the prototype.”

“If there is a bespoke requirement outside of the design system, then I will always call it out early so I can discuss it with the team.”

— Edward Horsford

Once again, this takes us back to the importance of communication. In a landscape where a design system must be deployed amongst many different teams, it’s up to the designers and developers to scrutinize each other’s work.

It was great to hear that as a designer, Ed was actively looking at the front-end code to assist the developer, ensuring the design system was respected so that all of its many benefits could be embedded into the product.

Crisis Mode

I appreciate that much of the advice in this article requires planning and a fair bit of trial and error. So what do you do if your designers and developers are already engulfed in a mass brawl that needs to be quelled?

In these scenarios, I think it is an ideal moment to pause and simply ask each member of the team: What is our goal? What are we working towards?

If people are angry, in some ways, it’s a good thing because you know they care. People who care should always be open to a bit of a reset. Openly discuss what everyone wants, and you’ll probably be surprised at how aligned people really are; I always go back to this fundamental question and work onwards from there.

Sometimes, we get so tangled up in the details we forget what is truly important.

Apathy

For every angry team, there are probably many more that just don’t give a crap. For me, this is a far worse situation.

Every problem described in this article could be present. The designers make mockups, the developers build them without question, and everyone gets paid. Who needs to question anything? It’s just a job, right?

Can we really fix this?

Well, in my opinion, you are going to need a much deeper dive into company culture to try and revive that team spirit. I have worked at places like this in the past, and it is very challenging to try and implement solutions when the people are just not bought into the vision of the organization.

Whether this is feasible or not depends on your role and the organization itself. I have walked away from situations like this in the past because I didn’t feel as though the organization was willing to change or even be able to acknowledge the problem.

Conclusion

The dynamic between designers and developers is a subject that has always been of great interest to me, as I’ve worked in both roles as well as being an agency owner.

I’m confident that, as the years progress, this will become less of a problem as the world of work continues to gravitate towards greater levels of inclusivity, honesty, and openness. The foundations of great company culture are so crucial to ensuring that designers and developers can unite and take on the world side-by-side on behalf of your organization.

For now, though, in today’s fragmented and divided world, you can gain a true competitive advantage by leveraging the power of a harmonious digital team built on the foundations of your organizational values.

Go smash it!

]]>
hello@smashingmagazine.com (Chris Day)
<![CDATA[Why Designers Aren’t Understood]]> https://smashingmagazine.com/2024/05/designers-business-ux-language/ https://smashingmagazine.com/2024/05/designers-business-ux-language/ Tue, 14 May 2024 12:00:00 GMT As designers, especially in large enterprises, we often might feel misunderstood and underappreciated. It might feel like every single day you have to fight for your users, explain yourself and defend your work. It’s unfair, exhausting, painful and frustrating.

Let’s explore how to present design work, explain design decisions and get stakeholders on your side — and speak the language that other departments understand.

As designers, we might feel slightly frustrated by the language that often dominates business meetings. As Jason Fried has noted, corporate language is filled with metaphors of fighting. Companies “conquer” the market, they “capture” mindshare, they “target” customers, they “destroy” the competition, they want to attract more “eye-balls”, get users “hooked”, and increase “life-time value”.

Designers, on the other hand, don’t speak in such metaphors. We speak of how to “reduce” friction, “improve” consistency, “empower” users, “enable and help” users, “meet” their expectations, “bridge the gap”, “develop empathy”, understand “user needs”, and design an “inclusive” experience.

In many ways, these words are the direct opposite of the metaphors commonly used in corporate environments and business meetings. So it’s no wonder that our beliefs and principles might feel misunderstood and underappreciated. A way to solve this is to be deliberate about the words you choose in the big meeting. In fact, it’s all about speaking the right language.

This article is part of our ongoing series on design patterns. It’s also an upcoming part of the 10h-video library on Smart Interface Design Patterns 🍣 and the upcoming live UX training as well. Use code BIRDIE to save 15% off.

Speaking The Right Language

As designers, we often use design-specific terms, such as consistency, friction and empathy. Yet to many managers, these attributes don’t map to any business objectives at all, often leaving them utterly confused about the actual real-life impact of our UX work.

One way out that changed everything for me is to leave UX vocabulary at the door when entering a business meeting. Instead, I try to explain design work through the lens of the business, often rehearsing and testing the script ahead of time.

When presenting design work in a big meeting, I try to be very deliberate and strategic in the choice of the words I’m using. I won’t be speaking about attracting “eye-balls” or getting users “hooked”. It’s just not me. But I won’t be speaking about reducing “friction” or improving “consistency” either.

Instead, I tell a story.

A story that visualizes how our work helps the business. How the design team has translated business goals into specific design initiatives. How UX can reduce costs. Increase revenue. Grow business. Open new opportunities. New markets. Increase efficiency. Extend reach. Mitigate risk. Amplify word of mouth.

And how we’ll measure all that huge impact of our work.

Typically, it’s broken down into eight sections:

🎯 Goals ← Business targets, KRs we aim to achieve.
💥 Translation ← Design initiatives, iterations, tests.
🕵️ Evidence ← Data from UX research, pain points.
🧠 Ideas ← Prioritized by an impact/effort-matrix.
🕹 Design work ← Flows, features, user journeys.
📈 Design KPIs ← How we’ll measure/report success.
🐑 Shepherding ← Risk management, governance.
🔮 Future ← What we believe are good next steps.

Key Takeaways

🤔 Businesses rarely understand the impact of UX.
🤔 UX language is overloaded with ambiguous terms.
🤔 Business can’t support confusing initiatives.
✅ Leave UX language and UX jargon at the door.
✅ Explain UX work through the lens of business goals.

🚫 Avoid “consistency”, “empathy”, “simplicity”.
🚫 Avoid “cognitive load”, “universal design”.
🚫 Avoid “lean UX”, “agile”, “archetypes”, “JTBD”.
🚫 Avoid “stakeholder management”, “UX validation”.
🚫 Avoid abbreviations: HMW, IxD, PDP, PLP, WCAG.

✅ Explain how you’ll measure success of your work.
✅ Speak of business value, loyalty, abandonment.
✅ Show risk management, compliance, governance.
✅ Refer to cost reduction, efficiency, growth.
✅ Present accessibility as industry-wide best practice.

Next time you walk into a meeting, pay attention to your words. Translate UX terms into a language that other departments understand. It might not take long until you see support coming from everywhere — just because everyone can now clearly see how your work helps them do their work better.

Useful Resources

Meet Smart Interface Design Patterns

If you are interested in similar insights around UX, take a look at Smart Interface Design Patterns, our 10h-video course with 100s of practical examples from real-life projects — with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview.

Meet Smart Interface Design Patterns, our video course on interface design & UX.

100 design patterns & real-life examples.
10h-video course + live UX training. Free preview.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[The Times You Need A Custom @property Instead Of A CSS Variable]]> https://smashingmagazine.com/2024/05/times-need-custom-property-instead-css-variable/ https://smashingmagazine.com/2024/05/times-need-custom-property-instead-css-variable/ Mon, 13 May 2024 08:00:00 GMT We generally use a CSS variable as a placeholder for some value we plan to reuse — to avoid repeating the same value and to easily update that value across the board if it needs to be updated.

:root { 
  --mix: color-mix(in srgb, #8A9B0F, #fff 25%);
}

div {
  box-shadow: 0 0 15px 25px var(--mix);
}

We can register custom properties in CSS using @property. The most common example you’ll likely find demonstrates how @property can animate the colors of a gradient, something we’re unable to do otherwise since a CSS variable is recognized as a string and what we need is a number format that can interpolate between two numeric values. That’s where @property allows us to define not only the variable’s value but its syntax, initial value, and inheritance, just like you’ll find documented in CSS specifications.

For example, here’s how we register a custom property called --circleSize, which is formatted as a percentage value that is set to 10% by default and is not inherited by child elements.

@property --circleSize {
  syntax: "<percentage>";
  inherits: false;
  initial-value: 10%;
}

div { /* red div */
  clip-path: circle(var(--circleSize) at center bottom);
  transition: --circleSize 300ms linear;
}

section:hover div { 
  --circleSize: 125%; 
}

In this example, a circle() function is used to clip the <div> element into — you guessed it — a circle. The size value of the circle()’s radius is set to the registered custom property, --circleSize, which is then independently changed on hover using a transition. The result is something close to Material Design’s ripple effect, and we can do it because we’ve told CSS to treat the custom property as a percentage value rather than a string:

See the Pen CSS @property [forked] by Preethi Sam.

The freedom to define and spec our own CSS properties gives us new animating superpowers that were once only possible with JavaScript, like transitioning the colors of a gradient.
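As a minimal sketch of that idea (the property name, selector, and colors here are my own illustration, not taken from the demo below), registering a color-typed property lets a gradient interpolate on hover:

```css
/* Register a color-typed property so the browser can interpolate it */
@property --stop {
  syntax: "<color>";
  inherits: false;
  initial-value: crimson;
}

.banner {
  background: linear-gradient(45deg, var(--stop), navy);
  transition: --stop 1s ease; /* the color transitions instead of snapping */
}

.banner:hover {
  --stop: gold;
}
```

Without the @property registration, the same hover rule would make the gradient jump abruptly from one color to the other.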

Here’s an idea I have that uses the same basic idea as the ripple, only it chains multiple custom properties together that are formatted as colors, lengths, and angle degrees for a more complex animation where text slides up the container as the text changes colors.

See the Pen Text animation with @property [forked] by Preethi Sam.

Let’s use this demo as an exercise to learn more about defining custom properties with the @property at-rule, combining what we just saw in the ripple with the concept of interpolating gradient values.

The HTML
<div class="scrolling-text">
  <div class="text-container">
    <div class="text">
      <ruby>壹<rt>one</rt></ruby>
      <ruby>蜀<rt>two</rt></ruby>
      <ruby>兩<rt>three</rt></ruby>
    </div>
  </div>
</div>

The HTML contains Chinese characters we’re going to animate. These Chinese characters are marked up with <ruby> tags so that their English translations can be supplied in <rt> tags. The idea is that .scrolling-text is the component’s parent container, and inside it is a child element holding the text characters, which allows the characters to slide in and out of view.

Vertical Sliding

In CSS, let’s make the characters slide vertically on hover. What we’re making is a container with a fixed height we can use to clip the characters out of view when they overflow the available space.

.scrolling-text {
  height: 1lh;
  overflow: hidden;
  width: min-content;
}
.text-container:has(:hover, :focus) .text {
  transform: translateY(-2lh);
}
.text {
  transition: transform 2.4s ease-in-out;
}

See the Pen Vertical text transition [forked] by Preethi Sam.

Setting the .scrolling-text container’s width to min-content gives the characters a tight fit, stacking them vertically in a single column. The container’s height is set to 1lh. And since we’ve set overflow: hidden on the container, only one character is shown in the container at any given point in time.

Tip: You can also use the HTML <pre> element or either the white-space or text-wrap properties to control how text wraps.

On hover, the text moves -2lh, or double the height of a single text character, in the opposite (upward) direction. So, basically, we’re sliding things up by two characters in order to animate from the first character to the third character when the container holding the text is in a hovered state.

Applying Gradients To Text

Here’s a fun bit of styling:

.text {
  background: repeating-linear-gradient(
    180deg, 
    rgb(224, 236, 236), 
    rgb(224, 236, 236) 5px, 
    rgb(92, 198, 162) 5px, 
    rgb(92, 198, 162) 6px);
  background-clip: text;
  color: transparent; /* to show the background underneath */
  background-size: 20% 20%;
}

How often do you find yourself using repeating gradients in your work? The fun part, though, is what comes after it. See, we’re setting a transparent color on the text and that allows the repeating-linear-gradient() to show through it. But since text is a box like everything else in CSS, we clip the background at the text itself to make it look like the text is cut out of the gradient.

See the Pen A gradient text (Note: View in Safari or Chrome) [forked] by Preethi Sam.

Pretty neat, right? Now, it looks like our text characters have a striped pattern painted on them.

Animating The Gradient

This is where we take the same animated gradient concept covered in other tutorials and work it into what we’re doing here. For that, we’ll first register some of the repeating-linear-gradient() values as custom properties. But unlike the other implementations, ours is a bit more complex because we will animate several values rather than, say, updating the hue.

Instead, we’re animating two colors, a length, and an angle.

@property --c1 {
  syntax: "<color>";
  inherits: false;
  initial-value: rgb(224, 236, 236);
}
@property --c2 {
  syntax: "<color>";
  inherits: false;
  initial-value: rgb(92, 198, 162);
}
@property --l {
  syntax: "<length> | <percentage>";
  inherits: false;
  initial-value: 5px;
}
@property --angle {
  syntax: "<angle>";
  inherits: false;
  initial-value: 180deg;
}

.text {
  background: repeating-linear-gradient(
    var(--angle), 
    var(--c1), 
    var(--c1) 5px, 
    var(--c2) var(--l), 
    var(--c2) 6px);
}

We want to update the values of our registered custom properties when the container that holds the text is hovered or in focus. All that takes is re-declaring the properties with the updated values.

.text-container:has(:hover, :focus) .text {
  --c1: pink;
  --c2: transparent;  
  --l: 100%;
  --angle: 90deg;

  background-size: 50% 100%;
  transform:  translateY(-2lh);
}

To be super clear about what’s happening, these are the custom properties and values that update on hover:

  • --c1: Starts with a color value of rgb(224, 236, 236) and updates to pink.
  • --c2: Starts with a color value of rgb(92, 198, 162) and updates to transparent.
  • --l: Starts with length value 5px and updates to 100%.
  • --angle: Starts with an angle value of 180deg and updates to 90deg.

So, the two colors used in the gradient transition into other colors while the overall size of the gradient increases and rotates. It’s as though we’re choreographing a short dance routine for the gradient.

Refining The Transition

All the while, the .text element containing the characters slides up to reveal one character at a time. The only thing is that we have to tell CSS what will transition on hover, which we do directly on the .text element:

.text {
  transition: --l, --angle, --c1, --c2, background-size, transform 2.4s ease-in-out;
  transition-duration: 2s; 
}

Yes, I could just as easily have used the all keyword to select all of the transitioning properties. But I prefer taking the extra step of declaring each one individually. It’s a little habit to keep the browser from having to watch for too many things, which could slow things down even a smidge.

Final Demo

Here’s the final outcome once again:

See the Pen Text animation with @property [forked] by Preethi Sam.

I hope this little exercise not only demonstrates the sorts of fancy things we can make with CSS custom properties but also helps clarify the differences between custom properties and standard variables. Standard variables are excellent placeholders for more maintainable code (and a few fancy tricks of their own) but when you find yourself needing to update one value in a property that supports multiple values — such as colors in a gradient — the @property at-rule is where it’s at because it lets us define variables with a custom specification that sets the variable’s syntax, initial value, and inheritance behavior.

When we get to amend values individually and independently with a promise of animation, it both helps streamline the code and opens up new possibilities for designing elaborate animations with relatively nimble code.

That’s why @property is a useful CSS standard to keep in mind and keep ready to use when you are thinking about animations that involve isolated value changes.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Preethi Sam)
<![CDATA[The Modern Guide For Making CSS Shapes]]> https://smashingmagazine.com/2024/05/modern-guide-making-css-shapes/ https://smashingmagazine.com/2024/05/modern-guide-making-css-shapes/ Fri, 10 May 2024 13:00:00 GMT You have for sure googled “how to create [shape_name] with CSS” at least once in your front-end career if it’s not something you already have bookmarked. And the number of articles and demos you will find out there is endless.

Good, right? Copy that code and drop it into the ol’ stylesheet. Ship it!

The problem is that you don’t understand how the copied code works. Sure, it got the job done, but many of the most widely used CSS shape snippets are often dated and rely on things like magic numbers to get the shapes just right. So, the next time you go into the code needing to make a change to it, it either makes little sense or is inflexible to the point that you need an entirely new solution.

So, here it is, your one-stop modern guide for how to create shapes in CSS! We are going to explore the most common CSS shapes while highlighting different CSS tricks and techniques that you can easily re-purpose for any kind of shape. The goal is not to learn how to create specific shapes but rather to understand the modern tricks that allow you to create any kind of shape you want.

Table of Contents

You can jump directly to the topic you’re interested in to find relevant shapes or browse the complete list. Enjoy!

Why Not SVG?

I get asked this question often, and my answer is always the same: Use SVG if you can! I have nothing against SVG. It’s just another approach for creating shapes using another syntax with another set of considerations. If SVG was my expertise, then I would be writing about that instead!

CSS is my field of expertise, so that’s the approach we’re covering for drawing shapes with code. Choosing CSS or SVG is typically a matter of choice. There may very well be a good reason why SVG is a better fit for your specific needs.

Many times, CSS will be your best bet for decorative things or when you’re working with a specific element in the markup that contains real content to be styled. Ultimately, though, you will need to consider what your project’s requirements are and decide whether a CSS shape is really what you are looking for.

Your First Resource

Before we start digging into code, please spend a few minutes over at my CSS Shape website. You will find many examples of CSS-only shapes. This is an ever-growing collection that I regularly maintain with new shapes and techniques. Bookmark it and use it as a reference as we make our way through this guide.

Is it fairly easy to modify and tweak the CSS for those shapes?

Yes! The CSS for each and every shape is optimized to be as flexible and efficient as possible. The CSS typically targets a single HTML element to prevent you from having to touch too much markup besides dropping the element on the page. Additionally, I make liberal use of CSS variables that allow you to modify things easily for your needs.

Most of you don't have time to grasp all the techniques and tricks to create different shapes, so an online resource with ready-to-use snippets of code can be a lifesaver!

Clipping Shapes In CSS

The CSS clip-path property — and its polygon() function — is what we commonly reach for when creating CSS Shapes. Through the creation of common CSS shapes, we will learn a few tricks that can help you create other shapes easily.

Hexagons

Let’s start with one of the easiest shapes: the hexagon. We first define the shape’s dimensions, then provide the coordinates for its six points, and we are done.

.hexagon {
  width: 200px;
  aspect-ratio: 0.866; 
  clip-path: polygon(
    0% 25%,
    0% 75%,
    50% 100%, 
    100% 75%, 
    100% 25%, 
    50% 0%);
}

We’re basically drawing the shape of a diamond where two of the points are set way outside the bounds of the hexagon we’re trying to make. This is perhaps the very first lesson for drawing CSS shapes: Allow yourself to think outside the box — or at least the shape’s boundaries.

Look how much simpler the code already looks:

.hexagon {
  width: 200px;
  aspect-ratio: cos(30deg); 
  clip-path: polygon(
    -50% 50%,
    50% 100%,
    150% 50%,
    50% 0
  );
}

Did you notice that I updated the aspect-ratio property in there? I’m using a trigonometric function, cos(), to replace the magic number 0.866. The exact value of the ratio is equal to cos(30deg) (or sin(60deg)). Besides, cos(30deg) is a lot easier to remember than 0.866.
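For the curious, that value falls straight out of the hexagon’s geometry: a regular hexagon with points at the top and bottom is √3/2 times as wide as it is tall.

```latex
\frac{\text{width}}{\text{height}} = \cos(30^\circ) = \sin(60^\circ) = \frac{\sqrt{3}}{2} \approx 0.866
```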

Here’s something fun we can do: swap the X and Y coordinate values. In other words, let’s change the polygon() coordinates from this pattern:

clip-path: polygon(X1 Y1, X2 Y2, ..., Xn Yn)

…to this, where the Y values come before the X values:

clip-path: polygon(Y1 X1, Y2 X2, ..., Yn Xn)

What we get is a new variation of the hexagon:
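Applied to our hexagon, the swap looks like this (a sketch; the class name and the flipped aspect-ratio are my own additions):

```css
/* Same four values as before, with every X and Y pair swapped */
.hexagon-rotated {
  width: 200px;
  aspect-ratio: 1 / cos(30deg); /* the ratio flips along with the axes */
  clip-path: polygon(
    50% -50%,
    100% 50%,
    50% 150%,
    0 50%
  );
}
```

The two out-of-bounds points now sit above and below the element instead of to its left and right, so the hexagon’s flat edges end up on top and bottom.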

I know that visualizing the shape with outside points can be somewhat difficult because we’re practically turning the concept of clipping on its head. But with some practice, you get used to this mental model and develop muscle memory for it.

Notice that the CSS is remarkably similar to what we used to create a hexagon:

.octagon {
  width: 200px;  
  aspect-ratio: 1;  
  --o: calc(50% * tan(-22.5deg));
  clip-path: polygon(
    var(--o) 50%,
    50% var(--o),
    calc(100% - var(--o)) 50%,
    50% calc(100% - var(--o))
  );
}

Except for the small trigonometric formula, the structure of the code is identical to the last hexagon shape — set the shape’s dimensions, then clip the points. And notice how I saved the math calculation as a CSS variable to avoid repeating that code.

If math isn’t really your thing — and that’s totally fine! — remember that the formulas are simply one part of the puzzle. There’s no need to go back to your high school geometry textbooks. You can always find the formulas you need for specific shapes in my online collection. Again, that collection is your first resource for creating CSS shapes!

And, of course, we can apply this shape to an <img> element as easily as we can a <div>:

It may sound impossible to make a star out of only five points, but it’s perfectly possible, and the trick is how the points inside polygon() are ordered. If we were to draw a star with pencil on paper in a single continuous line, we would use the following order:

It’s the same way we used to draw stars as kids — and it fits perfectly in CSS with polygon()! This is another hidden trick about clip-path with polygon(), and it leads to another key lesson for drawing CSS shapes: the lines we establish can intersect. Again, we’re sort of turning a concept on its head, even if it’s a pattern we all grew up making by hand.

Here’s how those five points translate to CSS:

.star {
  width: 200px;
  aspect-ratio: 1;
  clip-path: polygon(
    50% 0,                                                       /* (1) */
    calc(50% * (1 + sin(.4turn))) calc(50% * (1 - cos(.4turn))), /* (2) */
    calc(50% * (1 - sin(.2turn))) calc(50% * (1 - cos(.2turn))), /* (3) */
    calc(50% * (1 + sin(.2turn))) calc(50% * (1 - cos(.2turn))), /* (4) */
    calc(50% * (1 - sin(.4turn))) calc(50% * (1 - cos(.4turn)))  /* (5) */
  );
}

The funny thing is that starbursts are basically the exact same thing as polygons, just with half the points that we can move inward.

Figure 6.

I often advise people to use my online generators for shapes like these because the clip-path coordinates can get tricky to write and calculate by hand.

That said, I really believe it’s still a very good idea to understand how the coordinates are calculated and how they affect the overall shape. I have an entire article on the topic for you to learn the nuances of calculating coordinates.

Parallelograms & Trapezoids

Another common shape we often build is a rectangle where one or two sides are slanted. They have a lot of names depending on the final result (e.g., parallelogram, trapezoid, skewed rectangle, and so on), but all of them are built using the same CSS technique.

First, we start by creating a basic rectangle by linking the four corner points together:

clip-path: polygon(0 0, 100% 0, 100% 100%, 0 100%)

This code produces no visible change because our element is already a rectangle. Also, note that 0 and 100% are the only values we’re using.

Next, offset some values to get the shape you want. Let’s say our offset needs to be equal to 10px. If the value is 0, we update it with 10px, and if it’s 100% we update it with calc(100% - 10px). As simple as that!
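For example, nudging the two top corners inward by the same amount produces a trapezoid (a sketch; the variable name is my own):

```css
.trapezoid {
  width: 200px;
  height: 100px;
  --o: 10px; /* the offset */
  clip-path: polygon(
    var(--o) 0,              /* was 0 0    → nudged right */
    calc(100% - var(--o)) 0, /* was 100% 0 → nudged left  */
    100% 100%,
    0 100%
  );
}
```

Offsetting the top-left and bottom-right corners instead would give you a parallelogram from the exact same starting rectangle.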

But which value do I need to update and when?

Try and see! Open your browser’s developer tools and update the values in real time to see how the shape changes, and you will understand which points you need to update. I would be lying if I told you that I write all the shapes from memory without making any mistakes. In most cases, I start with the basic rectangle, and I add or update points until I get the shape I want. Try this as a small homework exercise and create the shapes in Figure 11 by yourself. You can still find all the correct code in my online collection for reference.

If you want more CSS tricks around the clip-path property, check my article “CSS Tricks To Master The clip-path Property” which is a good follow-up to this section.

Masking Shapes In CSS

We just worked with a number of shapes that required us to figure out a number of points and clip-path by plotting their coordinates in a polygon(). In this section, we will cover circular and curvy shapes while introducing the other property you will use the most when creating CSS shapes: the mask property.

Like the previous section, we will create some shapes while highlighting the main tricks you need to know. Don’t forget that the goal is not to learn how to create specific shapes but to learn the tricks that allow you to create any kind of shape.

Circles & Holes

When talking about the mask property, gradients are certain to come up. We can, for example, “cut” (but really “mask”) a circular hole out of an element with a radial-gradient:

mask: radial-gradient(50px, #0000 98%, #000);

Why aren’t we using a simple background instead? The mask property allows us more flexibility, like using any color we want and applying the effect on a variety of other elements, such as <img>. If the color and flexible utility aren’t a big deal, then you can certainly reach for the background property instead of cutting a hole.

Here’s the mask working on both a <div> and <img>:
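In other words, a rule like this one (the selectors are my own) works the same on both elements:

```css
/* Punch a transparent 50px-radius hole through the center */
div.hole,
img.hole {
  mask: radial-gradient(50px, #0000 98%, #000);
}
```

The 98% stop (rather than 100%) slightly softens the edge of the hole to avoid a jagged circle.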

Once again, it’s all about CSS masks and gradients. In the following articles, I provide you with examples and recipes for many different possibilities:

Be sure to make it to the end of the second article to see how this technique can be used as decorative background patterns.

This time, we are going to introduce another technique: “composition”. It’s an operation we perform between two gradient layers. We either use mask-composite to define it, or we declare the values on the mask property.

The figure below illustrates the gradient configuration and the composition between each layer.

We start with a radial-gradient to create a full circle shape. Then we use a conic-gradient to create the shape below it. Between the two gradients, we perform an “intersect” composition to get the unclosed circle. Then we tack on two more radial gradients to the mask to get those nice rounded endpoints on the unclosed circle. This time we consider the default composition, “add”.

Gradients aren’t anything new, as we use them a lot with the background property, but “composition” is the new concept I want you to keep in mind. It’s a very handy one that unlocks a lot of possibilities.

Ready for the CSS?

.arc {
  --b: 40px;   /* border thickness */
  --a: 240deg; /* progression */
  --_g: /var(--b) var(--b) radial-gradient(50% 50%, #000 98%, #0000) no-repeat;
  mask:
    top var(--_g),
    calc(50% + 50% * sin(var(--a))) calc(50% - 50% * cos(var(--a))) var(--_g),
    conic-gradient(#000 var(--a), #0000 0) intersect,
    radial-gradient(50% 50%, #0000 calc(100% - var(--b)), #000 0 98%, #0000);
}

We could get clever and use a pseudo-element for the shape that’s positioned behind the set of panels, but that introduces more complexity and fixed values than we ought to have. Instead, we can continue using CSS masks to get the perfect shape with a minimal amount of reusable code.

It’s not really the rounded top edges that are difficult to pull off, but the bottom portion that curves inwards instead of rounding in like the top. And even then, we already know the secret sauce: using CSS masks by combining gradients that reveal just the parts we want.

We start by adding a border around the element — excluding the bottom edge — and applying a border-radius on the top-left and top-right corners.

.tab {
  --r: 40px; /* radius size */

  border: var(--r) solid #0000; /* transparent black */
  border-bottom: 0;
  border-radius: calc(2 * var(--r)) calc(2 * var(--r)) 0 0;
}

Next, we add the first mask layer. We only want to show the padding area (i.e., the red area highlighted in Figure 10).

mask: linear-gradient(#000 0 0) padding-box;

Let’s add two more gradients, both radial, to show those bottom curves.

mask: 
  radial-gradient(100% 100% at 0 0, #0000 98%, #000) 0 100% / var(--r) var(--r), 
  radial-gradient(100% 100% at 100% 0, #0000 98%, #000) 100% 100% / var(--r) var(--r), 
  linear-gradient(#000 0 0) padding-box;

Here is how the full code comes together:

.tab {
  --r: 40px; /* control the radius */

  border: var(--r) solid #0000;
  border-bottom: 0;
  border-radius: calc(2 * var(--r)) calc(2 * var(--r)) 0 0;
  mask:
    radial-gradient(100% 100% at 0 0, #0000 98%, #000) 0 100% / var(--r) var(--r),
    radial-gradient(100% 100% at 100% 0, #0000 98%, #000) 100% 100% / var(--r) var(--r),
    linear-gradient(#000 0 0) padding-box;
  mask-repeat: no-repeat;
  background: linear-gradient(60deg, #BD5532, #601848) border-box;
}

As usual, all it takes is one variable to control the shape. Let’s zero-in on the border-radius declaration for a moment:

border-radius: calc(2 * var(--r)) calc(2 * var(--r)) 0 0;

Notice that the shape’s rounded top edges are equal to two times the radius (--r) value. If you’re wondering why we need a calculation here at all, it’s because we have a transparent border hanging out there, and we need to double the radius to account for it. The radius of the blue areas highlighted in Figure 13 is equal to 2 * R while the red area highlighted in the same figure is equal to 2 * R - R, or simply R.

We can actually optimize the code so that we only need two gradients — one linear and one radial — instead of three. I’ll drop that into the following demo for you to pick apart. Can you figure out how we were able to eliminate one of the gradients?

I’ll throw in two additional variations for you to investigate:

These aren’t tabs at all but tooltips! We can absolutely use the exact same masking technique we used to create the tabs for these shapes. Notice how the curves that go inward are consistent in each shape, no matter if they are positioned on the left, right, or both.

You can always find the code over at my online collection if you want to reference it.

More CSS Shapes

At this point, we’ve seen the main tricks to create CSS shapes. You will rely on mask and gradients if you have curves and rounded parts or clip-path when there are no curves. It sounds simple but there’s still more to learn, so I am going to provide a few more common shapes for you to explore.

Instead of going into a detailed explanation of the shapes in this section, I’m going to give you the recipes for how to make them and all of the ingredients you need to make it happen. In fact, I have written other articles that are directly related to everything we are about to cover and will link them up so that you have guides you can reference in your work.

Triangles

A triangle is likely the first shape that you will ever need. They’re used in lots of places, from play buttons for videos, to decorative icons in links, to active state indicators, to open/close toggles in accordions, to… the list goes on.

Creating a triangle shape is as simple as using a 3-point polygon in addition to defining the size:

.triangle {
  width: 200px;
  aspect-ratio: 1;
  clip-path: polygon(50% 0, 100% 100%, 0 100%);
}
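Swapping the axes of those points changes the triangle’s direction. For instance, a right-pointing play-button shape:

```css
.triangle-right {
  width: 200px;
  aspect-ratio: 1;
  /* tip on the right edge, flat side on the left */
  clip-path: polygon(0 0, 100% 50%, 0 100%);
}
```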

But we can get even further by adding more points to have border-only variations:

We can cut all the corners or just specific ones. We can make circular cuts or sharp ones. We can even create an outline of the overall shape. Take a look at my online generator to play with the code, and check out my full article on the topic where I detail all the different cases.

Cut-Out Shapes

In addition to cutting corners, we can also cut a shape out of a rectangle. These are also called inverted shapes.

The technique is all about setting the CSS clip-path property with the shape’s coordinates in the polygon() function. So, technically, this is something you already know, thanks to the examples we’ve looked at throughout this guide.
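As a quick sketch (the coordinates here are illustrative, not taken from a specific demo), the points can trace the full rectangle and then dip inward to carve a triangular notch out of the bottom edge:

```css
.notched {
  width: 200px;
  aspect-ratio: 1;
  /* trace the rectangle, then dip up and back down along the bottom edge */
  clip-path: polygon(0 0, 100% 0, 100% 100%, 60% 100%, 50% 80%, 40% 100%, 0 100%);
}
```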

I hope you see the pattern now: sometimes, we’re clipping an element or masking portions of it. The fact that we can sort of “carve” into things this way using polygon() coordinates and gradients opens up so many possibilities that would have required clever workarounds and super-specific code in years past.

See my article “How to Create a Section Divider Using CSS” on the freeCodeCamp blog for a deep dive into the concepts, which we’ve also covered here quite extensively already in earlier sections.

Floral Shapes

We’ve created circles. We’ve made wave shapes. Let’s combine those two ideas together to create floral shapes.
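Here is a rough sketch of the idea: four circular radial-gradients, one per quadrant, masked together into a simple clover. (The floral shapes in my demos place the petals with trigonometry rather than this naive quadrant split.)

```css
.clover {
  width: 200px;
  aspect-ratio: 1;
  /* one soft-edged circle, sized to fill a quadrant of the element */
  --g: radial-gradient(50% 50%, #000 98%, #0000);
  mask: var(--g) 0 0, var(--g) 100% 0, var(--g) 0 100%, var(--g) 100% 100%;
  mask-size: 50% 50%;
  mask-repeat: no-repeat;
  background: linear-gradient(60deg, #BD5532, #601848);
}
```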

These shapes are pretty cool on their own. But like a few of the other shapes we’ve covered, this one works extremely well with images. If you need something fancier than the typical box, then masking the edges can come off like a custom-framed photo.

Here is a demo where I am using such shapes to create a fancy hover effect:

See the Pen Fancy Pop Out hover effect! by Temani Afif.

There’s a lot of math involved with this, specifically trigonometric functions. I have a two-part series that gets into the weeds if you’re interested in that side of things:

As always, remember that my online collection is your Number One resource for all things related to CSS shapes. The math has already been worked out for your convenience, but you also have the references you need to understand how it works under the hood.

Conclusion

I hope you see CSS Shapes differently now as a result of reading this comprehensive guide. We covered a few shapes, but really, it’s hundreds upon hundreds of shapes because you see how flexible they are to configure into a slew of variations.

At the end of the day, all of the shapes use some combination of different CSS concepts such as clipping, masking, composition, gradients, CSS variables, and so on. Not to mention a few hidden tricks like the one related to the polygon() function:

  • It accepts points outside the [0% 100%] range.
  • Switching axes is a solid approach for creating shape variations.
  • The lines we establish can intersect.

It’s not that many things, right? We looked at each of these in great detail and then whipped through the shapes to demonstrate how the concepts come together. It’s not so much about memorizing snippets as it is about thoroughly understanding how CSS works and leveraging its features to produce any number of things, like shapes.

Don’t forget to bookmark my CSS Shape website and use it as a reference as well as a quick stop to get a specific shape you need for a project. I avoid re-inventing the wheel in my work, and the online collection is your wheel for snagging shapes made with pure CSS.

Please also use it as inspiration for your own shape-shifting experiments. And post a comment if you think of a shape that would be a nice addition to the collection.

]]>
hello@smashingmagazine.com (Temani Afif)
<![CDATA[The Forensics Of React Server Components (RSCs)]]> https://smashingmagazine.com/2024/05/forensics-react-server-components/ https://smashingmagazine.com/2024/05/forensics-react-server-components/ Thu, 09 May 2024 13:00:00 GMT This article is sponsored by Sentry.io

In this article, we’re going to look deeply at React Server Components (RSCs). They are the latest innovation in React’s ecosystem, leveraging both server-side and client-side rendering as well as streaming HTML to deliver content as fast as possible.

We will get really nerdy to get a full understanding of how RSCs fit into the React picture, the level of control they offer over the rendering lifecycle of components, and what page loads look like with RSCs in place.

But before we dive into all of that, I think it’s worth looking back at how React has rendered websites up until this point to set the context for why we need RSCs in the first place.

The Early Days: React Client-Side Rendering

The first React apps were rendered on the client side, i.e., in the browser. As developers, we wrote apps with JavaScript classes as components and packaged everything up using bundlers, like Webpack, in a nicely compiled and tree-shaken heap of code ready to ship in a production environment.

The HTML that returned from the server contained a few things, including:

  • An HTML document with metadata in the <head> and a blank <div> in the <body> used as a hook to inject the app into the DOM;
  • JavaScript resources containing React’s core code and the actual code for the web app, which would generate the user interface and populate the app inside of the empty <div>.

A web app under this process is only fully interactive once JavaScript has fully completed its operations. You can probably already see the tension here that comes with an improved developer experience (DX) that negatively impacts the user experience (UX).

The truth is that there were (and are) pros and cons to CSR in React. Looking at the positives, web applications delivered smooth, quick transitions that reduced the overall time it took to load a page, thanks to reactive components that update with user interactions without triggering page refreshes. CSR lightens the server load and allows us to serve assets from speedy content delivery networks (CDNs) capable of delivering content to users from a server location geographically closer to the user for even more optimized page loads.

There are also not-so-great consequences that come with CSR, most notably perhaps that components could fetch data independently, leading to waterfall network requests that dramatically slow things down. This may sound like a minor nuisance on the UX side of things, but the damage can actually be quite large on a human level. Eric Bailey’s “Modern Health, frameworks, performance, and harm” should be a cautionary tale for all CSR work.

Other negative CSR consequences are not quite as severe but still lead to damage. For example, it used to be that an HTML document containing nothing but metadata and an empty <div> was illegible to search engine crawlers that never get the fully-rendered experience. While that’s solved today, the SEO hit at the time was an anchor on company sites that rely on search engine traffic to generate revenue.

The Shift: Server-Side Rendering (SSR)

Something needed to change. CSR presented developers with a powerful new approach for constructing speedy, interactive interfaces, but users everywhere were inundated with blank screens and loading indicators to get there. The solution was to move the rendering experience from the client to the server. I know it sounds funny that we needed to improve something by going back to the way it was before.

So, yes, React gained server-side rendering (SSR) capabilities. At one point, SSR was such a topic in the React community that it had a moment in the spotlight. The move to SSR brought significant changes to app development, specifically in how it influenced React behavior and how content could be delivered by way of servers instead of browsers.

Addressing CSR Limitations

Instead of sending a blank HTML document with SSR, we rendered the initial HTML on the server and sent it to the browser. The browser was able to immediately start displaying the content without needing to show a loading indicator. This significantly improves the First Contentful Paint (FCP) performance metric in Web Vitals.

Server-side rendering also fixed the SEO issues that came with CSR. Since the crawlers received the content of our websites directly, they were then able to index it right away. The data fetching that happens initially also takes place on the server, which is a plus because it’s closer to the data source and can eliminate fetch waterfalls if done properly.

Hydration

SSR has its own complexities. For React to make the static HTML received from the server interactive, it needs to hydrate it. Hydration is the process that happens when React reconstructs its Virtual Document Object Model (DOM) on the client side based on what was in the DOM of the initial HTML.

Note: React maintains its own Virtual DOM because it’s faster to figure out updates on it instead of the actual DOM. It synchronizes the actual DOM with the Virtual DOM when it needs to update the UI but performs the diffing algorithm on the Virtual DOM.
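To illustrate why that’s faster, here is a toy diffing function over plain objects. It is nothing like React’s actual reconciler (the node shape `{ type, text, children }` is made up for this sketch); it just shows the general idea that comparing two lightweight trees yields a small list of patches, and only those patches would need to touch the slow, real DOM.

```javascript
// Toy virtual-DOM diff: walk two plain-object trees and collect the minimal
// set of changes as patch records.
function diff(prev, next, path = "root", patches = []) {
  if (!prev) { patches.push({ path, op: "insert" }); return patches; }
  if (!next) { patches.push({ path, op: "remove" }); return patches; }
  if (prev.type !== next.type) { patches.push({ path, op: "replace" }); return patches; }
  if (prev.text !== next.text) patches.push({ path, op: "text" });
  const len = Math.max((prev.children || []).length, (next.children || []).length);
  for (let i = 0; i < len; i++) {
    diff((prev.children || [])[i], (next.children || [])[i], `${path}.${i}`, patches);
  }
  return patches;
}

// Only the changed text node produces a patch; everything else is left alone.
const before = { type: "ul", children: [{ type: "li", text: "Gloves" }, { type: "li", text: "Hat" }] };
const after  = { type: "ul", children: [{ type: "li", text: "Gloves" }, { type: "li", text: "Scarf" }] };
console.log(diff(before, after)); // [ { path: 'root.1', op: 'text' } ]
```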

We now have two flavors of React:

  1. A server-side flavor that knows how to render static HTML from our component tree,
  2. A client-side flavor that knows how to make the page interactive.

We’re still shipping React and code for the app to the browser because — in order to hydrate the initial HTML — React needs the same components on the client side that were used on the server. During hydration, React performs a process called reconciliation in which it compares the server-rendered DOM with the client-rendered DOM and tries to identify differences between the two. If there are differences between the two DOMs, React attempts to fix them by rehydrating the component tree and updating the component hierarchy to match the server-rendered structure. And if there are still inconsistencies that cannot be resolved, React will throw errors to indicate the problem. This problem is commonly known as a hydration error.

SSR Drawbacks

SSR is not a silver bullet solution that addresses CSR limitations. SSR comes with its own drawbacks. Since we moved the initial HTML rendering and data fetching to the server, those servers are now experiencing a much greater load than when we loaded everything on the client.

Remember when I mentioned that SSR generally improves the FCP performance metric? That may be true, but the Time to First Byte (TTFB) performance metric took a negative hit with SSR. The browser literally has to wait for the server to fetch the data it needs, generate the initial HTML, and send the first byte. And while TTFB is not a Core Web Vital metric in itself, it influences them: a poor TTFB leads to poor Core Web Vitals scores.

Another drawback of SSR is that the entire page is unresponsive until client-side React has finished hydrating it. Interactive elements cannot listen and “react” to user interactions before React hydrates them, i.e., React attaches the intended event listeners to them. The hydration process is typically fast, but the internet connection and hardware capabilities of the device in use can slow down rendering by a noticeable amount.

The Present: A Hybrid Approach

So far, we have covered two different flavors of React rendering: CSR and SSR. While each was an attempt to improve on the other, we now get the best of both worlds, so to speak, as SSR has branched into three additional React flavors that offer a hybrid approach in hopes of reducing the limitations that come with CSR and SSR.

We’ll look at the first two — static site generation and incremental static regeneration — before jumping into an entire discussion on React Server Components, the third flavor.

Static Site Generation (SSG)

Instead of regenerating the same HTML code on every request, we came up with SSG. This React flavor compiles and builds the entire app at build time, generating static (as in vanilla HTML and CSS) files that are, in turn, hosted on a speedy CDN.

As you might suspect, this hybrid approach to rendering is a nice fit for smaller projects where the content doesn’t change much, like a marketing site or a personal blog, as opposed to larger projects where content may change with user interactions, like an e-commerce site.

SSG reduces the burden on the server while improving performance metrics related to TTFB because the server no longer has to perform heavy, expensive tasks for re-rendering the page.

Incremental Static Regeneration (ISR)

One SSG drawback is having to rebuild all of the app’s code when a content change is needed. The content is set in stone — being static and all — and there’s no way to change just one part of it without rebuilding the whole thing.

The Next.js team created the second hybrid flavor of React that addresses the drawback of complete SSG rebuilds: incremental static regeneration (ISR). The name says a lot about the approach in that ISR only rebuilds what’s needed instead of the entire thing. We generate the “initial version” of the page statically during build time but are also able to rebuild any page containing stale data after a user lands on it (i.e., the server request triggers the data check).

From that point on, the server will serve new versions of that page statically in increments when needed. That makes ISR a hybrid approach that is neatly positioned between SSG and traditional SSR.

At the same time, ISR does not address the “stale content” symptom, where users may visit a page before it has finished being generated. Unlike SSG, ISR needs an actual server to regenerate individual pages in response to a user’s browser making a server request. That means we lose the valuable ability to deploy ISR-based apps on a CDN for optimized asset delivery.
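In Next.js’s App Router, opting a page into ISR is a matter of route segment configuration. A minimal sketch (the interval value is arbitrary):

```js
// app/products/page.js — route segment config in Next.js (App Router).
// The page is regenerated in the background at most once every 60 seconds;
// visitors in between get the cached static version.
export const revalidate = 60;
```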

The Future: React Server Components

Up until this point, we’ve juggled between CSR, SSR, SSG, and ISR approaches, where all make some sort of trade-off, negatively affecting performance, development complexity, and user experience. Newly introduced React Server Components (RSC) aim to address most of these drawbacks by allowing us — the developer — to choose the right rendering strategy for each individual React component.

RSCs can significantly reduce the amount of JavaScript shipped to the client since we can selectively decide which ones to serve statically on the server and which render on the client side. There’s a lot more control and flexibility for striking the right balance for your particular project.

Note: It’s important to keep in mind that as we adopt more advanced architectures, like RSCs, monitoring solutions become invaluable. Sentry offers robust performance monitoring and error-tracking capabilities that help you keep an eye on the real-world performance of your RSC-powered application. Sentry also helps you gain insights into how your releases are performing and how stable they are, which is yet another crucial feature to have while migrating your existing applications to RSCs. Implementing Sentry in an RSC-enabled framework like Next.js is as easy as running a single terminal command.

But what exactly is an RSC? Let’s pick one apart to see how it works under the hood.

The Anatomy of React Server Components

This new approach introduces two types of rendering components: Server Components and Client Components. The differences between these two are not how they function but where they execute and the environments they’re designed for. At the time of this writing, the only way to use RSCs is through React frameworks. And at the moment, there are only three frameworks that support them: Next.js, Gatsby, and RedwoodJS.

Server Components

Server Components are designed to be executed on the server, and their code is never shipped to the browser. The HTML output and any props they might be accepting are the only pieces that are served. This approach has multiple performance benefits and user experience enhancements:

  • Server Components allow for large dependencies to remain on the server side.
    Imagine using a large library for a component. If you’re executing the component on the client side, it means that you’re also shipping the full library to the browser. With Server Components, you’re only taking the static HTML output and avoiding having to ship any JavaScript to the browser. Server Components are truly static, and they remove the whole hydration step.
  • Server Components are located much closer to the data sources — e.g., databases or file systems — they need to generate code.
    They also leverage the server’s computational power to speed up compute-intensive rendering tasks and send only the generated results back to the client. They are also generated in a single pass, which avoids request waterfalls and HTTP round trips.
  • Server Components safely keep sensitive data and logic away from the browser.
    That’s thanks to the fact that personal tokens and API keys are executed on a secure server rather than the client.
  • The rendering results can be cached and reused between subsequent requests and even across different sessions.
    This significantly reduces rendering time, as well as the overall amount of data that is fetched for each request.

This architecture also makes use of HTML streaming, which means the server defers generating HTML for specific components and instead renders a fallback element in their place while it works on sending back the generated HTML. Streaming Server Components wrap components in <Suspense> tags that provide a fallback value. The implementing framework uses the fallback initially but streams the newly generated content when it’s ready. We’ll talk more about streaming, but let’s first look at Client Components and compare them to Server Components.

Client Components

Client Components are the components we already know and love. They’re executed on the client side. Because of this, Client Components are capable of handling user interactions and have access to the browser APIs like localStorage and geolocation.

The term “Client Component” doesn’t describe anything new; they merely are given the label to help distinguish the “old” CSR components from Server Components. Client Components are defined by a "use client" directive at the top of their files.

"use client"
export default function LikeButton() {
  const likePost = () => {
    // ...
  }
  return (
    <button onClick={likePost}>Like</button>
  )
}

In Next.js, all components are Server Components by default. That’s why we need to explicitly define our Client Components with "use client". There’s also a "use server" directive, but it’s used for Server Actions (which are RPC-like actions that are invoked from the client but executed on the server). You don’t use it to define your Server Components.

You might (rightfully) assume that Client Components are only rendered on the client, but Next.js renders Client Components on the server to generate the initial HTML. As a result, browsers can immediately start rendering them and then perform hydration later.

The Relationship Between Server Components and Client Components

Client Components can only explicitly import other Client Components. In other words, we’re unable to import a Server Component into a Client Component because of re-rendering issues. But we can have Server Components in a Client Component’s subtree — only passed through the children prop. Since Client Components live in the browser and they handle user interactions or define their own state, they get to re-render often. When a Client Component re-renders, so will its subtree. But if its subtree contains Server Components, how would they re-render? They don’t live on the client side. That’s why the React team put that limitation in place.

But hold on! We actually can import Server Components into Client Components. It’s just not a direct one-to-one relationship because the Server Component will be converted into a Client Component. If you’re using server APIs that you can’t use in the browser, you’ll get an error; if not — you’ll have a Server Component whose code gets “leaked” to the browser.

This is an incredibly important nuance to keep in mind as you work with RSCs.

The Rendering Lifecycle

Here’s the order of operations that Next.js takes to stream contents:

  1. The app router matches the page’s URL to a Server Component, builds the component tree, and instructs the server-side React to render that Server Component and all of its children components.
  2. During render, React generates an “RSC Payload”. The RSC Payload informs Next.js about the page and what to expect in return, as well as what to fall back to during a <Suspense>.
  3. If React encounters a suspended component, it pauses rendering that subtree and uses the suspended component’s fallback value.
  4. When React loops through the last static component, Next.js prepares the generated HTML and the RSC Payload before streaming it back to the client through one or multiple chunks.
  5. The client-side React then uses the instructions it has for the RSC Payload and client-side components to render the UI. It also hydrates each Client Component as they load.
  6. The server streams in the suspended Server Components as they become available as an RSC Payload. Children of Client Components are also hydrated at this time if the suspended component contains any.

We will look at the RSC rendering lifecycle from the browser’s perspective momentarily. For now, the following figure illustrates the outlined steps we covered.

We’ll see this operation flow from the browser’s perspective in just a bit.

RSC Payload

The RSC payload is a special data format that the server generates as it renders the component tree, and it includes the following:

  • The rendered HTML,
  • Placeholders where the Client Components should be rendered,
  • References to the Client Components’ JavaScript files,
  • Instructions on which JavaScript files it should invoke,
  • Any props passed from a Server Component to a Client Component.

There’s no reason to worry much about the RSC payload, but it’s worth understanding what exactly the RSC payload contains. Let’s examine an example (truncated for brevity) from a demo app I created:

1:HL["/_next/static/media/c9a5bc6a7c948fb0-s.p.woff2","font",{"crossOrigin":"","type":"font/woff2"}]
2:HL["/_next/static/css/app/layout.css?v=1711137019097","style"]
0:"$L3"
4:HL["/_next/static/css/app/page.css?v=1711137019097","style"]
5:I["(app-pages-browser)/./node_modules/next/dist/client/components/app-router.js",["app-pages-internals","static/chunks/app-pages-internals.js"],""]
8:"$Sreact.suspense"
a:I["(app-pages-browser)/./node_modules/next/dist/client/components/layout-router.js",["app-pages-internals","static/chunks/app-pages-internals.js"],""]
b:I["(app-pages-browser)/./node_modules/next/dist/client/components/render-from-template-context.js",["app-pages-internals","static/chunks/app-pages-internals.js"],""]
d:I["(app-pages-browser)/./src/app/global-error.jsx",["app/global-error","static/chunks/app/global-error.js"],""]
f:I["(app-pages-browser)/./src/components/clearCart.js",["app/page","static/chunks/app/page.js"],"ClearCart"]
7:["$","main",null,{"className":"page_main__GlU4n","children":[["$","$Lf",null,{}],["$","$8",null,{"fallback":["$","p",null,{"children":"🌀 loading products..."}],"children":"$L10"}]]}]
c:[["$","meta","0",{"name":"viewport","content":"width=device-width, initial-scale=1"}]...
9:["$","p",null,{"children":["🛍️ ",3]}]
11:I["(app-pages-browser)/./src/components/addToCart.js",["app/page","static/chunks/app/page.js"],"AddToCart"]
10:["$","ul",null,{"children":[["$","li","1",{"children":["Gloves"," - $",20,["$...

To find this code in the demo app, open your browser’s developer tools at the Elements tab and look at the <script> tags at the bottom of the page. They’ll contain lines like:

self.__next_f.push([1,"PAYLOAD_STRING_HERE"])

Every line from the snippet above is an individual RSC payload. You can see that each line starts with a number or a letter, followed by a colon, and then an array that’s sometimes prefixed with letters. We won’t go too deep into what each one means, but in general:

  • HL payloads are called “hints” and link to specific resources like CSS and fonts.
  • I payloads are called “modules,” and they invoke specific scripts. This is how Client Components are being loaded as well. If the Client Component is part of the main bundle, it’ll execute. If it’s not (meaning it’s lazy-loaded), a fetcher script is added to the main bundle that fetches the component’s CSS and JavaScript files when it needs to be rendered. There’s going to be an I payload sent from the server that invokes the fetcher script when needed.
  • "$" payloads are DOM definitions generated for a certain Server Component. They are usually accompanied by actual static HTML streamed from the server. That’s what happens when a suspended component becomes ready to be rendered: the server generates its static HTML and RSC Payload and then streams both to the browser.
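As a quick illustration of that framing, here is a hedged sketch of how one could split a payload line into its row ID and leading tag. (The format is an internal React implementation detail with no stability guarantees, so treat this as exploration code, not something to build on.)

```javascript
// Split an RSC payload line into its row ID, leading tag (e.g., "HL" or "I"),
// and body. Tag-less rows that start straight into JSON are labeled "json".
function parsePayloadLine(line) {
  const colon = line.indexOf(":");
  const id = line.slice(0, colon);
  const rest = line.slice(colon + 1);
  const match = rest.match(/^([A-Z]+)(?=\[)/); // tag letters directly before a "["
  return { id, tag: match ? match[1] : "json", body: rest };
}

const hint = parsePayloadLine('2:HL["/_next/static/css/app/layout.css","style"]');
console.log(hint.id, hint.tag); // 2 HL
console.log(parsePayloadLine('0:"$L3"').tag); // json
```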

Streaming

Streaming allows us to progressively render the UI from the server. With RSCs, each component is capable of fetching its own data. Some components are fully static and ready to be sent immediately to the client, while others require more work before loading. Based on this, Next.js splits that work into multiple chunks and streams them to the browser as they become ready. So, when a user visits a page, the server invokes all Server Components, generates the initial HTML for the page (i.e., the page shell), replaces the “suspended” components’ contents with their fallbacks, and streams all of that through one or multiple chunks back to the client.

The server returns a Transfer-Encoding: chunked header that lets the browser know to expect streaming HTML. This prepares the browser for receiving multiple chunks of the document, rendering them as it receives them. We can actually see the header when opening Developer Tools at the Network tab. Trigger a refresh and click on the document request.

We can also debug the way Next.js sends the chunks in a terminal with the curl command:

curl -D - --raw localhost:3000 > chunked-response.txt

You probably see the pattern. For each chunk, the server responds with the chunk’s size before sending the chunk’s contents. Looking at the output, we can see that the server streamed the entire page in 16 different chunks. At the end, the server sends back a zero-sized chunk, indicating the end of the stream.

The first chunk starts with the <!DOCTYPE html> declaration. The second-to-last chunk, meanwhile, contains the closing </body> and </html> tags. So, we can see that the server streams the entire document from top to bottom, then pauses to wait for the suspended components, and finally, at the end, closes the body and HTML before it stops streaming.
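The chunk framing itself is plain HTTP/1.1 chunked transfer encoding. As a self-contained sketch (unrelated to Next.js’s actual server code), here is a tiny encoder and decoder for that wire format:

```javascript
// HTTP/1.1 chunked transfer encoding: each chunk is its byte size in hex,
// CRLF, the payload, CRLF; a zero-sized chunk terminates the stream.
// (ASCII-only payloads assumed, so string length equals byte length.)
function encodeChunked(chunks) {
  return chunks.map((c) => `${c.length.toString(16)}\r\n${c}\r\n`).join("") + "0\r\n\r\n";
}

function decodeChunked(body) {
  const parts = [];
  let i = 0;
  for (;;) {
    const lineEnd = body.indexOf("\r\n", i);
    const size = parseInt(body.slice(i, lineEnd), 16);
    if (size === 0) break; // terminating zero-sized chunk
    parts.push(body.slice(lineEnd + 2, lineEnd + 2 + size));
    i = lineEnd + 2 + size + 2; // skip the payload plus its trailing CRLF
  }
  return parts.join("");
}

const stream = encodeChunked(["<!DOCTYPE html>", "<body>...</body>"]);
console.log(decodeChunked(stream)); // <!DOCTYPE html><body>...</body>
```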

Even though the server hasn’t completely finished streaming the document, the browser’s fault tolerance features allow it to draw and invoke whatever it has at the moment without waiting for the closing </body> and </html> tags.

Suspending Components

We learned from the render lifecycle that when a page is visited, Next.js matches the RSC component for that page and asks React to render its subtree in HTML. When React stumbles upon a suspended component (i.e., async function component), it grabs its fallback value from the <Suspense> component (or the loading.js file if it’s a Next.js route), renders that instead, then continues loading the other components. Meanwhile, the RSC invokes the async component in the background, which is streamed later as it finishes loading.
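In code, the setup looks something like this sketch (the Products component and its data call are hypothetical):

```jsx
import { Suspense } from "react";

// An async Server Component; rendering suspends until the data resolves.
async function Products() {
  const products = await fetchProducts(); // hypothetical data call
  return (
    <ul>
      {products.map((p) => (
        <li key={p.id}>{p.name} - ${p.price}</li>
      ))}
    </ul>
  );
}

export default function Page() {
  return (
    <main>
      <Suspense fallback={<p>🌀 loading products...</p>}>
        <Products />
      </Suspense>
    </main>
  );
}
```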

At this point, Next.js has returned a full page of static HTML that includes either the components themselves (rendered in static HTML) or their fallback values (if they’re suspended). It takes the static HTML and RSC payload and streams them back to the browser through one or multiple chunks.

As the suspended components finish loading, React generates HTML recursively while looking for other nested <Suspense> boundaries, generates their RSC payloads and then lets Next.js stream the HTML and RSC Payload back to the browser as new chunks. When the browser receives the new chunks, it has the HTML and RSC payload it needs and is ready to replace the fallback element from the DOM with the newly-streamed HTML. And so on.

In Figures 7 and 8, notice how the fallback elements have a unique ID in the form of B:0, B:1, and so on, while the actual components have a similar ID in a similar form: S:0 and S:1, and so on.

Along with the first chunk that contains a suspended component’s HTML, the server also ships an $RC function (i.e., completeBoundary from React’s source code) that knows how to find the B:0 fallback element in the DOM and replace it with the S:0 template it received from the server. That’s the “replacer” function that lets us see the component contents when they arrive in the browser.
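Conceptually, the swap boils down to something like the following string-based sketch. React’s real completeBoundary manipulates live DOM nodes and handles many edge cases this toy version ignores:

```javascript
// Toy model of the $RC swap: the "document" holds a visible fallback keyed
// "B:0" and later receives a hidden streamed template keyed "S:0"; the
// replacer moves the template's content into the fallback's slot.
function rc(doc, fallbackId, templateId) {
  doc.set(fallbackId, doc.get(templateId)); // swap fallback for real content
  doc.delete(templateId);                   // the hidden template is discarded
  return doc;
}

const doc = new Map([
  ["B:0", "<p>🌀 loading products...</p>"],  // fallback currently on screen
  ["S:0", "<ul><li>Gloves - $20</li></ul>"], // streamed-in content
]);
rc(doc, "B:0", "S:0");
console.log(doc.get("B:0")); // <ul><li>Gloves - $20</li></ul>
```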

The entire page eventually finishes loading, chunk by chunk.

Lazy-Loading Components

If a suspended Server Component contains a lazy-loaded Client Component, Next.js will also send an RSC payload chunk containing instructions on how to fetch and load the lazy-loaded component’s code. This represents a significant performance improvement because the page load isn’t dragged out by JavaScript, which might not even be loaded during that session.

At the time I’m writing this, the dynamic method to lazy-load a Client Component in a Server Component in Next.js does not work as you might expect. To effectively lazy-load a Client Component, put it in a “wrapper” Client Component that uses the dynamic method itself to lazy-load the actual Client Component. The wrapper will be turned into a script that fetches and loads the Client Component’s JavaScript and CSS files at the time they’re needed.

TL;DR

I know that’s a lot of plates spinning and pieces moving around at various times. What it boils down to, however, is that a page visit triggers Next.js to render as much HTML as it can, using the fallback values for any suspended components, and then sends that to the browser. Meanwhile, Next.js triggers the suspended async components and gets them formatted in HTML and contained in RSC Payloads that are streamed to the browser, one by one, along with an $RC script that knows how to swap things out.

The Page Load Timeline

By now, we should have a solid understanding of how RSCs work, how Next.js handles their rendering, and how all the pieces fit together. In this section, we’ll zoom in on what exactly happens when we visit an RSC page in the browser.

The Initial Load

As we mentioned in the TL;DR section above, when visiting a page, Next.js will render the initial HTML minus the suspended component and stream it to the browser as part of the first streaming chunks.

To see everything that happens during the page load, we’ll visit the “Performance” tab in Chrome DevTools and click on the “reload” button to reload the page and capture a profile. Here’s what that looks like:

When we zoom in at the very beginning, we can see the first “Parse HTML” span. That’s the server streaming the first chunks of the document to the browser. The browser has just received the initial HTML, which contains the page shell and a few links to resources like fonts, CSS files, and JavaScript. The browser starts to invoke the scripts.

After some time, we start to see the page’s first frames appear, along with the initial JavaScript scripts being loaded and hydration taking place. If you look at the frame closely, you’ll see that the whole page shell is rendered, and “loading” components are used in the place where there are suspended Server Components. You might notice that this takes place around 800ms, while the browser started to get the first HTML at 100ms. During those 700ms, the browser is continuously receiving chunks from the server.

Bear in mind that this is a Next.js demo app running locally in development mode, so it’s going to be slower than when it’s running in production mode.

The Suspended Component

Fast forward a few seconds, and we see another “Parse HTML” span in the page load timeline, but this one indicates that a suspended Server Component finished loading and is being streamed to the browser.

We can also see that a lazy-loaded Client Component is discovered at the same time, and it contains CSS and JavaScript files that need to be fetched. These files weren’t part of the initial bundle because the component isn’t needed until later on; the code is split into its own files.

This way of code-splitting certainly improves the performance of the initial page load. It also makes sure that the Client Component’s code is shipped only if it’s needed. If the Server Component (which acts as the Client Component’s parent component) throws an error, then the Client Component does not load. It doesn’t make sense to load all of its code before we know whether it will load or not.

Figure 12 shows the DOMContentLoaded event is reported at the end of the page load timeline. And, just before that, we can see that the localhost HTTP request comes to an end. That means the server has likely sent the last zero-sized chunk, indicating to the client that the data is fully transferred and that the streaming communication can be closed.

The End Result

The main localhost HTTP request took around five seconds, but thanks to streaming, we began seeing page contents load much earlier than that. If this was a traditional SSR setup, we would likely be staring at a blank screen for those five seconds before anything arrives. On the other hand, if this was a traditional CSR setup, we would likely have shipped a lot more JavaScript and put a heavy burden on both the browser and network.

This way, however, the app was fully interactive during those five seconds. We were able to navigate between pages and interact with Client Components that had loaded as part of the initial main bundle. This is a pure win from a user experience standpoint.

Conclusion

RSCs mark a significant evolution in the React ecosystem. They leverage the strengths of server-side and client-side rendering while embracing HTML streaming to speed up content delivery. This approach not only addresses the SEO and loading time issues we experience with CSR but also improves SSR by reducing server load, thus enhancing performance.

I’ve refactored the same RSC app I shared earlier so that it uses the Next.js Pages Router with SSR. The improvements with RSCs are significant:

Looking at these two reports I pulled from Sentry, we can see that streaming allows the page to start loading its resources before the actual request finishes. This significantly improves the Web Vitals metrics, which we see when comparing the two reports.

The conclusion: Users enjoy faster, more reactive interfaces with an architecture that relies on RSCs.

The RSC architecture introduces two new component types: Server Components and Client Components. This division helps React and the frameworks that rely on it — like Next.js — streamline content delivery while maintaining interactivity.

However, this setup also introduces new challenges in areas like state management, authentication, and component architecture. Exploring those challenges is a great topic for another blog post!

Despite these challenges, the benefits of RSCs present a compelling case for their adoption. We definitely will see guides published on how to address RSC’s challenges as they mature, but, in my opinion, they already look like the future of rendering practices in modern web development.

]]>
hello@smashingmagazine.com (Lazar Nikolov)
<![CDATA[How To Run UX Research Without Access To Users]]> https://smashingmagazine.com/2024/05/how-run-ux-research-without-access-users/ https://smashingmagazine.com/2024/05/how-run-ux-research-without-access-users/ Tue, 07 May 2024 12:00:00 GMT UX research without users isn’t research. We can shape design ideas with bias, assumptions, guesstimates, and even synthetic users, but it’s anything but UX research. Yet some of us might find ourselves in situations where we literally don’t have access to users — because of legal constraints, high costs, or perhaps users just don’t exist yet. What do we do then?

Luckily, there are some workarounds that help us better understand pain points and issues that users might have when using our products. This holds true even when stakeholders can’t give us time or resources to run actual research, or strict NDAs or privacy regulations prevent us from speaking to users.

Let’s explore how we can make UX research work when there is no or only limited access to users — and what we can do to make a strong case for UX research.

This article is part of our ongoing series on design patterns. It’s also an upcoming part of the 10h-video library on Smart Interface Design Patterns 🍣 and the upcoming live UX training. Use code BIRDIE to save 15% off.

Find Colleagues Who Are The Closest To Your Customers

When you don’t have access to users, try to establish a connection with colleagues who are the closest to your customers. Connect with people in the organization who speak with customers regularly, especially people in sales, customer success, support, and QA. Ultimately, you could convey your questions indirectly via them.

As Paul Adams noted, there has never been more overlap between designers and salespeople than today. Since many products are subscription-based, sales teams need to maintain relationships with customers over time. This requires a profound understanding of user needs — and meeting these needs well over time to keep retention and increase loyalty.

That’s where research comes in — and that’s exactly where the overlap between UX and sales comes in. In fact, it’s not surprising to find UX researchers sitting within marketing teams under the guise of Customer Success teams. So whenever you can, befriend colleagues from sales and Customer Success teams.

Gaining Insights Without Direct Access To Users

If you can’t get users to come to you, perhaps you could go where they are. You could ask to silently observe and shadow them at their workplace. You could listen in to customer calls and interview call center staff to uncover pain points that users have when interacting with your product. Analytics, CRM reports, and call center logs are also a great opportunity to gain valuable insights, and Google Trends can help you find product-related search queries.

To learn more about potential issues and user frustrations, also turn to search logs, Jira backlogs, and support tickets. Study reviews, discussions, and comments for your or your competitor’s product, and take a look at TrustPilot and app stores to map key themes and user sentiment. Or get active yourself and recruit users via tools like UserTesting, Maze, or UserInterviews.

These techniques won’t always work, but they can help you get off the ground. Beware of drawing big conclusions from very little research, though. You need multiple sources to reduce the impact of assumptions and biases — at a very minimum, you need five users to discover patterns.

Making A Strong Case For UX Research

Ironically, as H Locke noted, the stakeholders who can’t give you time or resources to talk to users often are the first to demand evidence to support your design work. Tap into that demand and explain what you need. Research doesn’t have to be time-consuming or expensive; ask for a small but steady commitment to gather evidence. Explain that you don’t need much to get started: 5 users × 30 minutes once a month might already be enough to make a positive change.

Typically, if you work in B2B or enterprise, you won’t have direct access to users. This might be due to strict NDAs or privacy regulations, or perhaps the user group is very difficult to recruit (e.g., lawyers or doctors).

Sometimes, though, the reason why companies are reluctant to grant access to users is simply the lack of trust. They don’t want to disturb relationships with big clients, which are carefully maintained by the customer success team. They might feel that research is merely a technical detail that clients shouldn’t be bothered with.

Show that you care about that relationship. Show the value that your work brings. Explain that design without research is merely guesswork and that designing without enough research is inherently flawed.

Once your impact becomes visible, it will be so much easier to gain access to users that seemed almost impossible initially.

Key Takeaways
  • Ask for reasons for no access to users: there might be none.
  • Find colleagues who are the closest to your customers.
  • Make friends with sales, customer success, support, QA.
  • Convey your questions indirectly via your colleagues.
  • If you can’t get users to come to you, go where they are.
  • Ask to observe or shadow customers at their workplace.
  • Listen in to customer calls and interview call center staff.
  • Gather insights from search logs, Jira backlog, and support tickets.
  • Map key themes and user sentiment on TrustPilot, AppStore, etc.
  • Recruit users via UserTesting, Maze, UserInterviews, etc.
  • Ask for small but steady commitments: 5 users × 30 mins, 1× month.
  • Avoid ad-hoc research: set up regular check-ins and timelines.
Useful Resources

Meet Smart Interface Design Patterns

If you are interested in similar insights around UX, take a look at Smart Interface Design Patterns, our 10h-video course with 100s of practical examples from real-life projects — with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview.

Meet Smart Interface Design Patterns, our video course on interface design & UX.

100 design patterns & real-life examples.
10h-video course + live UX training. Free preview.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[How To Harness Mouse Interaction Data For Practical Machine Learning Solutions]]> https://smashingmagazine.com/2024/05/harness-mouse-interaction-data-machine-learning/ https://smashingmagazine.com/2024/05/harness-mouse-interaction-data-machine-learning/ Mon, 06 May 2024 10:00:00 GMT Mouse data is a subcategory of interaction data, a broad family of data about users generated as the immediate result of human interaction with computers. Its siblings from the same data family include logs of key presses or page visits. Businesses commonly rely on interaction data, including the mouse, to gather insights about their target audience. Unlike data that you could obtain more explicitly, let’s say via a survey, the advantage of interaction data is that it describes the actual behavior of actual people.

Collecting interaction data is completely unobtrusive since it can be obtained even as users go about their daily lives as usual, meaning it is a quantitative data source that scales very well. Once you start collecting it continuously as part of regular operation, you do not even need to do anything, and you’ll still have fresh, up-to-date data about users at your fingertips — potentially from your entire user base, without them even needing to know about it. Having data on specific users means that you can cater to their needs more accurately.

Of course, mouse data has its limitations. It simply cannot be obtained from people using touchscreens or those who rely on assistive tech. But if anything, that should not discourage us from using mouse data. It just illustrates that we should look for alternative methods that cater to the different ways that people interact with software. Among these, the mouse just happens to be very common.

When using the mouse, the mouse pointer is the de facto conduit for the user’s intent in a visual user interface. The mouse pointer is basically an extension of your arm that lets you interact with things in a virtual space that you cannot directly touch. Because of this, mouse interactions tend to be data-intensive. Even the simple mouse action of moving the pointer to an area and clicking it can yield a significant amount of data.

Mouse data is granular, even when compared with other sources of interaction data, such as the history of visited pages. However, with machine learning, it is possible to investigate jumbles of complicated data and uncover a variety of complex behavioral patterns. It can reveal more about the user holding the mouse without needing to provide any more information explicitly than normal.

For starters, let us venture into what kind of information can be obtained by processing mouse interaction data.

What Are Mouse Dynamics?

Mouse dynamics refer to the features that can be extracted from raw mouse data to describe the user’s operation of a mouse. Mouse data by itself corresponds with the simple mechanics of mouse controls. It consists of mouse events: the X and Y coordinates of the cursor on the screen, mouse button presses, and scrolling, each dated with a timestamp. Despite the innate simplicity of the mouse events themselves, the mouse dynamics using them as building blocks can capture user’s behavior from a diverse and emergently complex variety of perspectives.

If you are concerned about user privacy, as well you should be, mouse dynamics are also your friend. For the calculation of mouse dynamics to work, raw mouse data does not need to inherently contain any details about the actual meaning of the interaction. Without the context of what the user saw as they moved their pointer around and clicked, the data is quite safe and harmless.

Some examples of mouse dynamics include measuring the velocity and the acceleration at which the mouse cursor is moving or describing how direct or jittery the mouse trajectories are. Another example is whether the user presses and lets go of the primary mouse button quickly or whether there is a longer pause before they release their press. Four categories of over twenty base measures can be identified: temporal, spatial, spatial-temporal, and performance. Features do not need to be just metrics either, with other approaches using a time series of mouse events.

Temporal mouse dynamics:

  • Movement duration: The time between two clicks;
  • Response time: The time it takes to click something in response to a stimulus (e.g., from the moment when a page is displayed);
  • Initiation time: The time it takes from an initial stimulus for the cursor to start moving;
  • Pause time: The time measuring the cursor’s period of idleness.

Spatial mouse dynamics:

  • Distance: Length of the path traversed on the screen;
  • Straightness: The ratio between the traversed path and the optimal direct path;
  • Path deviation: Perpendicular distance of the traversed path from the optimal path;
  • Path crossing: Counted instances of the traversed and optimal path intersecting;
  • Jitter: The ratio of the traversed path length to its smoothed version;
  • Angle: The direction of movement;
  • Flips: Counted instances of change in direction;
  • Curvature: Change in angle over distance;
  • Inflection points: Counted instances of change in curvature.

Spatial-temporal mouse dynamics:

  • Velocity: Change of distance over time;
  • Acceleration: Change of velocity over time;
  • Jerk: Change of acceleration over time;
  • Snap: Change in jerk over time;
  • Angular velocity: Change in angle over time.

Performance mouse dynamics:

  • Clicks: The number of mouse button events pressing down or up;
  • Hold time: Time between mouse down and up events;
  • Click error: Length of the distance between the clicked point and the correct user task solution;
  • Time to click: Time between the hover event on the clicked point and the click event;
  • Scroll: Distance scrolled on the screen.

Note: For detailed coverage of varied mouse dynamics and their extraction, see the paper “Is mouse dynamics information credible for user behavior research? An empirical investigation.”

The spatial angular measures cited above are a good example of how the calculation of specific mouse dynamics can work. The direction angle of the movements between points A and B is the angle between the vector AB and the horizontal X axis. Then, the curvature angle in a sequence of points ABC is the angle between vectors AB and BC. Curvature distance can be defined as the ratio of the distance between points A and C and the perpendicular distance between point B and line AC. (Definitions sourced from the paper “An efficient user verification system via mouse movements.”)
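
Under those definitions, the angular measures reduce to a few lines of vector math. The following is a minimal sketch in plain JavaScript (the point values are made up for illustration):

```javascript
// Sketch: the angular measures defined above, for a short point
// sequence A → B → C. Pure geometry, no framework assumed.
function directionAngle(a, b) {
  // Angle between vector AB and the horizontal X axis, in degrees.
  return (Math.atan2(b.y - a.y, b.x - a.x) * 180) / Math.PI;
}

function curvatureAngle(a, b, c) {
  // Angle between vectors AB and BC, in degrees.
  const ab = { x: b.x - a.x, y: b.y - a.y };
  const bc = { x: c.x - b.x, y: c.y - b.y };
  const dot = ab.x * bc.x + ab.y * bc.y;
  const mag = Math.hypot(ab.x, ab.y) * Math.hypot(bc.x, bc.y);
  return (Math.acos(dot / mag) * 180) / Math.PI;
}

function curvatureDistance(a, b, c) {
  // Ratio of |AC| to the perpendicular distance of B from line AC.
  const ac = Math.hypot(c.x - a.x, c.y - a.y);
  const perp =
    Math.abs((c.x - a.x) * (a.y - b.y) - (a.x - b.x) * (c.y - a.y)) / ac;
  return ac / perp;
}

const A = { x: 0, y: 0 };
const B = { x: 10, y: 5 };
const C = { x: 20, y: 0 };

console.log(directionAngle(A, B));    // ≈ 26.57° — AB points up and right
console.log(curvatureAngle(A, B, C)); // ≈ 53.13° — the path bends at B
```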

Even individual features (e.g., mouse velocity by itself) can be delved into deeper. For example, on pages with a lot of scrolling, horizontal mouse velocity along the X-axis may be more indicative of something capturing the user’s attention than velocity calculated from direct point-to-point (Euclidean) distance in the screen's 2D space. The maximum velocity may be a good indicator of anomalies, such as user frustration, while the mean or median may tell you more about the user as a person.
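
As a sketch of how such velocity features fall out of raw, timestamped events, consider the following (the sample trail is fabricated; its shape mirrors what a mousemove listener would record):

```javascript
// Deriving velocity-based features from raw, timestamped mouse events.
const events = [
  { x: 0,   y: 0, t: 0 },
  { x: 30,  y: 0, t: 100 }, // t in milliseconds
  { x: 90,  y: 0, t: 200 },
  { x: 100, y: 0, t: 300 },
];

function velocities(evts) {
  const out = [];
  for (let i = 1; i < evts.length; i++) {
    const d = Math.hypot(evts[i].x - evts[i - 1].x, evts[i].y - evts[i - 1].y);
    out.push(d / (evts[i].t - evts[i - 1].t)); // px per ms
  }
  return out;
}

const v = velocities(events);
const maxV = Math.max(...v);                           // anomaly indicator
const meanV = v.reduce((s, x) => s + x, 0) / v.length; // "who is the user?"

console.log(v);     // [ 0.3, 0.6, 0.1 ]
console.log(maxV);  // 0.6
console.log(meanV); // ≈ 0.333
```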

From Data To Tangible Value

The introduction of mouse dynamics above, of course, is an oversimplification for illustrative purposes. Just by looking at the physical and geometrical measurements of users’ mouse trajectories, you cannot yet tell much about the user. That is the job of the machine learning algorithm. Even features that may seem intuitively useful to you as a human (see examples cited at the end of the previous section) can prove to be of low or zero value for a machine-learning algorithm.

Meanwhile, a deceptively generic or simplistic feature may turn out unexpectedly quite useful. This is why it is important to couple broad feature generation with a good feature selection method, narrowing the dimensionality of the model down to the mouse dynamics that help you achieve good accuracy without overfitting. Some feature selection techniques are embedded directly into machine learning methods (e.g., LASSO, decision trees) while others can be used as a preliminary filter (e.g., ranking features by significance assessed via a statistical test).
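
As a toy example of the preliminary-filter route (all numbers fabricated, and a real dataset would of course be far larger), ranking candidate features by the absolute point-biserial correlation with a binary label might look like this:

```javascript
// Sketch: rank mouse-dynamics features by how strongly they correlate
// with a binary label (e.g., purchased vs. not). Data is made up.
function pointBiserial(feature, labels) {
  const g1 = feature.filter((_, i) => labels[i] === 1);
  const g0 = feature.filter((_, i) => labels[i] === 0);
  const n = feature.length;
  const mean = (a) => a.reduce((s, v) => s + v, 0) / a.length;
  const m = mean(feature);
  const sd = Math.sqrt(feature.reduce((s, v) => s + (v - m) ** 2, 0) / n);
  return ((mean(g1) - mean(g0)) / sd) * Math.sqrt((g1.length * g0.length) / (n * n));
}

const labels = [1, 1, 0, 0];
const features = {
  meanVelocity: [0.6, 0.55, 0.3, 0.2], // clearly separates the classes
  flips: [4, 9, 6, 7],                 // mostly noise for this label
};

const ranked = Object.entries(features)
  .map(([name, vals]) => [name, Math.abs(pointBiserial(vals, labels))])
  .sort((a, b) => b[1] - a[1]);

console.log(ranked[0][0]); // → meanVelocity
```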

As we can see, there is a sequential process to transforming mouse data into mouse dynamics, into a well-tuned machine learning model to field its predictions, and into an applicable solution that generates value for you and your organization. This can be visualized as the pipeline below.

Machine Learning Applications Of Mouse Dynamics

To set the stage, we must realize that companies aren’t really known for letting go of their competitive advantage by divulging the ins and outs of what they do with the data available to them. This is especially true when it comes to tech giants with access to potentially some of the most interesting datasets on the planet (including mouse interaction data), such as Google, Amazon, Apple, Meta, or Microsoft. Still, recording mouse data is known to be a common practice.

With a bit of grit, you can find some striking examples of the use of mouse dynamics, not to mention a surprising versatility in techniques. For instance, have you ever visited an e-commerce site just to see it recommend something specific to you, such as a gendered line of cosmetics — all the while, you never submitted any information about your sex or gender anywhere explicitly?

Mouse data transcends its obvious applications, such as replaying the user’s session and highlighting which visual elements people interact with. A surprising number of internal and external factors that shape our behavior are reflected in the data as subtle indicators and can thus be predicted.

Let’s take a look at some further applications, starting with some simple categorization of users.

Example 1: Biological Sex Prediction

For businesses, knowing users well allows them to provide accurate recommendations and personalization in all sorts of ways, opening the gates for higher customer satisfaction, retention, and average order value. By itself, the prediction of user characteristics, such as gender, isn’t anything new. The reason for basing it on mouse dynamics, however, is that mouse data is generated virtually by the truckload. With that, you will have enough data to start making accurate predictions very early.

If you waited for higher-level interactions, such as which products the user visited or what they typed into the search bar, by the time you’d have enough data, the user may have already placed an order or, even worse, left unsatisfied.

The selection of the machine learning algorithm matters for a problem. In one published scientific paper, six different models were compared for the prediction of biological sex using mouse dynamics. The dataset for the development and evaluation of the models provides mouse dynamics from participants moving the cursor in a broad range of trajectory lengths and directions. Among the evaluated models — Logistic regression, Support vector machine, Random forest, XGBoost, CatBoost, and LightGBM — CatBoost achieved the best F1 score.

Putting people into boxes is far from everything that can be done with mouse dynamics, though. Let’s take a look at a potentially more exciting use case — trying to predict the future.

Example 2: Purchase Prediction

Another e-commerce application predicts whether the user has the intent to make a purchase or even whether they are likely to become a repeat customer. Utilizing such predictions, businesses can adapt personalized sales and marketing tactics to be more effective and efficient, for example, by catering more to likely purchasers to increase their value — or the opposite, which is investigating unlikely purchasers to find ways to turn them into likely ones.

Interestingly, a paper dedicated to the prediction of repeat customership reports that when a gradient boosting model is validated on data obtained from a completely different online store than where it was trained and tuned, it still achieves respectable performance in the prediction of repeat purchases with a combination of mouse dynamics and other interaction and non-interaction features.

It is plausible that though machine-learning applications tend to be highly domain-specific, some models could be used as a starting seed, carried over between domains, especially while still waiting for user data to materialize.

Additional Examples

Applications of mouse dynamics are a lot more far-reaching than just the domain of e-commerce. To give you some ideas, here are a couple of other variables that have been predicted with mouse dynamics:

The Mouse-Shaped Caveat

When you think about mouse dynamics in-depth, some questions will invariably start to emerge. The user isn’t the only variable that could determine what mouse data looks like. What about the mouse itself?

Many brands and models are available for purchase to people worldwide. Their technical specifications deviate in attributes such as resolution (measured in DPI or, more accurately, CPI), weight, polling rate, and tracking speed. Some mouse devices have multiple profile settings that can be swapped between at will. For instance, the common CPI of an office mouse is around 800-1,600, while a gaming mouse can go to extremes, from 100 to 42,000. To complicate things further, the operating system has its own mouse settings, such as sensitivity and acceleration. Even the surface beneath the mouse can differ in its friction and optical properties.

Can we be sure that mouse data is reliable, given that basically everyone potentially works under different mouse conditions?

For the sake of argument, let’s say that as a part of a web app you’re developing, you implement biometric authentication with mouse dynamics as a security feature. You sell it by telling customers that this form of auth is capable of catching attackers who try to meddle in a tab that somebody in the customer’s organization left open on an unlocked computer. Recognizing the intruder, the app can sign the user out of the account and trigger a warning sent to the company. Kicking out the real authorized user and sounding the alarm just because somebody bought a new mouse would not be a good look. Recalibration to the new mouse would also produce friction. Some people like to change their mouse sensitivity or use different computers quite often, so frequent calibration could potentially present a critical flaw.

We found that up until now, there was barely anything written about whether or how mouse configuration affects mouse dynamics. By mouse configuration, we refer to all properties of the environment that could impact mouse behavior, including both hardware and software.

From the authors of papers and articles about mouse dynamics, there is barely a mention of the mouse devices and settings involved in development and testing. This could be seen as concerning. Hypothetically, there might not be an actual reason for concern, but that is exactly the problem: there was simply not enough information to make a judgment on whether mouse configuration matters or not. This question is what drove the study conducted by UXtweak Research (as covered in the peer-reviewed paper in Computer Standards & Interfaces).

The quick answer? Mouse configuration does detrimentally affect mouse dynamics. How?

  1. It may cause the majority of mouse dynamics values to change in a statistically significant way between different mouse configurations.
  2. It may lower the prediction performance of a machine learning model if it was trained on a different set of mouse configurations than it was tested on.

It is not automatically guaranteed that prediction based on mouse dynamics will work equally well for people on different devices. Even the same person making the exact same mouse movements does not necessarily produce the same mouse dynamics if you give them a different mouse or change their settings.

We cannot say for certain how big an impact mouse configuration can have in a specific instance. For the problem that you are trying to solve (specific domain, machine learning model, audience), the impact could be big, or it could be negligible. But to be sure, it should definitely receive attention. After all, even a deceptively small percentage of improvement in prediction performance can translate to thousands of satisfied users.

Tackling Mouse Device Variability

Knowledge is half the battle, and so it is also with the realization that mouse configuration is not something that can be just ignored when working with mouse dynamics. You can perform tests to evaluate the size of the effect that mouse configuration has on your model’s performance. If, in some configurations, the number of false positives and false negatives rises above levels that you are willing to tolerate, you can start looking for potential solutions by tweaking your prediction model.

Because of the potential variability in real-world conditions, differences between mouse configurations can be seen as a concern. Of course, if you can rely on controlled conditions (such as in apps only accessible via standardized kiosks or company-issued computers and mouse devices where all system mouse settings are locked), you can avoid the concern altogether. Given that the training dataset uses the same mouse configuration as the configuration used in production, that is. Otherwise, that may be something new for you to optimize.

Some predicted variables can be observed repeatedly from the same user (e.g., emotional state or intent to make a purchase). In the case of these variables, to mitigate the problem of different users utilizing different mouse configurations, it would be possible to build personalized models trained and tuned on the data from the individual user and the mouse configurations they normally use. You also could try to normalize mouse dynamics by adjusting them to the specific user’s “normal” mouse behavior. The challenge is how to accurately establish normality. Note that this still doesn’t address situations when the user changes their mouse or settings.
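
One simple way to express that kind of normalization is a per-user z-score against the user's historical sessions. The feature names and values below are invented for illustration:

```javascript
// Sketch: normalize a session's mouse dynamics against that user's own
// baseline (z-scores) — "how unusual is this session for THIS user?"
function zScores(sessionFeatures, baseline) {
  // baseline: array of past per-session feature vectors for the user
  const out = {};
  for (const k of Object.keys(sessionFeatures)) {
    const vals = baseline.map((b) => b[k]);
    const mean = vals.reduce((s, v) => s + v, 0) / vals.length;
    const sd = Math.sqrt(
      vals.reduce((s, v) => s + (v - mean) ** 2, 0) / vals.length
    );
    out[k] = sd === 0 ? 0 : (sessionFeatures[k] - mean) / sd;
  }
  return out;
}

const baseline = [
  { meanVelocity: 0.30, straightness: 0.90 },
  { meanVelocity: 0.34, straightness: 0.86 },
  { meanVelocity: 0.32, straightness: 0.88 },
];

// Much faster than usual, equally straight — a large z flags the anomaly.
console.log(zScores({ meanVelocity: 0.5, straightness: 0.88 }, baseline));
```

A large z-score then flags behavior that is unusual for that particular user; deciding how many sessions make a trustworthy baseline is the hard part mentioned above.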

Where To Take It From Here

So, we arrive at the point where we discuss the next steps for anyone who can’t wait to apply mouse dynamics to machine learning purposes of their own. For web-based solutions, you can start by looking at MouseEvents in JavaScript, which is how you’ll obtain the elementary mouse data necessary.
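
A minimal collector might look like the sketch below; the DOM wiring is shown in comments so the recording logic itself stays framework-free:

```javascript
// Sketch: turning browser MouseEvents into timestamped (x, y) samples —
// the elementary mouse data everything else builds on.
function createRecorder() {
  const trail = [];
  return {
    record(evt) {
      // clientX/clientY/timeStamp are standard MouseEvent fields.
      trail.push({ x: evt.clientX, y: evt.clientY, t: evt.timeStamp });
    },
    trail,
  };
}

const recorder = createRecorder();

// In the browser you would wire it up like this:
// document.addEventListener('mousemove', (e) => recorder.record(e));
// document.addEventListener('mousedown', (e) => recorder.record(e));

// Simulated events, so the sketch runs anywhere:
recorder.record({ clientX: 5, clientY: 7, timeStamp: 1000 });
recorder.record({ clientX: 9, clientY: 10, timeStamp: 1016 });

console.log(recorder.trail.length); // 2
```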

Mouse events will serve as the base for calculating mouse dynamics and the features in your model. Pick any that you think could be relevant to the problem you are trying to solve (see our list above, but don’t be afraid to design your own features). Don’t forget that you can also combine mouse dynamics with domain and application-specific features.

Problem awareness is key to designing the right solutions. Is your prediction problem within-subject or between-subject? A classification or a regression? Should you use the same model for your whole audience, or could it be more effective to tailor separate models to the specifics of different user segments?

For example, the mouse behavior of freshly registered users may differ from that of regular users, so you may want to divide them up. From there, you can consider the suitable machine/deep learning algorithm. For binary classification, a Support vector machine, Logistic regression, or a Random Forest could do the job. To delve into more complex patterns, you may wish to reach for a Neural network.

Of course, the best way to uncover which machine/deep learning algorithm works best for your problem is to experiment. Most importantly, don’t give up if you don’t succeed at first. You may need to go back to the drawing board a few times to reconsider your feature engineering, expand your dataset, validate your data, or tune the hyperparameters.

Conclusion

With the ongoing trend of more and more online traffic coming from mobile devices, some futurist voices in tech might have you believe that “the computer mouse is dead”. Nevertheless, reports of its death have been greatly exaggerated. One look at statistics reveals that while mobile devices are excessively popular, the desktop computer and the computer mouse are not going anywhere anytime soon.

Classifying users as either mobile or desktop is a false dichotomy. Some people prefer the desktop computer for tasks that call for exact controls while interacting with complex information. Working, trading, shopping, or managing finances — all, coincidentally, are tasks with a good amount of importance in people’s lives.

To wrap things up, mouse data can be a powerful information source for improving digital products and services and getting yourself a headway against the competition. Advantageously, data for mouse dynamics does not need to involve anything sensitive or in breach of the user’s privacy. Even without identifying the person, machine learning with mouse dynamics can shine a light on the user, letting you serve them more proper personalization and recommendations, even when other data is sparse. Other uses include biometrics and analytics.

Do not underestimate the impact of differences in mouse devices and settings, and you may arrive at useful and innovative mouse-dynamics-driven solutions to help you stand out.

]]>
hello@smashingmagazine.com (Eduard Kuric)
<![CDATA[Combining CSS :has() And HTML <select> For Greater Conditional Styling]]> https://smashingmagazine.com/2024/05/combining-css-has-html-select-conditional-styling/ https://smashingmagazine.com/2024/05/combining-css-has-html-select-conditional-styling/ Thu, 02 May 2024 10:00:00 GMT