I got COVID two days before demo day and the presentation of this "insight reflection." It's not finished (and probably won't be) but it has some valuable lessons on design so I thought I'd share. My team and I designed an AR grocery shopping experience.
I was sitting underneath the mini Olin Pavilion, slapping my socks as the mosquitoes tried to claim my blood, when an Olin student named Ian walked up to me and introduced himself. I stood straight up. You can imagine a hunched-over individual shooting up as quickly as a jack-in-the-box toy does when you complete the last spin. It was my second week at Olin; I’d been spending more time around the university and became increasingly delighted with each additional interaction.
Ian and I talked about which domain ruled the world. Not in the sense of autocracy. We were thinking about who built most of the usable and important things in our daily lives. Was it engineers or designers? Probably not business people, at least not anymore. What about architects or artists?
“Engineers Rule the World” was the title of a draft essay I wrote over the summer. It was also the sentence I opened our conversation with. Civil engineers build the world’s infrastructure. Software engineers control our interaction with devices. Electrical engineers build more powerful computers.
While those things are true, the better sentence would’ve been engineers have leverage, but they don’t rule the world. Ian reminded me that designers create the interactions of software before handing it to an engineer to build. Civil engineers work with architects who’ve created high-fidelity blueprints. Electrical engineers work with schematic designers before building computers. Design is perhaps the first layer of the “stack” of building anything.
I began studying at Olin because a) these conversations were important to me. I had a blind spot in viewing the building stack before my conversation with Ian, and now I have an insight to test against. And b) because I wanted to try to build the full stack. I began working in front-end development at the beginning of the semester with languages like JS, HTML, and CSS and wanted to use User Experience Design @Olin to build UI skills so I could pair delightful design with well-written software.
Each class at Olin is project-based, so I worked with a team to build a usable interface. I’ll spend the first part of the essay talking about what we built and our design process. We tested with four users before our demo day. The second part of the essay highlights our design pipeline and the process of moving from low to high fidelity.
If you’d like to view the final project before reading the essay, you can visit our GitHub page.
Q: Why is it a good idea to show the users what the experience looks like before they log in or sign up?
A: As designers, our job is to guide users to “aha” quickly. This means that within seconds of viewing a welcome screen, a user understands the value proposition and is excited to get started. The goal of our welcome screen is to convey the value prop of AR shopping through a mockup of what the experience looks like. You can view a sustainability score, reviews, and the price of multiple products from your shopping list without needing to move down the aisle. In addition, all shoppers can click on the three leaves and view the sustainability score of a product, something that more and more customers want to know before they purchase.
Q: What do users think of single/social sign on?
A: In Q2 2016, almost 93% of consumers preferred social login over traditional email registration on websites. Social login, i.e., logging in with Google, Facebook, or Apple, is a form of what’s technically called single sign-on. Our team chose to offer single sign-on because it was the most popular way of logging in among the users we surveyed. Not only do users have to remember fewer passwords, but they can decide to give our platform access to their grocery lists or Google searches, which contain information on products they’d like to purchase. Single sign-on makes our platform more valuable because we can convert an unintentional query for soap into a push notification asking a user whether that item should be added to their shopping list.
Part of our user group has concerns about data privacy, so single sign-on is something they’re skeptical about. Because you’re giving data from one platform (Facebook) away to another (Little Trees), Little Trees gains access to a user’s birthday, contact list, Facebook groups, and more. Little Trees can now build a profile of a user with information that users may regret consenting to. Instead of successively asking a user whether they consent to sharing contacts, enabling notifications, pairing with Bluetooth, or granting location access, designers can display a full list of privacy options that users toggle on and off. Empirically, users enjoy privacy pages where they can continuously view and edit the permissions they’ve given a platform, instead of being bombarded with allow/disallow prompts after signing in or a million words of privacy policy. It’s also unwise to assume that the permissions a user has given to Google transfer to a new platform, as the comfort and usability of both may warrant different preferences.
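The privacy page described above can be sketched as a single permissions model that the settings screen reads and toggles, rather than a series of one-off allow/disallow prompts. This is a minimal illustration; the permission names and functions are our own, not from the actual Little Trees codebase.

```javascript
// Every permission defaults to off; the privacy page renders this one object
// as a list of toggles the user can revisit and edit at any time.
const defaultPermissions = {
  contacts: false,
  notifications: false,
  bluetooth: false,
  location: false,
};

function togglePermission(permissions, name) {
  if (!(name in permissions)) {
    throw new Error(`Unknown permission: ${name}`);
  }
  // Return a new object so UI frameworks can detect the state change.
  return { ...permissions, [name]: !permissions[name] };
}

function grantedPermissions(permissions) {
  // List what the user has currently allowed, for display on the privacy page.
  return Object.keys(permissions).filter((name) => permissions[name]);
}
```

Because the state lives in one place, "what have I shared?" is always a single lookup instead of a reconstruction from scattered prompts.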
Q: Our team debated the placement of the eye next to the password button, why is this something to discuss?
A: When users see an eye next to a password field, they think one of two things: 1) I’ll be able to see my password if I mistype and correct the error, or 2) I will see my password if I click on the eye. I didn’t know about the latter until talking with a user this weekend. Many mobile sites have gone passwordless, using a biometric or personal identification as the key to unlock an account or device, but web-based platforms have kept the eye button to help the first persona, the mistypers. Our team considered iris detection as a biometric to unlock Little Trees access, since we’re an AR-based platform. Iris scanning is often cited as the most secure of the common authentication methods, with the lowest rejection rate among face, fingerprint, passwords, and PINs.
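The eye-button behavior for the second persona (click to reveal) comes down to flipping the input between masked and plain text. A minimal sketch, written as a pure function so the logic can be tested without a DOM; the element names in the comment are illustrative:

```javascript
// Clicking the eye toggles the password field between masked and visible.
function nextInputType(currentType) {
  return currentType === "password" ? "text" : "password";
}

// In the browser this would be wired to the eye button, for example:
// eyeButton.addEventListener("click", () => {
//   passwordInput.type = nextInputType(passwordInput.type);
// });
```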
Q: What are the general rules about a drop shadow that designers think about?
A: Drop shadows are debated in the design community, with some designers advocating for little to no use of them. Our design includes many drop shadows, such as the purple login button. We wanted to add depth and emphasis to our design so users weren’t staring at flat buttons while navigating through our experience. Most of the shadows we used were near the 6 dp (density-independent pixel) range; the higher the dp value, the more elevation a user perceives between the surface and the container. One thing we learned while designing with drop shadows is that a shadow should have low opacity, and the exact percentage is determined by the background of the design. On some of our AR designs, the drop shadow has lower opacity, making it difficult for users to tell the difference between a flat button and one with a drop shadow. When we tested with users, they preferred subtle, lower-opacity drop shadows so they could get the 3D feel of the content without noticing that the element had a shadow.
Our team rule was to use drop shadows when we wanted to differentiate between a container and a background. For example, the login button is the most important asset on the welcome page because it will take a user to the next screen. In essence, we’re giving this asset a glowing effect, hoping a user recognizes the object’s importance relative to the rest of the page.
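One way to keep shadows consistent with the elevation idea above is to derive the CSS `box-shadow` from a dp value, so higher elevation means a softer, larger shadow while opacity stays low. The exact blur and offset formulas here are our own illustrative choices, not a published spec:

```javascript
// Map an elevation in dp to a CSS box-shadow string. Opacity stays low so
// the shadow reads as subtle depth rather than an obvious outline.
function boxShadowForElevation(dp, opacity = 0.15) {
  const blur = dp * 2;               // softer edge as elevation grows
  const offsetY = Math.ceil(dp / 2); // shadow falls slightly below the surface
  return `0 ${offsetY}px ${blur}px rgba(0, 0, 0, ${opacity})`;
}
```

For the ~6 dp shadows we used, `boxShadowForElevation(6)` produces `0 3px 12px rgba(0, 0, 0, 0.15)`, and a background-specific opacity can be passed in where the default is too faint.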
Q: What should I keep in mind when designing a border radius?
A: Similar to the eye button next to a password textbox, corner radii (how rounded a shape’s corners are) signify different meanings to a user. For example, 50 px is standard for tags; Notion databases are a good example of this. Sign-in buttons, by contrast, typically have a 10 px radius. Users associate different pixel values with different meanings and prefer consistency of these values across each platform they use. Corner radii are an example of a standard that, if changed, could increase a user’s learning curve. For example, if we switched the meanings of a 50 px and a 10 px button on our website, the user might need to relearn the interface. What seems like a trivial design choice could be the difference between a user deleting something from their cart and a user sending a $100 tip to the cashier who helped them check out.
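One way to enforce this consistency is to encode radii as semantic design tokens that components look up by role, so a tag can never silently swap meaning with a button. The tag and button values come from the conventions above; the `card` value is our own assumption for illustration:

```javascript
// Semantic corner-radius tokens: components ask for a role, not a raw pixel value.
const cornerRadius = {
  tag: "50px",    // fully rounded "pill" shape, like Notion database tags
  button: "10px", // gently rounded, like standard sign-in buttons
  card: "16px",   // assumed value for containers, not from the essay
};

function radiusFor(role) {
  if (!(role in cornerRadius)) {
    throw new Error(`No radius token for role: ${role}`);
  }
  return cornerRadius[role];
}
```

Changing a convention then means editing one token, and a typo'd role fails loudly instead of shipping an inconsistent radius.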
Q: When do you choose to include a gradient? Do other AR screens have gradients and if so, what do their gradients look like?
A: Gradients should be chosen when you want users to be drawn to certain containers of the website, for example, a call-to-action button a user should press. If drop shadows provide depth to signal an asset’s importance, gradients provide color to do the same.
A counterintuitive insight our team got while designing gradients is that it’s better to use analogous colors instead of complementary colors. In other words, create a gradient with colors that are close to each other on the color wheel. In addition, gradients should be smooth so the user can’t tell where one color stops and bleeds into the next. If the transition is too abrupt, users won’t be delighted.
The color of our gradient was a mix of forest and leaf green because we want our users to be thinking about sustainability and nature as they’re shopping.
AR screens typically include an opaque or semi-opaque container to improve legibility. Because containers are superimposed onto a user’s environment, it’s important that the color chosen by the designer works against any background. Our design used a linear gradient with medium opacity instead of a solid color with opacity because when we A/B tested the two, users preferred the former; the linear gradient complemented the text better than the solid color did. In addition, it’s recommended that designers use linear gradients for rectangular areas and radial gradients for round ones.
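The analogous-color rule can be made concrete by generating the gradient stops a short distance apart on the color wheel. This is a sketch: the HSL saturation/lightness values, the 20-degree step, and the medium 0.6 alpha are illustrative defaults, not our exact production values (our greens sat around hue 120).

```javascript
// Build a CSS linear-gradient whose stops are analogous hues: each stop is
// only a small step around the color wheel from the last, so the blend
// stays smooth, and the alpha keeps the container semi-opaque for AR.
function analogousGradient(baseHue, { step = 20, stops = 2, alpha = 0.6 } = {}) {
  const colors = [];
  for (let i = 0; i < stops; i++) {
    colors.push(`hsla(${(baseHue + i * step) % 360}, 60%, 45%, ${alpha})`);
  }
  return `linear-gradient(${colors.join(", ")})`;
}
```

A complementary gradient would instead jump ~180 degrees between stops, which is exactly the abrupt transition users disliked.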
Meta used some gradient backgrounds in their early AR prototypes.
Q: Why is a user configuring the experience at home instead of the grocery store?
A: We imagine that users will be using AR glasses by 2025 at the earliest. For most, this will be their first experience with an AR headset. We wanted to be intentional about having users onboard in an environment, like a living room, that they’re comfortable with. This way, they can test out the features and walk around their space as if they were shopping before they enter the grocery store.
Q: Why is it important to have someone greet you during the onboarding?
A: Our group A/B tested onboarding with and without a face. Users preferred having Terry at the top right of the screen because it felt like a conversation was being had. If we’d had more time, our group would have added an animation to the text, simulating a chatbot or iMessage conversation, so that if the user had questions about the sustainability score or an unfamiliar part of the product, they could ask. We were also intentional about showing customers how the product can be useful to them through the avatar Terry. It resembles a friend recommending features of a product instead of a company handing a stack of features to a new customer.
Q: What is important for a user to know when they begin using a new platform?
A: Although obvious, our team kept these steps in mind: