
Role: Core UX Designer & Researcher
Domain: Retail Q-Commerce
Timeline: 2 months
Tools
SITUATION
Identifying Gaps in the Chatbot Experience
The chatbot on a Q-commerce platform was designed to help customers resolve their issues independently, but users often struggled to do so. This led to frustration and unnecessary back-and-forth, especially for high-intent customers attempting to explain their issues. We observed low success rates: 70% of conversations required a chat agent to intervene because the bot couldn’t resolve the issue. We needed to add 2 new topics to the existing set of 5 without adding to the confusion. To understand how to proceed, we examined the chatbot’s existing menu.

Upon analyzing all the topics and issues provided, we realized two important problem areas:
🔹 The mapping between topics and issues did not feel intuitive
🔹 The language used in the copy seemed too technical
But here’s the challenge: there was no data to confirm whether users were navigating multiple topics in search of the correct issue because they were confused, or whether the terminology simply did not match the words they commonly use.
🤔
So how could we identify what was causing the problem?
APPROACH
Discovering the Disconnect

We decided to conduct research to understand the situation better. To uncover how customers think and navigate, I chose to use Card Sorting—a UX research method that reveals how people naturally group and categorize information.
How would Card Sorting help?
In this research method, participants group concepts into categories that feel most intuitive to them, helping us determine whether our design aligned with their mental model or needed adjustment. To avoid baking in assumptions about the existing structure, I chose a hybrid card sort, which allowed participants to sort items into predefined categories while also creating new ones if needed. The method can be conducted with physical cards too, but we opted for an online tool for the ease and efficiency of analysing the data.
Snippet of User Journey | Make a Payment
TASK
Mapping Minds with Card Sorting



Snippets of the card sorting session in progress
I created digital cards for all topics and their corresponding issues. Participants started with all cards ungrouped and were tasked with sorting them in a way that made the most sense to them. Each participant completed the activity independently, without discussion or visibility into others' choices, to minimise bias in the results.

Overview of the card sorting process
Discussing the Wheres and Whys
Once each participant completed the sorting, we had an engaging discussion to understand their reasoning. The session was lively, with participants eager to see how others had grouped the cards. It was fascinating to observe recurring patterns and differences in their thought processes.
To foster an open and productive environment, I emphasized that there were no "correct" or "incorrect" groupings—our goal was to assess whether the existing categories were intuitive. Even during discussions, I encouraged the team to avoid labeling responses as "right" or "wrong," ensuring we remained open to insights rather than reinforcing rigid assumptions.
TASK
Organizing, Deciphering, Prioritizing
We gathered a lot of promising data from the session, but that much data can be overwhelming to organize. The real challenge was to compile our observations in a structured way that would communicate the impact of these insights to stakeholders as clearly as possible.
To effectively communicate the necessary changes to stakeholders, I created a colour-coded sheet highlighting the priority of each adjustment. A representative visual is shown below.

Given our limited bandwidth for changes, we prioritized the findings based on their criticality. Task prioritization can be subjective, but in this case, we relied on precise data points to guide our decisions.
I reviewed how each option was categorized by participants and tracked how often it was placed in a category different from the current app structure. We decided to focus on the highest-priority items, marked in red, representing crucial changes that required immediate action.
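For illustration, here is a minimal sketch of how that misplacement count could be computed from exported sort results. The card names, current categories, participant data, and the 60% "red" threshold are illustrative assumptions, not the actual study data.

```python
# Minimal sketch: flag cards that most participants moved away from
# their current category. All names and numbers below are illustrative.

# Current app structure: which category each card (issue) lives under today.
current_structure = {
    "Refund not received": "Transaction Related",
    "Item missing from order": "Item Related Query",
    "Cancel my order": "Order Related",
}

# One dict per participant: card -> category they placed it in.
participant_sorts = [
    {"Refund not received": "Returns & Refunds",
     "Item missing from order": "My Order",
     "Cancel my order": "My Order"},
    {"Refund not received": "Returns & Refunds",
     "Item missing from order": "My Order",
     "Cancel my order": "Order Related"},
    {"Refund not received": "Payments",
     "Item missing from order": "Item Related Query",
     "Cancel my order": "Order Related"},
]

def misplacement_rates(current, sorts):
    """Share of participants who placed each card outside its current category."""
    rates = {}
    for card, current_category in current.items():
        placements = [sort[card] for sort in sorts if card in sort]
        moved = sum(1 for category in placements if category != current_category)
        rates[card] = moved / len(placements) if placements else 0.0
    return rates

RED_THRESHOLD = 0.6  # assumed cut-off for the highest-priority ("red") items

rates = misplacement_rates(current_structure, participant_sorts)
for card, rate in sorted(rates.items(), key=lambda item: -item[1]):
    flag = "RED" if rate >= RED_THRESHOLD else "ok"
    print(f"{card}: {rate:.0%} of participants relocated it [{flag}]")
```

This mirrors the colour-coded sheet: the priority of each change comes from a measurable signal, how often participants relocated a card, rather than subjective judgement.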
OUTCOME
A Customer-Centric Approach to the New IA
Based on insights from the card sorting study, I redesigned the chatbot’s information architecture to align with how customers naturally think about their order journey. Instead of restricting the number of topics upfront, we let the data guide us, resulting in five logical sections that made issue resolution more intuitive. By structuring topics around the customer’s journey, we ensured that users could quickly find the most relevant path to resolve their issues.
Key Highlights of the New Information Architecture:
🔸 Added a "Returns & Refunds" section, since many users struggled to find these topics.
🔸 Used consistent terminology: "order" for the purchase and "item" for individual products.
🔸 Structured topics around clear order journey stages for intuitive mapping of topic to issue.
🔸 Replaced "Transaction" with "Payment" to use simple, commonly used terminology.
🔸 Removed redundant categories like "Item Related Query" to streamline navigation.

Snippet of the revamped information architecture
TAKEAWAYS
Impact & Next Steps
Our research helped advocate for critical chatbot improvements, and we received approval for an improved information architecture. Even with better navigation, chatbot responses were still confusing, especially for high-intent users trying to explain their issues. This led to the integration of generative AI, which has now launched and has reduced the number of steps needed for issue resolution. Previously, selecting the wrong issue meant restarting the flow and adding unnecessary clicks; now the chatbot dynamically adapts. The new IA will be implemented next, after some tech stack changes, and will further streamline the experience.
Exciting changes are in motion, and we’re already seeing improvements! 🎉