Empowering Trust: Sources for LLM Responses

Seamless Source Verification to Boost Workplace Confidence and Efficiency

Overview

As large language models (LLMs) grow more sophisticated, users increasingly demand visibility into the sources behind generated responses. This project focused on designing an intuitive system that shows citations as cards alongside LLM-generated responses, ensuring transparency and encouraging further exploration.

Role

Lead UX, Visual Design & Project Coordinator

Team

Lead designer, 2 PMs, Lead FE & Head of AI

Timeline

1 month

Background

As companies increasingly rely on AI-driven chatbots, the need for validating the accuracy of AI-generated information becomes crucial.

The bot-driven virtual assistant leverages generative AI and natural language processing to deliver conversational responses, enabling employees to access critical company-wide information.

However, LLMs are not always perfect, and employees needed a way to cross-check the answers without disrupting their workflow. The existing chatbot interface offered answers but lacked an efficient way to verify bot responses.

Short-term fix

When we first had the requirement to show article links within the bot response, we made the design decision to introduce inline chips - clickable source links embedded directly within the bot’s text.

This quick implementation provided immediate visibility into sources, reassuring users that information was grounded in real policies. However, for complex responses requiring multiple citations, inline chips overwhelmed the interface, turning concise answers into a chaotic string of links. Users faced fragmented readability, especially on mobile, and the design lacked space to prioritize high-impact sources.

Though inline chips were effective for simple use cases, the inability to gracefully handle dense sourcing revealed the need for a more structured, scalable solution.

Inline chips were used to display sources within the bot response

Problem statement

Employees often faced difficulties validating bot-generated responses. This caused frustration, as it was time-consuming to search for sources and articles elsewhere. Without an efficient means of validating responses, employees were uncertain whether the data provided was credible, leading to potential errors and follow-up queries for clarification.

The existing design was also limiting - it could only show a handful of sources via inline chips, and adding chips to responsive bot bubbles created visual chaos. Employees constantly toggled between bot responses and sources, which increased cognitive load and caused frustration.


How can we build employee trust by ensuring LLM-generated responses are well-sourced, accurate, and easily accessible?

Research & Insights

As part of my initial phase, I conducted a UX audit to identify critical pain points - user flows, manual data-entry bottlenecks, and fragmented source visibility.
To inform our approach to source attribution, I analyzed how leading AI platforms - ChatGPT, Perplexity, Grok, and Google Gemini - handle source visibility.

  1. ChatGPT displays a simple sources button which, when clicked, reveals the sources in a list format under a side sheet.

  2. Perplexity displays source cards in a section above the response and expands them through an overlay sidebar that blurs the chat window, balancing focus with accessibility, though it disrupts context.

  3. Grok shows sources in a tabbed format, separating between X posts and web pages, offering a visual distinction between different types of sources.

  4. Google Gemini highlights sources in footnotes, favouring scalability for dense information.

These platforms highlighted a few key insights:

  1. Context Preservation: Perplexity’s overlay inspired a non-destructive design, keeping the chat visible but dimmed.

  2. Scalability vs. Clarity: Gemini’s collapsible footnotes revealed the need for expandable sections to handle 8+ sources without clutter.

  3. User Control: Grok's subtle citations emphasized letting users choose when to dive deeper, avoiding forced interruptions.

By blending these insights, and staying focused on our goal, we ideated a contextual side sheet that surfaces sources on demand, prioritizes scannability, and adapts to diverse content types - ensuring trust without compromising usability.

Perplexity's overlay sidesheet approach to display sources

User needs

My desk research and discussions with stakeholders and existing customers highlighted several key user needs.

  • Quick validation: Employees needed a simple way to validate LLM responses.

  • Non-disruptive workflow: Employees needed the source verification process to be seamless, without interrupting their ongoing tasks.

  • Contextual access: Employees wanted sources to be easily accessible within the chat interface, rather than requiring navigation away from it.

Ideation & Design

Concept development

  • I began with sketches and wireframes, focusing on how to incorporate source cards seamlessly into the chat interface.

  • I defined flows for both web and mobile, ensuring that the citation cards were accessible without overwhelming the primary response area.

  • The overlay approach, drawn from Perplexity’s design, initially appeared promising. Its core strength lies in minimizing visual noise and directing undivided attention to the source list.

Initial sketch where side sheet interaction is explored

Refinement - Iteration 2

  • Stakeholder discussions, informed by prior user preferences and historical use cases, drove the decision to abandon overlays. Central to the debate was the necessity for simultaneous visibility of responses and sources - existing workflows revealed that users relied on cross-referencing information when referring to multiple sources.

  • Thus, removing the overlay layer in favour of a layout that kept responses and sources side by side made more sense.

  • In the next iteration, I explored showcasing the web article container in a lightbox-style interface, giving users the freedom to navigate between different sources.

A lightbox approach where users can navigate between different sources at once

More refinement - Iteration 3

  • A design review call with stakeholders, mainly the engineering lead, revealed a technical challenge: refactoring our webview system to support iframe-based article loading. Restructuring how webviews were rendered - originally designed to load into the chat window - exceeded the project’s timeline, forcing us to pivot toward simpler, non-disruptive sourcing solutions.

  • In the next iteration, the focus shifted towards integrating source material directly within the chat interface. This approach aimed to let users seamlessly switch between different sources without losing context or flow.

3-col layout approach where web articles load in chat window

One more addition - Related search results

  • One piece of feedback was about adding relevant search articles published from the policy, FAQ, and SOP data.

  • Enterprise knowledge often contains duplicated or overlapping information. The goal of displaying related search results was to enhance transparency for employees regarding the article response while also improving navigation.

  • This ensures employees can easily access relevant content if the LLM response appears in a related article.

Critical juncture

The Responsive Design Dilemma

  • Initially, the 3-column layout (left panel, chat area, sources) seemed ideal - users could cross-reference sources without losing context.

  • However, on devices with 768px–1024px widths, the layout collapsed into a cramped mess. This led to a chaotic user experience where readability and interaction were compromised.

  • After presenting an example screen to stakeholders, it became evident that the design was not scaling well for narrower screens. The feedback session highlighted the perils of a shrinking section, leading us back to the drawing board to re-evaluate our approach.

Breaking the Deadlock: Adaptive Grids

  • After thorough discussion, we decided to explore alternatives. I proposed revisiting the layout for specific screen sizes, focusing on devices with widths between 960px and 1199px, where the three-column layout was particularly problematic.

  • To solve this, I proposed hiding the left panel when the side sheet opens, in order to accommodate the article view. This was initially flagged as "bad UX", but a quick round of usability testing produced feedback that helped me convince stakeholders, and we moved past this critical juncture.

  • Next, we settled on a two-column layout for this range, and mapped out layout styles for different screen sizes, as detailed in the table below:
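The collapse rule described above could be sketched roughly as follows. Only the 960px–1199px two-column behaviour comes from this case study; the function name and the implied behaviour outside that range are illustrative assumptions, not the actual implementation.

```typescript
// Sketch of the adaptive-grid rule: in the 960–1199px range the three
// columns (left panel, chat, sources) don't fit, so the left panel
// collapses whenever the source side sheet is open.
function shouldHideLeftPanel(viewportWidth: number, sideSheetOpen: boolean): boolean {
  return sideSheetOpen && viewportWidth >= 960 && viewportWidth <= 1199;
}
```

On wider screens all three columns remain visible side by side; on mobile a bottom sheet takes over entirely, so the rule never applies there.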

Final design & Implementation

Source cards

  • To ensure consistency across AI pipeline references, I standardized the structure of the source cards that go into the Sources side sheet, aligning their visual language - for example, layout, title hierarchy, and spacing.

  • Iterations 1 to 3 progressively introduced elements like icons to improve content clarity, with each iteration refining the balance between simplicity and added detail.

  • The final iteration, 4, excels in clarity, simplicity, and effective visual hierarchy, making content easily identifiable and readable.
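As a rough illustration, the standardized card could be described by a shape like the one below. All names are hypothetical, based only on the elements mentioned above (title hierarchy, icons, supporting text); the real component may differ.

```typescript
// Hypothetical shape of a standardized source card.
interface SourceCard {
  title: string;    // primary line in the title hierarchy
  url: string;      // link to the underlying article or policy
  icon?: string;    // source-type icon introduced in later iterations
  snippet?: string; // secondary supporting text
}

// A single construction point keeps spacing and hierarchy decisions
// consistent across all AI pipeline references.
function makeSourceCard(title: string, url: string, icon?: string, snippet?: string): SourceCard {
  return { title: title.trim(), url, icon, snippet };
}
```

Centralizing the card shape is what lets every pipeline render the same visual language regardless of where the citation originated.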

Final design & Implementation

Decisions around Interaction Design

To foster trust and streamline source validation, the design prioritizes intuitive interactions that adapt across devices:

  • Dual access for flexibility: The side sheet opens via the “View all” CTA or the first few snippet cards, letting users choose between holistic exploration and focused verification.

  • Dynamic CTA feedback: “View all” becomes “Close” when the side sheet is active, mirroring familiar patterns to signal state changes. This reduces cognitive load by avoiding ambiguous UI states.

  • Side sheet auto-closes when the composer is used: Focus shifts seamlessly to new queries (e.g. typing “Explain promotions policy” hides the sources to prioritize typing).

  • Mobile-first adaptation: On mobile, a bottom sheet replaces the side sheet, sliding up to display sources while blurring the background response.

Here, you can see a working prototype to understand how this works, end-to-end.
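The interaction rules above can be modelled, in very rough form, like this; the state shape and function names are hypothetical, chosen only to mirror the behaviours listed.

```typescript
// Rough sketch of the side-sheet interaction rules described above.
interface SheetState {
  open: boolean;
}

// Dynamic CTA feedback: "View all" becomes "Close" while the sheet is active.
function ctaLabel(state: SheetState): string {
  return state.open ? "Close" : "View all";
}

// Dual access: the CTA toggles the sheet; a snippet card always opens it.
function onCtaClick(state: SheetState): SheetState {
  return { open: !state.open };
}

function onSnippetCardClick(): SheetState {
  return { open: true };
}

// Using the composer auto-closes the sheet so focus shifts to the new query.
function onComposerInput(): SheetState {
  return { open: false };
}
```

Keeping the sheet's state transitions this small is what makes the CTA label unambiguous: the label is always derived from the same single `open` flag that every interaction updates.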

Source card component states and types

Source snippets

  • To hint at the availability of sources beneath the LLM response, I needed to design snippets of these sources. In the first few iterations, I explored snippet lengths; the main goal was to keep them compact and space-efficient, giving users an at-a-glance indication of the sources that makes it easy to scan and identify key ones quickly.

  • The compact format ensures primary information remains prominent while secondary content stays accessible.

Iterations of the source cards

A set of all the components used for this feature


Takeaways & Lessons learned

  • Increased credibility: Early releases to new accounts showed a measurable increase in user trust scores when citation cards were present.

  • Enhanced engagement: Users spent more time interacting with the source cards, indicating a deeper interest in the provided references.

  • User-Centric Design: Continuous user testing and stakeholder feedback were crucial in iterating a design that met both functional and aesthetic requirements.

  • Flexibility: The ability to iterate through multiple design versions ensured that the final product was both practical and goal-oriented.

  • Transparency as a trust builder: Clearly displaying sources not only enhances the credibility of AI responses but also empowers users to verify and further explore the content.


Next steps

  • Post-Launch monitoring: Analyze user interactions with the source cards to identify any areas for further optimization.

  • Iterative improvements: Use user feedback and analytics to refine the design continuously.

  • Broader rollout: Explore expanding the sources model to other AI-powered features and agents.

Email me at himuxworks@gmail.com

Himanshu Rathor 2025 ✍️
