The Adoption Challenge with AI-Powered Personal Assistants

Introduction

AI-powered personal assistants are becoming more common in professional settings, but their adoption comes with concerns about data privacy and security. This was especially true for Lighty.AI, a tool designed to integrate seamlessly with Slack, email, and calendar apps.

In this case study, I’ll share how we tackled these challenges by focusing on transparency and trust, ensuring that users understood how the AI worked and felt confident using it in their daily workflows.

The Challenge: Building Trust in AI

The AI Adoption Dilemma

When introducing Lighty.AI, we faced a key challenge: building user trust. Many professionals were hesitant to connect a tool that accessed sensitive work data. There were also questions about whether company policies allowed such integrations.

We needed to create an experience that reassured users about what data was accessed, how it was used, and why it benefited them.

Key Barriers to Adoption

  • Data Privacy: Users were concerned about sharing personal scheduling and communication data.

  • Control Issues: There was a fear that AI might make decisions without proper context.

  • Transparency Gaps: Users wanted clear explanations about how AI scheduling worked.

  • Security Uncertainties: Many weren’t sure whether using AI assistants aligned with workplace policies.

Approach: Transparency Through Design

We focused on meeting users where they already worked—within tools like Slack and Google Calendar—while ensuring they understood the AI’s role in their workflows.

1. Storyboarding & Explainer Videos

To address transparency concerns, I developed storyboards and scripts for short explainer videos that showed:

  • How to use the product

  • How to install it

  • The impact on team productivity

By contextualizing engineering demos within real user scenarios, we demonstrated how Lighty.AI could enhance efficiency without compromising privacy.

Starting with loose sketches, we began blocking out the story from the script into visual storyboards.

With every review, we added more refinement to key screens and styling.

Then, once a direction was selected, we animated. Using After Effects, I created a composition, brought in the scenes, voice-over, and music, and blocked out the timing.

2. Contextual Feature & Visual Explainers

Instead of overwhelming users with technical jargon, we integrated education into the product experience:

  • Short, one-minute explainer videos focusing on key features.

  • Scenario-based storytelling to make AI interactions relatable.

  • Step-by-step guidance on installation and data usage.

Again, we started with a script, then created storyboards to match and blocked out the timeline.

We collected feedback from the team on the timing, blocking, and visual representation of the story.

We went through several iterations on how best to illustrate the data “ecosystem.”

We explored ways to show data flowing through the system and Lighty.AI “thinking” as a personal agent.

These efforts helped users see, before committing to connecting their accounts, how their data would be used: how data is sampled and how agents learn, process, and think.

The Outcome: Growth Through Trust

Once we released our trust-focused marketing campaign, the results were clear and measurable:

  • Rapid growth in user adoption.

  • Higher website engagement after watching the videos.

  • Increased conversions—users who viewed the demos were more likely to sign up.

One of the biggest achievements was SOC 2 Type 2 compliance, a milestone that validated our commitment to security and privacy—not just as a technical requirement, but as a user-centered design principle.

Conclusion

By prioritizing transparency, education, and real-world context, we helped Lighty.AI users feel in control of their data while benefiting from AI-driven efficiency. Rather than seeing AI as a risk, users saw it as an asset—one that worked with their tools, on their terms.
