AI CASE STUDY

I built a Science-Based AI platform and this is what I learned

TL;DR

At the heart of designing a science-based AI platform is understanding and integrating human experiences. An effective AI platform must be intuitive, trustworthy, and empathetic towards its users. From challenges like acquiring data to crafting an intuitive interface to building trust in AI, the answer will always come from understanding the user. The journey of building an intelligent, empathetic, and relatable platform combines user-centric design with predictive AI technology. What follows is my account of improving user experiences through human-centered design.

Key Takeaways

  • Striking a balance between advanced technology and user-friendliness is crucial for success.

  • Integrate strategic tools to streamline data acquisition.

  • Designing an intuitive UI is as much an art as it is a science. It's about understanding the user.

  • Trust in AI isn't given; it's earned. Transparent processes and clear communication are key.

  • Feedback from users is gold. It guides the evolution of the platform more than anything else.

  • An AI platform must be more than efficient; it needs a touch of empathy to truly connect with users.

A Noble Pursuit

Imagine stepping into a world where the complex becomes simple, the cutting-edge meets user-centric design, and every click unveils a universe of scientific discovery. This isn't a glimpse into the future. It's the reality we're crafting in the realm of AI-driven platforms.

For the last few years, I've led the design team at Noble.ai with the mission of creating a science-based AI platform. It's been a real journey, full of tough challenges but also new discoveries. My work has sparked an ambition to make platforms user-friendly in a world of complex technology. Although I can't get into the specifics of this groundbreaking project due to intellectual property restrictions, I can share the wisdom it's brought me.

What I'm about to tell you isn't just a list of design trends for AI platforms; it's a guide for anyone on a similar path who wants to bring a human-centered approach to a rapidly expanding space. My experience is an opportunity for you to learn these concepts and apply them to your own work. Whether you're an experienced designer, someone who's just starting out, or simply curious, there's something here for you.

As we go through these insights, remember that everyone benefits from good design because no one is immune to it.

We'll cover data acquisition, building a user-friendly interface, the tricky bits of showcasing data, and the importance of building trust with your users.

Remember, good design is about more than looking nice or working well. It's about making an impact in the lives of the users who interact with it.

Let's begin from the top.

Part One: Data Acquisition

Getting the right data is the first big challenge. Even though we're in a digital age, many well-funded R&D departments are still stuck with old-school methods like recording lab work on paper. This makes collecting data harder and creates a headache when it's time to wrangle it. Even if your data is already in digital format, you may still face challenges. Turning messy, unorganized data into something AI can use is tough. Raw data often arrives in inconsistent formats, making ingestion a nightmare.

That's where user-friendly data management tools come in. Imagine a platform that makes this whole process easier. Great tools help users sort through and clean up their data, then map it to your training template. For example, some tools handle CSV files automatically, so individuals don't have to sift through the data by hand. This can be a game-changer. These tools help clean up, organize, and transform data quickly and easily.
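
To make this concrete, here's a minimal sketch of the header-mapping idea, assuming a hypothetical set of canonical template fields. A production version would use a real CSV parser that handles quoting and escapes.

```typescript
// Map messy header variants onto the canonical fields that a
// hypothetical training template expects.
const HEADER_ALIASES: Record<string, string> = {
  "temp": "temperature_c",
  "temp_c": "temperature_c",
  "Temperature (C)": "temperature_c",
  "conc": "concentration_mol",
  "Concentration": "concentration_mol",
};

function normalizeHeader(raw: string): string {
  const trimmed = raw.trim();
  // Fall back to a snake_case version of the original header.
  return HEADER_ALIASES[trimmed] ?? trimmed.toLowerCase().replace(/\s+/g, "_");
}

// Turn raw CSV text into template-shaped records.
// Note: naive split on commas; use a real CSV parser in production.
function csvToRecords(csv: string): Record<string, string>[] {
  const [headerLine, ...rows] = csv.trim().split("\n");
  const headers = headerLine.split(",").map(normalizeHeader);
  return rows.map((row) => {
    const cells = row.split(",");
    return Object.fromEntries(
      headers.map((h, i) => [h, cells[i]?.trim() ?? ""]),
    );
  });
}
```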

Another issue to consider when managing data is security. Your clients are deeply concerned with keeping their data safe. After all, you're handling their proprietary formulas. Integrating with the right tools can resolve these concerns; look for partners that offer security features like encrypted data transfers, self-hosting options, and auto-deletion protocols.
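
As one illustration, here's a sketch of what an auto-deletion policy check might look like, assuming hypothetical dataset records with a per-client retention window. The actual secure-deletion mechanics would live in the storage layer.

```typescript
// Hypothetical dataset record with an upload timestamp and a
// retention window negotiated per client.
interface Dataset {
  id: string;
  uploadedAt: Date;
  retentionDays: number;
}

function isExpired(dataset: Dataset, now: Date = new Date()): boolean {
  const ageMs = now.getTime() - dataset.uploadedAt.getTime();
  return ageMs > dataset.retentionDays * 24 * 60 * 60 * 1000;
}

// A scheduled job would filter expired datasets and hand them to
// whatever secure-deletion routine the storage layer provides.
function datasetsToDelete(all: Dataset[]): Dataset[] {
  return all.filter((d) => isExpired(d));
}
```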

A number of companies are leading in this space with exactly these kinds of offerings.

By integrating with these tools, you can make preparing data less of a hassle and give users more power and security. They can better use their data, turning a complex task into something manageable, maybe even fun. It's a win-win for everyone. In fact, your AI/ML teams will be so thrilled that they might just declare a new holiday in your honor.

Part Two: Designing an Intuitive UI

Creating an AI platform for scientists is tricky, especially when they haven't had the best tools to work with. Many of these bright minds are still tethered to outdated tools, not for want of change, but due to a scarcity of better options. So, there's a real need to design a UI that's not only technologically advanced but also clicks with users on a personal level.

The primary mission is to make this platform accessible to everyone without all the technical jargon and complexity that usually comes with scientific tools. It's all about reimagining user interaction, so running experiments is as straightforward and engaging as possible, without unnecessary technical hurdles.

Think about how smartphones work. They pack all sorts of complicated tech but are super easy to use. In the same way, an AI platform for scientists should make complicated tasks simple without sacrificing any of their power. We're aiming for a tool where doing a scientific experiment is as easy and engaging as snapping a photo on your phone. It should feel natural and accessible even for those without technical knowledge or a science degree. Authoring tasks should be straightforward, swift, and engaging.

Consider the following points:

Enhancing User Efficiency

  • Workbench Approach: Treat the platform as a workbench for users. Everything users need should be right at their fingertips so they can work smoothly and efficiently. 

Content Discovery

  • Breadcrumbs: Utilize breadcrumbs to communicate taxonomy clearly, like leaving a trail so users always know where they are in nested trees (see the sketch after this list).

  • Search Functionality: Implement a global and dynamic search feature for quick access to information.

  • Bookmarking: Enable bookmarking for easy return to frequently used areas.
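
To illustrate the breadcrumb idea, here's a minimal sketch that derives a trail from a nested path. The segment names and route shape are hypothetical; a real app would pull labels from entity names rather than URL slugs.

```typescript
interface Crumb {
  label: string;
  href: string;
}

// Build a trail from a nested path like
// "projects/battery-sim/experiments/run-42".
function buildBreadcrumbs(path: string): Crumb[] {
  const segments = path.split("/").filter(Boolean);
  return segments.map((segment, i) => ({
    // A real app would use entity names here, not raw URL slugs.
    label: segment.replace(/-/g, " "),
    href: "/" + segments.slice(0, i + 1).join("/"),
  }));
}

// buildBreadcrumbs("projects/battery-sim/experiments/run-42")
// => projects / battery sim / experiments / run 42
```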

Task Management

  • Steppers: Use steppers in tasks to communicate progress and scope, showing users how far they've come and what's left (see the sketch after this list).

  • Templates: Offer editable templates for repetitive tasks, allowing users to tweak minor details and save time.

  • Confirmation Pages: Incorporate confirmation pages to review tasks before final submission, reducing errors.

  • Recommendations: If possible, consider integrating a recommendation system that anticipates user needs; predicting what users might need next can really streamline their workflow.
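
Here's a minimal sketch of how the stepper state mentioned above might be modeled. The step names are hypothetical; the idea is that progress and scope fall out of an ordered list plus a cursor.

```typescript
// A task is an ordered list of steps plus a cursor, so progress
// and scope are always visible to the user.
interface Stepper {
  steps: string[];
  current: number; // index into steps
}

function progressLabel(s: Stepper): string {
  return `Step ${s.current + 1} of ${s.steps.length}: ${s.steps[s.current]}`;
}

function advance(s: Stepper): Stepper {
  // Clamp at the last step; the confirmation page lives there.
  return { ...s, current: Math.min(s.current + 1, s.steps.length - 1) };
}

// const setup: Stepper = {
//   steps: ["Select data", "Configure model", "Review", "Submit"],
//   current: 0,
// };
// progressLabel(setup) // "Step 1 of 4: Select data"
```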

Progress Tracking

  • Toasts and Modals: Inform users about background processes with toasts and modals, little pop-ups that surface what's happening behind the scenes (see the sketch after this list).

  • Success and Failure Messages: Provide clear feedback on the outcomes of tasks, including reasons for failures.

  • In-App Support: Integrations like Intercom can help users get assistance right when they need it, while giving you analytical insight into where users hit friction and drop off.
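
As a rough sketch of the toast idea: background processes push messages onto a queue, and the UI drains it. The message text is hypothetical; the point is that failures carry reasons, not just outcomes.

```typescript
type ToastKind = "info" | "success" | "failure";

interface Toast {
  kind: ToastKind;
  message: string; // say why a task failed, not just that it failed
}

const queue: Toast[] = [];

// Background jobs call notify(); the UI pops toasts off the queue
// and displays each one for a few seconds.
function notify(kind: ToastKind, message: string): void {
  queue.push({ kind, message });
}

// notify("success", "Simulation run-42 finished.");
// notify("failure", "Ingestion failed: column 'temp' missing in rows 3-7.");
```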

By incorporating some of these design components, you can make a platform that's not just easy to use but also really fits the needs of your users to a T.

Part Three: Effective Data Visualization and Results Management

When it comes to checking out results, it's really important not to tell users what counts as success. I've learned this from my own work. A lot of platforms try to set the rules for how users should see their wins, but it's better to let them decide for themselves. Picture handing users the controls of a spaceship cockpit and letting them set things up however they want.

A robust AI platform should present a spectrum of visualization tools to suit different user preferences. Some may seek a comprehensive bird's-eye view for quick insights, while others might delve into minute details for in-depth analysis. Achieving this balance requires a dashboard that's as adaptable as it is intuitive. Imagine a widget-style interface that lets users tailor their visualizations, making them as intricate or straightforward as their tasks demand.
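
Here's a rough sketch of what a widget-style dashboard config could look like. The widget types, settings, and owner name are hypothetical; the point is that layout and detail level live in user-editable data rather than hard-coded screens.

```typescript
type WidgetType = "summary-card" | "scatter-plot" | "results-table";

interface Widget {
  id: string;
  type: WidgetType;
  // Grid position and size, so users can arrange their own cockpit.
  x: number;
  y: number;
  w: number;
  h: number;
  settings: Record<string, unknown>;
}

interface Dashboard {
  owner: string;
  widgets: Widget[];
}

// One user's bird's-eye view; another user might fill the same
// grid with detailed tables instead.
const quickGlance: Dashboard = {
  owner: "dr-chen",
  widgets: [
    { id: "w1", type: "summary-card", x: 0, y: 0, w: 4, h: 2, settings: { metric: "yield" } },
    { id: "w2", type: "scatter-plot", x: 4, y: 0, w: 8, h: 4, settings: { xAxis: "temperature_c", yAxis: "yield" } },
  ],
};
```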

Now, about data visualization... this part's crucial. Think of data visualization as the star of the show in science-based AI platforms. It's not just a nice-to-have. It's what makes or breaks the user experience. Users should be able to choose from different styles of graphs that fit their data and needs best. Trying to make everyone use the same kind of graph can be a mistake, both costly and time-consuming, as I've found out.

Edward Tufte once said, "Good design is clear thinking made visible. Bad design is like stupidity made visible." This underscores the significance of customizing how data is presented to suit the user's requirements. The platform should showcase data in an exceptionally lucid and user-friendly manner, promoting clarity and providing value to the user.

Also, the ability to export data for use on other platforms is a big plus. This respects how different people work and lets them fit the AI platform into their daily workflow. But the end game? To make a platform so complete and user-friendly that users don't even think about going elsewhere for extra data work or fancy visuals.

Part Four: Building User Trust in High-Risk Scenarios

The big challenge for AI is convincing people that it's trustworthy. It works so fast that people distrust its thoroughness. In high tech, and especially in AI, perceived effort shapes how much people trust a product. Robert Cialdini explores this phenomenon in his book "Influence: The Psychology of Persuasion". He notes that if something seems extremely valuable or works very fast, people can become suspicious of its credibility. They assume that something that takes more time must be better, the way fast food is seen as lower quality because it's quick.

This is especially true in high-risk, high-reward situations. Take virus scanners as an example. They're high-risk because missing a virus can cause tremendous problems, but high-reward because catching one keeps user data and computers safe. Traditional virus scanners take ages to go through files, which makes people think they're really thorough. But what if an AI scanner could check all your files in just 30 seconds instead of an hour? People might doubt it, wondering how it can be any good if it's that quick.

But there are smart ways to handle this. Look at TurboTax. It deals with the risky business of tax filing by breaking it down into easy steps. This is another high-risk, high-reward scenario, where messing up can be the difference between saving money and getting a visit from the men in black. TurboTax keeps users in the loop, showing them what's happening and pointing out any mistakes.

Want to know my favorite part?

The fake 'sending bar' when you e-file. This bar, which indicates data being sent to the IRS, is actually fictitious. Tax returns are sent instantaneously, but a sudden transmission might not instill trust. So, they added a fake loading animation to make users feel like everything's going smoothly. This is a brilliant design choice that prioritizes user needs and trust.

Speed isn't always better, especially if it undermines user trust. When running tasks on your platform, it's okay to slow down the process and reassure users through components like loaders, real-time activity monitors, and action reports. These elements are necessary for building and maintaining user trust.
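
Here's a minimal sketch of that "slow down on purpose" pattern: even when the real work finishes instantly, the loader stays on screen for a minimum duration. The runSimulation call is hypothetical.

```typescript
function delay(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Wait for both the real task and a minimum display time, so the
// result never appears suspiciously abruptly.
async function runWithMinimumDuration<T>(
  task: Promise<T>,
  minMs: number,
): Promise<T> {
  const [result] = await Promise.all([task, delay(minMs)]);
  return result;
}

// Usage: the prediction may return in 50 ms, but the loader stays
// up for at least 1.5 s while a progress animation plays.
// const prediction = await runWithMinimumDuration(runSimulation(), 1500);
```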

Part Five: The Vital Role of User Feedback in AI Platform Development

The true power of AI lies in its remarkable capacity for learning and adapting. At the root of this adaptability is user feedback. It is a critical component that allows models to rectify mistakes and align more closely with what users really want and need.

Thinking of feedback mechanisms as a courtesy is a common product design mistake. Collecting feedback on an AI platform is an integral part of maintaining the platform's effectiveness and relevance. This feedback loop creates a level of customization and personalization that you just can't get with standard models. As users engage with the AI, their feedback plays a key role in refining the algorithms, making sure that the results the AI provides get sharper and more relevant as time goes on.

As users begin the journey of verifying their simulated results, a robust platform's duty extends to collecting their feedback on the quality of these results. Questions like "Were the results accurate?" or "Why or why not?" become pivotal in this feedback loop.

No matter what your AI platform does, it needs to keep learning and getting better. The leading companies in AI technology understand this well. They ensure that every output is paired with a mechanism for users to evaluate its quality, enabling ongoing recalibration of the model. For example, in content generation models, user feedback on tone, style, and relevance helps refine the natural language processing algorithms. In predictive models used in finance, healthcare, or, in this case, scientific discovery, feedback on prediction accuracy and utility guides developers in enhancing the model's predictive capabilities. Feedback collection comes in many forms, but the main thing is that it should be easy for users to tell you what they think about their experience.

Look at how other platforms do it. Yelp has its star ratings, Rotten Tomatoes has its "Tomatometer," and then there's the famous 'like' button on Meta's platforms. These are simple but effective ways to find out what users think, and they double as signals for increasing the accuracy of the underlying algorithms. They benefit the companies and let users play a part in shaping the platform.
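
As a sketch of what pairing every output with a rating might look like, assuming a hypothetical /api/feedback endpoint; the record shape matters more than the transport.

```typescript
interface OutputFeedback {
  outputId: string;      // which prediction or result is being rated
  rating: "up" | "down"; // the one-click signal
  comment?: string;      // optional detail for high-stakes cases
  createdAt: string;
}

async function submitFeedback(feedback: OutputFeedback): Promise<void> {
  // POST to a hypothetical feedback endpoint; these records feed
  // the model recalibration pipeline downstream.
  await fetch("/api/feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(feedback),
  });
}

// submitFeedback({
//   outputId: "run-42",
//   rating: "down",
//   comment: "Predicted yield was off by ~15%.",
//   createdAt: new Date().toISOString(),
// });
```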

In high-risk, high-reward scenarios, where there's a lot to gain or lose, it's good to have a way for users to give more detailed feedback. This lets users articulate their experiences and concerns comprehensively, providing you and your team with richer data for improving your model. The key is to strike a balance, making the feedback process detailed enough to be informative yet simple enough to encourage widespread participation.

Remember the earlier emphasis on trust?

By taking user feedback seriously and using it to make changes, we don't just improve the platform; we also build stronger trust and commitment from our users. When users see that you're actually listening to what they have to say and making changes based on it, they start to feel like part of the process. They trust the platform more because they see it responding to their needs. This is super important in AI, especially with generative models, where the output really matters to how users do their work and make decisions.

An AI model that changes and gets better with user input isn't just seen as another tool; it becomes more like a teammate. Users start to view the model as something they're working with, not just something they're using. That feeling of partnership is a big deal for getting users to really engage with and trust the AI.

The challenge, however, lies in encouraging consistent and constructive feedback. This requires designing interfaces and feedback mechanisms that are intuitive and accessible. Whether it's through simple rating systems, detailed surveys, or interactive forums, the goal is to make giving feedback as seamless as using the AI model itself. By embracing and prioritizing user feedback, we ensure that generative AI models stay dynamic, user-centered, and constantly evolving.

Conclusion: Embracing the Complexity with a Human-Centered Approach

To wrap up this chat: my journey over the past few years has shaped a story that's not just about product design but about putting people at the center of AI solutions.

From handling messy data to making an interface that really speaks to scientists, every step is about understanding and innovating with people in mind. It's a reminder that behind every tech breakthrough, there's a human side that needs understanding, respect, and trust.

Working with AI is an opportunity to connect technology with its users. The real challenge is to create something that's not only efficient but also builds an experience that users can trust and feel comfortable with, even with all the complicated stuff hidden under the hood. As people who create and build these things, our job goes beyond just programming; it's about building trust, intuition, and empathy into our work.

So, as you start your own projects in AI, keep these ideas in mind. Let them guide you as you mix advanced tech with the human touch. Remember, the aim isn't just to make a platform that does tasks but to create a partner that works with users, understands them, and boosts their abilities. Every move is in tune with what users need and want. After all, the most advanced technology is only as great as the human experiences it enriches.

DESIGN ADVICE #5

The key to AI adoption is not just in its intelligence, but in its ability to build trust.