AI in UX Studio: How to Build a Custom GPT for Your Next UX Project
A Practical Guide to AI for UX Designers
I still remember the days when designing meant opening Photoshop or sketching wireframes on a whiteboard. Those early tools helped us shape ideas, but let’s be honest — the process was often clunky, time-consuming, and full of room for miscommunication.
Then came a wave of design tools that transformed the way we work — faster workflows, high-fidelity MVPs, and smart prototypes. Prototypes that once took days now came together in hours, drastically reducing confusion across teams.
And now, here we are — with AI bringing us a brand new toolkit. Tools that not only speed up our processes but also increase accuracy, giving us results we never imagined possible in such a short time. AI is quickly becoming a designer’s and researcher’s secret superpower — and product teams are falling in love with it.
So today, I want to share an intro to AI for designers, along with a quick tutorial on how to train your very own Custom GPT as a designer or researcher. This is the first edition of the AI in UX Studio series; future editions will bring more practical tutorials. So, let’s start with some basics, shall we?
What Are LLMs?
You’ve probably already experimented with ChatGPT or other large language models (LLMs). These AI systems can support your design and UX work in broad, powerful ways. But what are they, exactly?
Formally, large language models are advanced AI systems that understand and generate natural language — human-like text — based on the data they’ve been trained on through machine learning techniques. But let’s break that down into something simpler — after all, our job as designers is to simplify complex ideas.
Let me start with a very basic example that we’ve been seeing for years: your mobile phone keyboard.
Every time you type something using your on-screen keyboard, it suggests the next word. This way, you don’t need to type out every letter each time. You see? If you often write the same set of words, your keyboard picks up on the pattern. It learns and remembers.
Here’s a more real-world example: let’s say your friend’s name is John. Every time you’re texting or writing to others, you often say “John wants…” So the next time you type “John,” your keyboard offers “wants” as a suggestion. Why? Because that’s the most likely word you’ll type next, based on your past behavior.
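That keyboard behavior can be sketched in a few lines of Python. This is a toy bigram model, and the training sentence below is made up purely for illustration — real keyboards use far more sophisticated models, but the core idea of counting what tends to follow what is the same:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word tends to follow which in the training text."""
    counts = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def suggest(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# Hypothetical typing history: "wants" follows "john" more often than anything else
history = "john wants coffee john wants tea john needs sleep"
model = train_bigrams(history)
```

Calling `suggest(model, "John")` here returns `"wants"`, because that is the word most often typed after "John" in this (made-up) history — exactly the pattern-matching your keyboard does.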
That was a very simple example. Let’s expand it a bit.
In the case of LLMs, the input to the neural network is a sequence of words, and the output is simply the next word. It’s a classification task. The only difference is that instead of just a few options, the system has to choose from tens of thousands of words — often around 50,000. This is what language modeling is all about: learning to predict the next word.
Of course, the actual system is much more complex than this, but our goal here isn't to learn the intricacies of data science. We want to grasp enough to understand how to use it effectively.
And here’s where it gets even more exciting.
Transformers: The ‘T’ in GPT
This is where the Transformer comes in—no, not Optimus Prime :)
In 2017, researchers at Google Brain — a deep learning AI research team within Google — published a groundbreaking paper on arXiv titled “Attention Is All You Need.” In that paper, they introduced a new, simpler network architecture called the Transformer — which is what the “T” in GPT stands for. It was based solely on attention mechanisms, completely removing the need for recurrence or convolution layers.
Let’s simplify it once more with an example.
In the image shown below, the model’s input is the partial sentence “John wants his bank to cash the…” The Transformer model figures out that "wants" and "cash" are both verbs (though they can also be nouns). We’ve represented this extra context in red parentheses, but in reality, the model modifies internal word vectors in ways we can’t easily interpret.

The second Transformer layer adds more context. It determines that "bank" refers to a financial institution rather than a riverbank, and that "his" refers to "John."
This diagram is purely hypothetical, so don’t take every technical detail too seriously. What matters here is the concept: the model works on predictions. Every input helps it predict what comes next.
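The attention mechanism behind all of this can itself be sketched in a few lines of NumPy. This is a deliberately tiny illustration with random made-up vectors, not the real architecture: each word’s vector (its “query”) is compared against every other word’s vector (its “key”), and the resulting weights decide how much of each word’s information (its “value”) flows into the updated representation:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q @ K.T / sqrt(d)) @ V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # relevance of every word to every other word
    scores -= scores.max(axis=-1, keepdims=True)   # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # each row becomes a probability distribution
    return weights @ V, weights                    # blend value vectors by those weights

# Toy random vectors standing in for "John wants his bank to cash the"
rng = np.random.default_rng(42)
X = rng.normal(size=(7, 4))          # 7 words, each a 4-dimensional vector
out, w = attention(X, X, X)          # self-attention: queries, keys, values all come from X
```

Each row of `w` says how much each word “attends to” every other word — this is how the model can link “his” back to “John,” or disambiguate “bank.”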
So, it’s clear now that we need some form of training to help the model predict and respond better. Back to our earlier example — if one user always writes “wants” after “John,” the system will suggest it. But for another user with a different writing habit, the suggestion will vary.
It’s actually not difficult to gather large amounts of data for this kind of “next word prediction” task. There’s an abundance of text out there — on the internet, in books, research papers, and more. And we can build massive datasets from all of it.
This architecture is the foundation of GPT — Generative Pre-trained Transformer.
So What is GPT?
Well, language modeling is just the foundation — not the whole story. So what does GPT in ChatGPT stand for?
GPT stands for Generative Pre-trained Transformer. Let’s break that down:
LLMs like GPT-4 are trained on hundreds of billions of words. During training, they play a giant guessing game:
“Given all these words, what’s the most likely next word?”
Every time they guess wrong, they get feedback and adjust their internal parameters (called “weights”).

After millions (or trillions) of guesses, they become really good at identifying language patterns, logic, reasoning — even humor.
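That guessing game can be sketched as a toy training loop. The five-word vocabulary and single weight matrix below are made up for illustration and are nothing like a real LLM’s scale, but the mechanics — guess, get feedback, adjust the weights — are the same:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["john", "wants", "cash", "bank", "the"]
V = len(vocab)

# One tiny "layer": given a context word, score every word in the vocabulary.
W = rng.normal(scale=0.1, size=(V, V))   # the "weights" the model adjusts

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# One training example: after "john", the next word should be "wants".
context, target = vocab.index("john"), vocab.index("wants")

for _ in range(200):
    probs = softmax(W[context])   # guess: a probability for each possible next word
    grad = probs.copy()
    grad[target] -= 1.0           # feedback: how far off was each score?
    W[context] -= 0.5 * grad      # adjust the weights a little (a gradient step)
```

After enough rounds of this, the highest-scoring next word for “john” becomes “wants” — the model has learned the pattern from its mistakes.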
Once the base model is trained, it can be:
Fine-tuned on more specific data (e.g., legal documents, medical texts, or UX guidelines)
Customized with instructions and files (like when you build a Custom GPT)
This is where you step in. Without needing to be a developer, you can shape a GPT that understands your work, your voice, and your process.
This Is Where YOU Come In
You can steer the behavior of a GPT model without needing to retrain it from scratch — simply by providing the right instructions and reference materials.
But wait — do LLMs actually understand anything?
If you ask me, that’s a better question for neuroscientists or philosophers. But based on what we’ve covered so far, they don’t “understand” the way humans do — at least as far as we know today. They’re simply incredibly good at simulating understanding.
You might be thinking, “Okay cool tech… but how does this relate to my work as a UX designer/researcher?”
Well, understanding how these models work helps you:
Get better responses
Choose the right kind of prompts
Train a custom GPT more effectively
What Is a Custom GPT?
Now that you know how LLMs and GPTs work, you can probably guess what a Custom GPT is.
It’s a personalized version of ChatGPT — but tailored to your voice, needs, and projects. It includes:
Custom instructions (how it should speak, think, and behave)
Uploaded files (UX research, templates, docs, reports)
A specific tone and personality
Optional tools (like code execution or image generation)
It’s like creating your own design assistant or researcher — one that knows your process inside and out.
And no, you don’t need to code. You don’t need a data science degree. You just need your materials and a little guidance.
How Can a Custom GPT Help UX Designers?
Finally, we’re here. This is where it gets exciting for us.
Your trained Custom GPT becomes like your teammate — it has the materials, the knowledge, and just enough intelligence to assist you in meaningful ways.
Whether you're:
Running interviews
Analyzing usability testing
Synthesizing insights
Writing UX copy
Brainstorming new ideas
It can:
Generate interview scripts based on your product and audience
Analyze raw quotes and cluster them into themes (think: empathy mapping)
Summarize and tag pain points using UX heuristics
Role-play personas and stakeholders
Help you write better UX copy
And honestly, that’s just the beginning.
How to Build a Custom GPT
You might be thinking: “Okay, but can I really do this?”
Yes — absolutely.
Thanks to tools like ChatGPT, creating a custom AI for your specific project is now accessible.
Important note: As of now, OpenAI only allows this feature for ChatGPT Plus users. But in the future, I’ll explore other ways as well. Stay tuned!
Here’s how to create one in ChatGPT:
Open your ChatGPT account
Click on Explore GPTs
On the top right, click Create
Click on Configure tab
You’ll be asked to define:
Name (e.g., “UX Buddy” or “The Flow Doctor”)
Instructions (How should it think, speak, respond?)
Files to upload (Templates, research guides, reports, etc.)
Tools to enable (like code interpreter or image generation)
This is where you “train” your GPT — by telling it how to behave and feeding it your research materials:
Your design/research process
Interview guides
Usability testing docs
Journey mapping templates
Your company's design principles
The more specific and relevant your training materials, the smarter your GPT becomes.
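To make the Instructions field concrete, here’s a hypothetical example of what you might paste in for a UX research assistant. Every name and detail below is an illustration to adapt, not a required template:

```text
You are "UX Buddy", a research assistant for our design team.

Role:
- Help plan user interviews, synthesize findings, and draft UX copy.

Behavior:
- Ask which project we are working on before giving advice.
- Base answers on the uploaded research guides and templates first;
  say so explicitly when you fall back on general knowledge.
- When given raw interview quotes, cluster them into themes and
  label each theme with a short, descriptive name.

Tone:
- Friendly, concise, jargon-free — like a senior UX researcher
  explaining things to a teammate.
```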
After that, save it — and start chatting with a Custom GPT that knows your project inside out. It responds based on your materials, not just general web knowledge: smart, fast, and built around your process.
Coming Soon in AI in UX Studio
In future editions, I’ll be sharing more tips, resources, and real examples of how to fine-tune your Custom GPTs, avoid common pitfalls, and get the most out of your AI-powered design work.
Until then, don’t forget to share this post, subscribe, and follow me if you found it helpful 💡
Resources:
1. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention Is All You Need. arXiv. https://doi.org/10.48550/ARXIV.1706.03762
2. Stöffelbauer, A. (n.d.). How large language models work: From zero to ChatGPT. Medium, Data Science at Microsoft. https://medium.com/data-science-at-microsoft/how-large-language-models-work-91c362f5b78f
3. Lee, T. B., & Trott, S. (2023, July 27). Large language models, explained with a minimum of math and jargon. Understanding AI. https://www.understandingai.org/p/large-language-models-explained-with