February 2026: Agents Execute, Markets Panic, Humans Win
OpenClaw’s explosive rise, the SaaSpocalypse sell-off, Figma’s bet on taste, and new scientific proof that top human creativity still outperforms AI
Hello fellow designers, creatives, digital professionals, and Eidos Design members!
Mykola here. February was wild.
Today’s stories aren’t connected just by hype or panic. They’re connected by something quieter: the people who followed genuine curiosity, who built what excited them rather than what the market demanded, were the ones who actually moved things forward.
This issue is sponsored by UPDF app, which makes editing PDFs simple and intuitive.
UPDF lets you modify text, images, and layouts in seconds. Convert, compress, and organize files effortlessly to streamline your workflow. Annotate, highlight, and add comments for seamless collaboration. Sign documents securely and manage everything in one powerful platform.
One license can be used on four devices, with a 30-day money-back guarantee.
OpenClaw: the weekend project that broke GitHub

Peter Steinberger had a simple itch. He wanted his phone to do things, not just answer questions. So he spent a weekend wiring a WhatsApp relay to Claude and published it as “Clawdbot” in November 2025. Anthropic’s legal team made him rename it (the “Clawd” pun was too close to Claude), so it became “Moltbot,” then “OpenClaw.”
What happened next is genuinely unprecedented. OpenClaw hit 100,000 GitHub stars on January 29, a milestone that took React roughly four years and TensorFlow about three. By mid-February, it had crossed 200,000 stars with over 40,000 forks, making it one of the fastest-growing open-source projects in GitHub’s history.
What makes OpenClaw different from chatbots is one word: execution. It connects to any LLM, communicates through messaging apps people already use (WhatsApp, Telegram, Slack, Discord, Signal, iMessage), and actually does things. It manages email, browses the web, writes code, controls smart home devices, and interfaces with fifty-plus integrations, including GitHub, Notion, Spotify, and Trello. Its most impressive trick: it writes its own plugins on demand.
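To make “writes its own plugins” concrete, here’s a minimal sketch of what a skill can look like. This is illustrative TypeScript, not OpenClaw’s actual plugin API; the Skill shape and its fields are hypothetical, though the Spotify call is the real Web API endpoint.

```ts
// Illustrative only: the Skill interface and its fields are hypothetical,
// not OpenClaw's real plugin contract.
interface Skill {
  name: string;
  description: string; // what the model reads to decide when to invoke the skill
  run: (input: string) => Promise<string>;
}

const spotifyPause: Skill = {
  name: "spotify_pause",
  description: "Pause whatever is currently playing on Spotify.",
  run: async () => {
    // Real Spotify Web API endpoint; the token comes from the user's OAuth grant.
    await fetch("https://api.spotify.com/v1/me/player/pause", {
      method: "PUT",
      headers: { Authorization: `Bearer ${process.env.SPOTIFY_TOKEN}` },
    });
    return "Paused playback.";
  },
};
```

Note what this shape implies: a skill is arbitrary code running with your credentials. That’s the power, and it’s also the entire security problem the next paragraph is about.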
Here’s where it gets complicated. This is simultaneously the most loved and most dangerous open-source project of the year (and the year just began). A single threat actor uploaded over 300 malicious skill packages to ClawHub in 72 hours, completely undetected. The top-ranked skill, “What Would Elon Do,” turned out to be malware. Security researchers identified hundreds of vulnerabilities across skills and infrastructure in early audits. Belgium’s national cybersecurity center issued an emergency advisory. Gary Marcus called using OpenClaw “giving a stranger at a bar all your passwords.”
And yet, on February 14 (Valentine’s Day, of all days), Steinberger announced he was joining OpenAI to “bring agents to everyone.” Five days later, he appeared on Lex Fridman’s podcast, where he admitted: “I ship code I don’t read.”
From the first commit to OpenAI hire in roughly ninety days. Not because he had a business plan, but because he was curious and passionate.
OpenClaw is the shift from AI that talks to AI that acts. Users already deploy it for content pipelines, UI generation, and building entire websites from their phones. Separately, Matt Schlicht launched Moltbook, a social network exclusively for AI agents, and those agents spontaneously created their own governance structures and sacred texts. We’re past prompts. We’re in autonomous execution territory. And the UX for it, the trust patterns, the permission models, the audit trails, barely exist. That’s where designers come in.
The SaaSpocalypse: when agents threatened the subscription model
On February 3 and 4, approximately $285 billion in market value vanished from software stocks. The trigger wasn’t a recession or a regulatory shock.
It was a product release.
Anthropic’s Claude Cowork plugins hit GitHub on January 30, showing AI agents independently performing tasks across CRM, legal, analytics, and support, navigating software interfaces without human operators clicking through menus. Wall Street looked at this and asked a simple question: if agents can use the software themselves, why are companies paying per-seat licenses for humans to use it?
The sell-off cascaded hard. Over the following weeks, roughly $2 trillion in total software market value disappeared. Median SaaS revenue multiples compressed from roughly 7x to below 5x, levels not seen since the mid-2010s. JPMorgan analysts argued software was being “sentenced before trial,” pricing in full agentic disruption before any earnings data supported it. One widely shared counter-essay put it more boldly: Wall Street “doesn’t understand how enterprise software actually works.”
Design tool companies took a beating. Adobe fell to a multi-year low, its stock down 44 percent from its 52-week high. Figma crashed to roughly $24, down over 80 percent from its post-IPO high of $142.92. Atlassian dropped 35 percent. Thomson Reuters plunged 16 percent.
The term “SaaSpocalypse” stuck because it captured a structural fear, not a seasonal dip. If agents become the primary users of software, the entire business model shifts from “useful tool for humans” to “commodity infrastructure for AI.” One market analyst framed it sharply: the interface is no longer the value proposition; the outcome is. That changes pricing. That changes who the customer is. And that changes who designs the interfaces.
I want to be careful not to overdramatize this. Stock prices aren’t validation; we covered that with Figma’s correction in January. Figma just reported roughly $304 million in Q4 revenue, up 40 percent year over year, passing $1 billion in annual revenue for the first time. The product is growing, and the market is repricing what “growing” means when agents enter the picture.
If agents become significant users of our tools, we need to design for two audiences at once: humans who need intuitive interfaces and AI agents who need structured, predictable, API-friendly surfaces. That’s a genuinely new design problem, and I haven’t seen many teams working on it deliberately (yet!).
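Here’s a minimal sketch of what that could look like: one underlying action, exposed once as a human UI handler and once as a structured, self-describing tool an agent can discover. The names and the JSON-Schema-style tool description are my own illustration, not any shipping product’s API.

```ts
// One action, two surfaces. All names and schemas here are illustrative.
async function exportFrame(frameId: string, format: "png" | "svg"): Promise<Uint8Array> {
  // Shared export logic would live here.
  return new Uint8Array();
}

// Human surface: a button wired to the same function.
document.querySelector("#export")?.addEventListener("click", () => {
  void exportFrame("frame-123", "png");
});

// Agent surface: a machine-readable description of the same capability.
const exportFrameTool = {
  name: "export_frame",
  description: "Export a single frame as PNG or SVG and return the bytes.",
  parameters: {
    type: "object",
    properties: {
      frameId: { type: "string", description: "ID of the frame to export" },
      format: { type: "string", enum: ["png", "svg"] },
    },
    required: ["frameId", "format"],
  },
};
```

The human gets affordances; the agent gets a contract. Designing both from the same underlying action is the genuinely new part.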
Science confirms: the creative half of humanity still wins

The largest comparative study of machine versus human creativity landed in January, published in Scientific Reports (Nature Portfolio) and led by Université de Montréal researchers, including AI pioneer Yoshua Bengio. They tested 100,000 human participants against GPT-4, Claude, and Gemini on standardized creativity tasks.
The headline: GPT-4 now exceeds average human creative performance.
The finding that actually matters: the most creative half of humans outperformed every AI model. The top 10 percent widened the lead further. And AI showed telltale repetitive patterns that no creative director would accept. In one task, GPT-4’s most common word appeared in 70 percent of its responses, while humans’ most common word appeared in just 1.4 percent.
An important caveat: the study measured verbal and conceptual creativity (divergent thinking tasks, poetry, short stories), not visual design directly. And AI performance shifted significantly depending on temperature settings, making model “creativity” tunable but also brittle compared to a consistent human style. But the core finding maps precisely to what I see in AI-generated design work: AI clusters around the mean. Competent, predictable, adequate. The kind of work that fills a brief without surprising anyone. The kind of work, frankly, that a lot of production design already looks like.
I’ve been arguing for months that taste and judgment are becoming the real differentiators for designers. This study is the first rigorous scientific evidence for that claim at scale.
The practical implications are sharp. If you’re a designer whose primary value is executing predictable visual output (layout variations, icon sets, marketing banners at scale), that value is genuinely at risk. But if your value comes from the unexpected combination, the “I wouldn’t have thought of that” moment, the rejection of an obvious solution in favor of a better one, that remains distinctly human. And the gap is widening, not closing.
This reframes how we should use AI. Not as a replacement. Not even as co-creator, exactly. More like a first-draft machine that generates the initial 30 options so you can apply the judgment and taste that science says AI still lacks. The interesting work isn’t in the generation, but in the curation.
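As a concrete sketch of that workflow: fan out N high-temperature requests, collect the candidates, and leave the choosing to a person. The endpoint and model name below are placeholders for whichever chat-completions-style provider you actually use.

```ts
// Generate N candidate options at high temperature, then hand them to a human.
// ENDPOINT and MODEL are placeholders, not a real provider's values.
const ENDPOINT = "https://api.example.com/v1/chat/completions"; // placeholder
const MODEL = "some-model"; // placeholder

async function generateOptions(brief: string, n = 30): Promise<string[]> {
  const requests = Array.from({ length: n }, async () => {
    const res = await fetch(ENDPOINT, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.API_KEY}`,
      },
      body: JSON.stringify({
        model: MODEL,
        temperature: 1.0, // high temperature = more spread, per the study's tunability point
        messages: [{ role: "user", content: `Propose one headline for: ${brief}` }],
      }),
    });
    const data = await res.json();
    return data.choices[0].message.content as string;
  });
  return Promise.all(requests);
}

// The machine's job ends here. The curation, the judgment call, stays human.
```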
Figma bets on taste as the scarce resource

Figma’s stock is down over 80 percent from its post-IPO high. And CEO Dylan Field responded by doubling down on taste.
On February 17, Figma and Anthropic launched “Claude Code to Figma,” a feature that captures web pages, states, and flows built in Claude Code and brings them directly onto the Figma canvas as fully editable frames. Figma Make now lets users switch between Claude Sonnet 4.6 and Opus 4.6 directly from the prompt box. New Make connectors were added for Amplitude, Box, Dovetail, Granola, Marvin, and zeroheight, plus support for connecting to any remote MCP server.
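If you’re wondering what sits behind a connector like that, here’s a minimal MCP server using the official TypeScript SDK. The “design-tokens” server and its data are hypothetical; the SDK calls are real, but the SDK is evolving quickly, so treat this as a sketch and check the current docs.

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical data source: a design-token lookup.
const tokens: Record<string, string> = { "color.primary": "#0B5FFF" };

const server = new McpServer({ name: "design-tokens", version: "0.1.0" });

// Expose one tool the client (Figma Make, Claude, etc.) can discover and call.
server.tool(
  "get_token",
  { name: z.string().describe("Token name, e.g. color.primary") },
  async ({ name }) => ({
    content: [{ type: "text" as const, text: tokens[name] ?? "unknown token" }],
  })
);

// stdio transport for local use; remote servers use an HTTP-based transport instead.
await server.connect(new StdioServerTransport());
```

The point for designers: a connector is just a declared contract like this one. What the agent can see and do is exactly what you choose to expose.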
Earlier in the month, Figma launched Vectorize, an AI tool converting static images, hand-drawn sketches, PNG icons, and textures into editable vectors on the canvas. All AI features moved out of beta into general availability. Starting March 11, a new AI credits subscription launches, with seat credit limits enforced by March 18 and pay-as-you-go billing rolling out in Q2.
The thesis behind all of this was stated explicitly by Field: “In a world where anyone can generate a million lines of Python in seconds, taste, and the creative tools required to express it, becomes the scarcer utility.”
I find this interesting because Figma is making the same argument the Bengio creativity study makes, but from a product strategy angle. If code generation is cheap, and AI can scaffold interfaces from natural language, the bottleneck moves upstream. To the people who know what good looks like. Who can evaluate 50 AI-generated options and pick the right one. Who understand brand, context, and emotion in ways that language models don’t.
The Anthropic partnership is the clearest execution of that idea. Claude Code handles the implementation. Figma handles the taste. The designer sits at the intersection, directing both.
There’s tension here, too. Figma dominates collaborative UI design (it’s the default tool at most tech companies) but holds only a small fraction of the emerging AI design tool market, where Adobe Firefly leads. Monetizing AI credits will test whether designers will pay for capabilities they’re used to getting included with their subscriptions. The answer isn’t obvious.
Code builds. Canvas explores. That’s the new division. If you’re only building, you’re competing with agents. If you’re exploring, comparing, curating, and making judgment calls, you’re doing what the tools can’t. Figma is betting its future on that difference.
Handpicked Highlights
👉 The “Sloplympics” and the BBC counterpoint. The Milan Cortina 2026 Winter Olympics Opening Ceremony featured an AI-generated cartoon montage that viewers widely derided as AI slop. The official Olympics account then posted AI images in which the iconic rings overlapped incorrectly, violating the organization’s own 129-page brand guidelines. Meanwhile, the BBC’s title sequence by BBC Creative and NOMINT was praised for its real stop-motion animation. The cautionary tale of early 2026.
👉 Anthropic’s Super Bowl ads. Four darkly comedic spots during Super Bowl LX attacking OpenAI’s decision to put ads in ChatGPT. Tagline: “Ads are coming to AI. But not to Claude.” A branding lesson in differentiation through values.
👉 Seedance 2.0 stalls in a copyright storm. ByteDance’s AI video generator produces cinema-quality 2K video with synchronized dialogue in 8+ languages. The MPA coordinated a cease-and-desist campaign backed by Disney, Paramount, Warner Bros., Netflix (which called it a “rapid piracy machine”), and SAG-AFTRA. ByteDance postponed the global rollout. No court rulings yet, but the Hollywood coalition’s response signals a new, more aggressive phase of IP enforcement against generative AI.
👉 State of the Designer 2026. Figma surveyed 906 designers globally. 91 percent say AI improves their designs. Only 31 percent use it for core design work. “Design skills” became the number one most in-demand skill in AI-related job postings.
👉 Canva becomes the design brain for LLMs. Brand Kit integration now works inside ChatGPT, Claude, and Microsoft Copilot via Canva’s MCP Server. Over 12 million designs created through the integration. Meanwhile, the Affinity suite is now completely free, forever.
👉 Vibe coding reaches escape velocity. “Vibe coding” was named MIT Technology Review’s Breakthrough Technology of 2026. Cursor hit $500 million ARR and $10 billion valuation. Lovable reached $100 million ARR in 8 months. In Stack Overflow’s 2025 survey, 84 percent of developers are using or planning to use AI coding tools, with over half using them daily. The line between designer and developer keeps dissolving.
👉 Flora raises $42M and Runway raises $315M. Flora’s Series A (Redpoint Ventures) brings node-based AI design to users at Pentagram, Alibaba, and Lionsgate. Runway’s Series E (General Atlantic, NVIDIA, Adobe Ventures) pushes total funding to roughly $860 million.
👉 Apple is accelerating AI wearables. Bloomberg reports Apple is developing AI-powered smart glasses (competing with Meta Ray-Bans, targeting 2027), an AI pendant, and camera-equipped AirPods. AR/VR headset work reportedly paused. Entirely new design problems ahead.
👉 EU AI Act fully applicable August 2, 2026. All AI-generated content must be labeled and watermarked in a machine-readable format. Fines run up to 15 million euros or 3 percent of global turnover. Every designer working with generative AI should be reading the compliance requirements now, not in July. A sketch of what machine-readable labeling can look like follows this list.
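What “machine-readable” means in practice is still settling, but embedded provenance metadata is the leading candidate. Here’s a sketch using IPTC’s Digital Source Type vocabulary, which already has a term for generative-AI output; the writeXmp helper is hypothetical, a stand-in for whatever metadata writer your pipeline uses (exiftool, a C2PA toolkit, etc.).

```ts
// IPTC's Digital Source Type vocabulary includes a term for generative AI output.
// Embedding it as XMP metadata is one machine-readable labeling route.
const aiLabel: Record<string, string> = {
  "Iptc4xmpExt:DigitalSourceType":
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
  "dc:description": "Image generated with a text-to-image model.",
};

// Hypothetical writer: swap in exiftool, a C2PA SDK, or your asset pipeline's tool.
declare function writeXmp(file: string, fields: Record<string, string>): Promise<void>;

await writeXmp("hero-image.png", aiLabel);
```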
What February revealed
I keep thinking about Peter Steinberger shipping code he doesn’t read. About $285 billion disappearing over a product release. About 100,000 humans taking a creativity test against machines, and the creative half winning.
Design jobs are still being cut. The job market is stabilizing, but the average time to first offer is 68 days. These are real pressures.
The creativity study gave us something concrete to hold onto. Average work is vulnerable. Original judgment isn’t. The gap between human creativity and AI creativity isn’t closing at the top. It’s widening. The question for each of us isn’t “will AI replace me?” but “am I doing work that lives in the replaceable middle, or am I building the taste and judgment that science says machines can’t match?”
I don’t have a tidy answer. I’m still figuring this out myself. But I know which direction I’m heading, and I suspect you do too.
If this edition gave you something to think about, share it with someone who needs to read it. And subscribe if you haven’t already. March is going to be very interesting.
Sincerely,
Mykola Korzh