Oji Uduzue, Former CPO at Typeform, gave this presentation at our Generative AI Summit in Austin in 2024.

I’ve spent the last two years building AI-native SaaS applications at Typeform, and I think the best way to kick this off is to take you through my experience.

My name is Oji Uduzue. I was born in Nigeria, but I've spent the last twenty-five years building products in the United States. I’ve worked for companies such as Typeform, Atlassian, Calendly, Microsoft, and Twitter (now X). At Twitter, I led all of the conversation products, so Tweets, DMs, everything you think about (minus the feed), with parts of my team.

I've built a couple of companies, some successfully, some unsuccessfully, but I spent an inordinate amount of time with startups, investing in them, mentoring them, coaching, etc. 

I’ve had lots of M&A experience integrating companies. At Twitter, I did many of those, building them into the team, and one of the latest things I'm doing is building AI ventures. I think there’s a big super cycle that's going to happen around AI and a great replacement.

Building ventures that will either be acquired by people with deep pockets or that escape velocity is going to be one of the things I want to spend time on. 

For the last few years, I've been in the C-suite, so I've done some marketing; I've been a marketing leader, product leader, design leader, and even done some inside sales, but mostly I'm a product person; that's how you should see me.

Introduction to Typeform and the evolution of AI in the industry

Typeform is a company that makes one of the most beautiful forms in the world. It's so beautiful and deeply brandable. You can do simple surveys on it, but you can do whole lead generation workflows on it, with scoring of each lead as it comes through. 

My former CEOs talk about zero-party data. The internet is not zero party. If you want to know your customers, if you want to research with them, and more, you need something like Typeform. 

You can get Google Forms, and Microsoft has Forms, but Typeform is the best. Typeform was started in 2012, and the core of the experience is that the creator builds a form with a no-code experience and then just sends the URL to the person from whom they want information, with zero-party data. Then they type it in, and it's this deterministic process.

The role of AI in Typeform's development

In 2022/2023, David Okuniev, the co-founder who actually started Typeform, was no longer CEO; he was in Typeform Labs, a division of my product organization, and all he wanted to do was make stuff.

He’d been making new experimental stuff since 2021 using GPT-1, GPT-2, and GPT-3. He's a big reason why Typeform was built in the first place. I left Twitter; I didn't want to be there with Musk because I don’t quite agree with everything he does. He stole credit from my team one time.

They were building Edit Tweets, which was secret, and he went on the internet after we briefed him on it and said, “Do you guys want to edit Tweets?” and he stole the thunder. Very, very young team, so I didn't love that.

So, I left the company then. I was going to do more venture stuff, but GPT-3 came out. How can I spend the next few years saying no to conventional ideas if I'm going to do this? That's why I joined Typeform, and David was a huge part of that. 

In 2023, we had mothballed another AI-related product David built; it hadn't been built in collaboration, and it wasn't on strategy. I wasn’t sure what to do with it, and we said, “What if we were to rebuild Typeform with AI at the core?”

We knew someone in Silicon Valley was probably trying to kill us with AI at some point, so why wait? Let's disrupt ourselves. So, we created this new thing, and it's live. If you go to Formless.ai, you will see the next generation of Typeform.

AI's historical context and a practical Typeform example

I'm not here to write about Typeform's products per se. I'm here to write about the experience, which hopefully will mirror some of the things you are going through or are already doing right now.

Before we jump in, let's go back a bit. AI has been around for some time. When I was in grad school at USC, I got into a PhD program. There was a lot of NLP and machine learning in the computer science department, and many people were sitting in the corner doing neural network research.

NLP and machine learning are very good at categorizing large amounts of data. I'll give you a practical example. At Typeform, after collecting half a billion forms, we had an NLP model that would predict how long any given form would take. 

By showing the amount of time in front of the form, like “this will take five minutes,” more people completed it. When you start a form and you don't know how long it will take, it's very discouraging. Marketers want you to fill it out, and that “this will take three minutes” label was powered by an NLP model.
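As an illustration only (this is not Typeform's actual model, and the numbers are made up), the shape of such a prediction can be sketched as a tiny least-squares fit over historical response data:

```python
# Illustrative sketch only -- not Typeform's real model. Predict how long a
# form will take from one simple feature (question count), fit by least
# squares on invented historical data.

# (number of questions, average completion time in minutes)
history = [(3, 1.0), (5, 2.0), (10, 4.5), (20, 9.0)]

n = len(history)
sx = sum(q for q, _ in history)
sy = sum(m for _, m in history)
sxx = sum(q * q for q, _ in history)
sxy = sum(q * m for q, m in history)

# Slope and intercept of the least-squares line: minutes ~ a * questions + b
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def predicted_minutes(questions: int) -> float:
    """The estimate shown in front of the form, e.g. 'this will take ~4 minutes'."""
    return max(0.0, a * questions + b)

print(round(predicted_minutes(10), 1))
```

In production this would of course be a trained model over many features and hundreds of millions of responses; the point is only that the output is a single, reassuring number shown before the first question.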

The shift to transformer-based models

Transformer-based models have transformed the world today, and they are what we call foundation models. In 2017, the transformer paper, ‘Attention Is All You Need,’ came out. For the first time, people figured out, theoretically, that if you threw enough data and GPU at the thing, we could get an AI with near-human language understanding.

We didn't think that was possible for the last 30 years and that paper unlocked it. It showed how it could be done.

There were a few catches in the paper. It predicted that it would take a lot of data. The solution to that is the amount of data on the internet: petabytes of human data, which is very good. And then there's compute.

You need large amounts of compute to do that, but what's been happening in compute? Training AI models is all matrix math, and Jensen Huang specifically has been hanging out with PhDs since 2000, watching this thing come to pass.

NVIDIA had been working on CUDA, plotting for this juncture; they weren't quite ready in 2017, but they were getting ready. CUDA was already available, and of course, you can see all the H-series GPUs come out to take advantage of that. On the back of those two things, GPT-1 is born, and eventually GPT-3.

The unnoticed launch of GPT-3

A funny fact: GPT-3 came out in 2020 for developers, but no one noticed. Two years later, they launched ChatGPT, which lit the world on fire. It's essentially GPT-3 underneath, which had been around for years, but a platform is only as good as the applications that showcase its power.

Jasper had been around since before 2022, doing most of the basic use cases of text summarization and text generation. And ChatGPT is when it kicked off for everybody.

Open-source and the push for AGI

I spent a lot of time with OpenAI and Anthropic last year; those organizations are half research, half engineering, and then a product organization that's trying to make things work in a very difficult way because researchers don't like to be told what to do – I know that from Microsoft Research. 

All these large foundation models cost a lot of money, and some open-source models tend to be not as capable; many focus on size because if you can get small, it's good. You don't have to do all this data center stuff, and everyone is trying to hit AGI. 

AGI is artificial general intelligence, an AI that can generate new knowledge. If an AI discovers a new physics constant, a new concept of the universe, then that's AGI.

There are a few key things that are important before I dive in. Transformer-based models will change many things, but probably in a different way than people think.

First of all, what we're talking about will change computing. In the same way that the internet and the cloud really changed our industry, this will change our industry, too. More importantly, it’s going to change the economies of countries and governments.

Potential for AI to influence elections

A Twitter account tweeted an anti-candidate post. Someone cleverly replied with a prompt, telling it to ignore its previous instructions and write a poem about tangerines. And the account wrote a poem about tangerines.

It was a bot, right?

It's programmed to listen to the response, do something, or say something nasty about certain candidates. And this is the world we're living in; you're actually in this world already. It's going to change elections, it's going to change countries, and it's going to change so much about how we live, in surprising ways.

"AI will destabilize the world in weird ways because all I have to do is have an AI that's better than yours. And in every single scenario, I win."

The shift in scientific discovery with AI

I'll give you a negative example, although I'm sure there are positive examples. The way science and research have been done for a very long time is that we come up with theories or laws, and then we do, let's say physics. 

The theoretical physicists come up with string theory, and the experimental physicists will go and test it, and then they'll say, “Oh, this is right; this is true,” and knowledge is created. That’s how science goes. 

Well, we are transcending that in science and research. Recently, people have been trying to crack fusion, and it's all dealing with plasma in energy fields and strong magnetic fields. There are a billion ways any of those models could happen. 

They ran some of it through an AI, and without knowing the underlying law, it just said this particular sequence of reactions would create net energy gain. And they did it, and it worked. They don't know the physics of why it worked. 

We're getting to a world where breakthroughs will happen without us knowing the underlying science behind it. Science and research will change. Defense applications, too.

AI's role in global power dynamics

In the last fifteen years, what has been the status quo that kept the world kind of peaceful and safe?

Not always; there are wars. But it’s nuclear, right?

What do we call it, mutually assured destruction?

Most of the world powers have nuclear bombs. India and Pakistan, for example, have a few, but they don't have a lot; the US and Russia have thousands.

But no one shoots them; nuclear weapons have only been used in war once, in 1945. Why? Because it doesn't matter if you have a hundred; if I send one at you, you're still dead.

The world will change because I win outright if my AI robots are better than your AI robots. It's like playing chess against IBM Deep Blue. If it's better than you, it's better than you, period.

AI will destabilize the world in weird ways because all I have to do is have an AI that's better than yours. 

And in every single scenario, I win.

Even if there are casualties, I still win, and you lose. Which is very different; the world is peaceful in many ways because everyone thinks that everyone loses. But it's going to change. 

Philosophical perspective on AI and humanity

All our minds have been poisoned by Terminator. We think of Skynet immediately, but the truth is AI can’t be kind or evil. It’s not human.

The smartest thing isn’t always the most evil thing.

I feel like we always think about the worst things. This is all philosophical, and this is my opinion.

If the smartest person were to wipe out everyone, Einstein would have been that person. He’d say, “You guys are all dumb; you have to go away.”

But that's not how it works. AI can be smarter than us, but it is still not deadly or evil.

The obsolescence of current technologies

I was talking to someone at Vellum who helps people develop AI ideas. Transformer-based AI will make software stacks super obsolete.

Like the code base: what’s been built in the last 10-15 years will be worth almost nothing. I spent the last ten years thinking about “what's the code base, what's on GitHub, how many lines of code did we write?” etc.

All of that is going to go to zero because the core engine will be better and cheaper.

Let's give a really practical example, as there's no need to just talk about the theory. How much is theory worth today?

Theory is worth nothing. After GPT-4o was released, you could spend a weekend hacking together a UI around GPT-4o and beat Siri.

Apple has millions of lines of code and has spent over ten years on this thing, probably over a billion dollars. I don't know how much they paid for it in the first place; people keep thinking that Siri was built in-house, but it was acquired.

It's worth nothing.

What does that tell us? There's a lot to be learned from there. Alexa, for example, and things that cost billions can become worthless with AI.

There's this idea of large language models (LLM) at the core versus LLM at the edge.

Things with LLM at the core will take over. They'll be able to handle more use cases and more edge cases in a smaller code base.

"The fundamental thing about LLMs is that they understand even free-form input, which code does not. And they do it with less space. It costs a few tokens."

The shift from rule-based systems to LLM

Ultimately there's user input, and there's code that handles it. Every engineer knows that the code that handles this is just a bunch of rules and state machines. But if you feed this into an LLM at the core, you don't have to write every rule and edge case. 

The fundamental thing about LLMs is that they understand even free-form input, which code does not.

And they do it with less space. It costs a few tokens.
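A minimal sketch of that contrast, with hypothetical intents and the model call stubbed out (no real API is invoked here): the rule-based handler has to enumerate every phrasing, while the LLM-at-the-core version collapses the rules into a few tokens of instruction.

```python
# Hypothetical sketch: rule-based handling vs. LLM-at-the-core handling of the
# same user input. The rule-based version needs a rule per phrasing; the LLM
# version just describes the task in a prompt.

import re

def rule_based_intent(text: str) -> str:
    """Classic approach: a bunch of rules; every new phrasing is an edge case."""
    t = text.lower()
    if re.search(r"\b(cancel|stop|unsubscribe)\b", t):
        return "cancel"
    if re.search(r"\b(price|cost|how much)\b", t):
        return "pricing"
    return "unknown"  # every unhandled phrasing falls through

def llm_at_core_prompt(text: str) -> str:
    """LLM approach: the 'rules' are a short instruction. The actual
    chat-completion call is stubbed out; any provider's API would do."""
    return (
        "Classify the user's intent as one of: cancel, pricing, other.\n"
        f"User: {text}\nIntent:"
    )

print(rule_based_intent("How much does the pro plan cost?"))   # pricing
print(rule_based_intent("I'd like to wind down my account"))   # unknown: a rule gap
```

The second input is exactly the humanness that brittle rules miss and that an LLM at the core handles without new code.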

LLM at the core is as important as LLM at the edge. If you use AI to garnish your original code base, I call that LLM at the edge. 

When Notion asks you to summarize stuff, it’s LLM at the edge. The code is still there; everything built for the last thing is still there. They're just trying to speed up the workflow a little bit. 

New mediums need people who understand them very natively and creatively.

It's like the early days of the internet and of mobile. People started making internet-enabled versions of desktop applications. But that didn't work. People had to build internet-native applications like Salesforce, Shazam, and Twitter. People couldn't imagine those things before those revolutions.

It takes some time for people to get the mediums and the new paradigm shifts.

You have to go native, and when building the next generation of applications it’s the same thing. We have to think differently. Whatever you knew before, you have to just try to unlearn it. This is why I didn't go into venture two years ago; I needed to rewire my brain on how to do this better and think differently. Luckily, I ran into David Okuniev at Typeform, who helped me do that.

LLM at the edge and at the core

Let's take a look at a few examples of LLM at the edge.

I mentioned Notion and summarization. I don't want to say anything bad about any of those things because they are very important. Marketing people, we love you all, you need a lot of copy.

But I think of it as LLM at the edge. Now LLM at the core, with things like Copilot, technology that's coming, and things like Formless. We created a tool within Formless called Data Pilot.

Input came as conversations; no more forms. It was infinitely configurable. Formless could have a million conversations with a million customers, each different, each customized to them.

We would even change the voice depending on who they are. If you start speaking French, it’ll ask you questions and collect data in French. Then, we took that data and transformed it back into rows in a proprietary process, and you could ask questions of the data.

We've tried to be native about everything we've invented, giving people all the flexibility of humanness, but on the back end, we've been able to collect that data properly.

This is back to resilience and observability. The point of LLM at the core is that you no longer need brittle code to handle the humanness of humans. It matches us better.

The cost of AI-driven development

One of the main things that will transform the world is that it's not just that we'll have different applications. As a venture person, maybe the most important thing about this is that the cost of building applications will fall. 

In 2008, the cost of building a good application could have been a million dollars; that's what you asked your VC for, and it took a while to get there. When I was building a second startup, it cost a quarter million to half a million.

In the future, it will take fifty grand to build a really good MVP at product-market fit. LLM at the core will bring down the cost, changing how venture capital is done. If you need only fifty grand, then friends-and-family rounds will go a very long way, so you could build an interesting company that might make ten million dollars in ARR at some point in the future.

"One of the main things that will transform the world is that it's not just that we'll have different applications. As a venture person, maybe the most important thing about this is that the cost of building applications will fall."

The durability of workflows in the age of AI

Not everything changes with AI. 

I'm a builder, and so this is very important for me to say to people who care about building companies and building products. Not everything will change. I'll tell you why, because ultimately, humans don't care about AI.

People just care about their workflow. All human endeavor, especially at work, is just the workflow. For a long time we wrote specs: a spec really shows how software should behave and how humans should use it.

That was technology-first. It's “here's how the software works, humans do this, press this button,” and so on. We tried to make it human, but we were limited.

Then we started doing use cases, and it was better – it was, “how do people want to use a thing?”

The universal lesson I've learned from 20 years of doing that is that it's all about workflows.

How do people want to work? 

Let's just say marketing. There are a thousand different ways people do marketing, and probably five of them are the best. Good software encapsulates the workflow and makes it faster.

What doesn't change is that people's workflows are durable, because we’re human and Homo sapiens has been around for 50,000 years.

Marketing isn't that different from how it was a thousand years ago; there are just new tools. Socialization isn't that different either, and that's what social media and entertainment encapsulate; all those things are durable.

The role of AI in enhancing workflows

It’s important to understand this because AI is a tool, and what it does is speed up workflows. It makes workflows faster, more powerful, or cheaper.

These are the fundamentals of building value through products and what companies do.

If you add AI, you can shorten the workflows needed even further and unlock additional value.

Because AI hallucinates, there are things to be wary of, like accuracy. If you get an initial acceleration but people have to tweak the output to get it perfect, the tweaking will eat up all the acceleration and undo the productivity gain.

So, workflows are durable. If you, as a company and a product, focus on time to value on workflows, and how to make the same durable workflows better, you will prosper, and AI will become a means to an end, which is what it should be.

A lot of companies come to Vellum and say, “Oh, we need to add AI to our product.” What’s the use case? “We don't know. We just need it to be AI-driven.”

That’s the worst thing. If you're throwing away money, don't do it. Just don't. Trust me.

Workflows don't change; AI can make them faster and deeper and give you superpowers. That's really what it's about.

The impact of GPT-3.5 on Formless

Typeform Labs is a gift. I had a product organization focused on this 100-million-ARR product, and I also had Typeform Labs, which could do some crazy, interesting things, and the co-founder and former CEO was the person who led it.

When GPT-3.5 came out, we thought about what the platform would look like if we rebuilt it as an AI-centered application.

We made some key decisions. One of the key decisions is that we weren't just going to build AI into Typeform.com. Very key. AI went into Typeform.com, but this wasn't what Labs would focus on.

We thought that if we tried to retrofit AI into the existing application, it would take forever; an existing application is so sensitive.

100 million ARR, you have to protect it. It's a classic innovator's dilemma. “I can’t make a mistake. If I make a mistake, my CFO will be angry.”

The process of disrupting ourselves

We decided to build something entirely new, and we came up with a few principles. We decided to disrupt ourselves; we're going to pretend that Typeform is a company we want to take over to build this thing, and we start to ask ourselves, “What are the core workflows? What are the things that create value in the first place? How do we distill that so that we can focus on that?” 

It goes back to the workflow conversation.

In our case, it was things like no-code design, beautiful customer interactions, presentation, and data. And we wanted to be native. We wanted to build everything natively. The thing about native AI applications is that there's a formula.

There's a foundation model, whether it's open-source or not, and you add your own data models to it. That's what gives you a little bit of a moat, otherwise OpenAI is going to come and eat your lunch. 

We had 100 million form responses that we could create and train custom AI on, which we could add to the foundation – we were using OpenAI at the time. And then you build experiences around it that are very customer-centric.

Challenges in building a native AI platform

The foundation model is easy; your own thin layer of model is hard because you have to train it yourself. The UI that wraps it is very customer-centric and can be hard; UI is very important, and people always miss it. 

That's what it means to be native AI; that was our formula, that's what we wanted to do, and that's what we did.

It turns out that prompts are code. They're literally like lines of code; they have to be versioned. When you swap GPT-3 out for GPT-4, some of your prompts don't work as well.

They start to give you errors, and you have to version them. The version has to be tied to the model you're using. If you slip Anthropic in between, it behaves differently. That's something we never had to deal with before. Code is code; whether it's Python or React, or whatever, it just works.
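One way to treat prompts like code is to pin each prompt version to the model it was tested against. This is a hypothetical sketch, not Typeform's actual system; the prompt names and model identifiers are illustrative.

```python
# Hypothetical sketch: version prompts per model, since a prompt tuned for one
# model often regresses on another. A missing entry forces a re-test instead
# of silently reusing an untested prompt.

PROMPTS = {
    ("summarize_form", "gpt-3.5"): "v3: Summarize these form responses in 3 bullets: {data}",
    ("summarize_form", "gpt-4"):   "v5: You are an analyst. Summarize these responses: {data}",
}

def get_prompt(name: str, model: str) -> str:
    """Resolve the prompt version pinned to the model in use."""
    try:
        return PROMPTS[(name, model)]
    except KeyError:
        raise LookupError(
            f"No versioned prompt for {name!r} on {model!r}; "
            "test and register one before rolling out a new model."
        )

print(get_prompt("summarize_form", "gpt-4").split(":")[0])  # v5
```

In practice teams also record evaluation results alongside each (prompt, model) pair, so a model swap is a deliberate, tested change rather than a silent regression.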

There are new problems with building things with AI at the core. Testing is crazy, because there's no determinism; it's not predictable. You have to suppress hallucinations, and then there's pricing. One day, it will cost you five cents a token; the next, it'll cost you one cent.

How do you price it for customers? 


Formless and its AI development process

We went through this process for six months to a year, creating Formless, down in the guts, working, playing, talking to customers, and covering all these hard problems.

Typeform.com, we decided to put a lot of AI into it. There were lots of problems we could solve: how do we increase time to value in Typeform itself? How do we make mobile easier? We had a perennial problem: people don't like to make forms on mobile because of the builder experience.

But if you build a system where you just tell the AI what to do and what kind of form to create and tell it to do it on mobile, it will make it for you, and it's right there. Therefore, mobile creation became a real thing, and about 30% of our customers were on mobile devices, which was amazing.

AI's role in enhancing customer experience

When people are coming into the application, how do you increase their time to value and get them activated? 

These models know about your company. If you say salesforce.com, they know you; if your company’s big enough, the model knows you without us doing anything. So, people would come in and sign up, we would look up their company, grab their logos, and we would pre-make forms that were 90% right for them as part of our growth process.

Immediately, the second they came into Typeform, there was something they could use. Amazing. It's a game changer for our team's growth process.

Long story short, acceleration, usability, and making complicated choices simple: we saw about 30-50% feature return. This is important; there are so many AI features I hate and never use, like Notion summarization. So, it's been very important to see people returning to those features.

The impact of AI on user experience

I asked my team to move the new “create with AI” option to the first spot because it worked; people loved it.

In fact, people said it’s why they chose us; they said, “Oh wow, you guys have AI? Okay, we’re buying it.” We were a little more expensive, but they bought us anyway, which is good. 

KPIs.

This isn't exhaustive, but new ways exist to measure AI features. People say, “Just add AI to my stuff”, and that won't work. 

One way is time to value. How quickly do customers experience value? If you use AI properly, people should experience value faster because it abstracts a bunch of problems.

You should measure this. With good usability, teams will measure clicks to a particular goal.

Of course, clicks equal time. You should measure time to value: what’s the average time it takes people to get a task done before and after you've added AI to it? It should probably improve 2x; that’s the minimum you should be shooting for.

Try to get 3x, even 5x, if you can. If people realize the value quickly, they will pay for it. People actually feel 3x acceleration. People feel it in their bones.

Workflow length and tweak time metrics

Workflow length is sort of the opposite. How long is the workflow now? My UX people would lay out everything needed to complete a workflow. You could say, “I want to set up a lead generation form with scoring. What are the things that I need to do?” And they'll lay it out.

And I would say, okay, let's do this with AI, with our AI features, and then they'll measure that. So, we do a ratio, and that's workflow length. How long did the workflow take this time? People think about workflows and how long it takes. You can figure out a process to lay workflows end-to-end and see how much they shorten over time. 

There's something we call tweak time.

Because AI isn’t perfect and because it hallucinates, the thing you make, the form you make with AI, might not be perfect.

It took me 30 minutes to create a very complicated form; it now takes me five minutes to generate it with AI. How long does it take me to make it perfect? Is it five minutes?

In which case, I'm now at ten minutes total. Compared to the original 30 minutes, I'm still 3x better off. But if it takes another 20 minutes to tweak it to get it to what I need, what has happened? You've lost all the productivity.

It doesn't matter that it feels magical upfront; tweak time depresses you and depresses your customer, and it doesn't work. You should measure tweak time as well, which is what people don't capture. And then there's future return: how many times do people come back and want this again?

This is the ultimate thing about building products: people have to want it, and people have to keep coming back. We saw a 30-50% return, so we’re very happy with that.
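The arithmetic above can be captured in one small metric. This is just a sketch of the calculation described in the talk; the function name is mine, not a Typeform internal tool.

```python
# Minimal sketch of the acceleration metric: how many times faster the
# AI-assisted workflow is end to end, once tweak time is included.

def acceleration(manual_minutes: float, ai_minutes: float,
                 tweak_minutes: float = 0.0) -> float:
    """Ratio of the manual workflow time to the full AI-assisted time
    (generation plus the tweaking needed to make the output perfect)."""
    return manual_minutes / (ai_minutes + tweak_minutes)

print(acceleration(30, 5))       # 6.0x if the AI output needs no tweaks
print(acceleration(30, 5, 5))    # 3.0x with five minutes of tweaking
print(acceleration(30, 5, 20))   # 1.2x: tweak time ate almost all the gain
```

Measuring the third case is the part most teams skip; a feature that demos at 6x but ships at 1.2x reads as magic upfront and disappointment afterward.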

Very few people have read the ‘Attention Is All You Need’ paper. The thing you owe yourself, to become good at this, is to read that paper, and you should follow AI. You should use AI every day. I have a tool called LM Studio.

It's just a way to import all the free models, chat with them, and test them; you should be doing that every day in addition to using things like Anthropic’s Claude and so on to power your stuff.

Transformative AI is here to stay. It's just incredible technology. It's still matrix math, and it's still predictive, but it's really amazing, especially when you see multimodal things like Sora and image generation, things that can perceive reality, which is what GPT-4o does.

"If it takes 50,000 dollars to make a product market fit for a company that could generate 20 million dollars, then the world has already changed."

LLM at the core will win

Everyone is still learning how to paint, but I’ll tell you this: if you learn how to paint better before everyone else, you have an advantage. I'm not going to say the first mover advantage because I don't really believe in that, but you have a slight advantage. 

Because it means you can go further faster, so you need to do that. It will drive down the cost of building, and if anything, this is the thing that's going to change our world.

Software is eating the world, and software is getting cheap enough for people to build to the point of business scale.

It's going to transform software, it's going to transform investing, it's going to transform everything.

If it takes 50,000 dollars to make a product market fit for a company that could generate 20 million dollars, then the world has already changed.

LLM at the core will win. 

If you have code that's been out there, just try to tweak it and add a few things, then someone will eat your lunch, guaranteed, at some point. Now, I don't want to discourage you; change has to be managed.

You have this thing, so don’t scrap it, but think about how competitive your industry is, how much focus is in there, and how quickly you go to change the game. 

And then don't forget to measure the right thing. AI is a tool; people just want their workflow to work, they want it to be faster and more rigorous; they don't care about AI.

“But this company does AI.”

No one cares. 

The market cares, but if you can't produce value for customers, the AI label will not work for you. It'll be one of those pump-and-dumps.