Image Source: Invest Like the Best
Z Highlights
If there were a computer that could provide you with all the relevant background information, instantly making you smarter in a conversation, that would be a huge "capability upgrade." LLMs can rewrite content in real time and extract the information you need, which will be a huge liberation of human capabilities.
In this field, there are indeed switching costs and some moats, but ultimately, the winners will be the teams that continuously iterate faster and build better products. This field is changing too quickly to afford three months of complacency. Granola itself does have a certain "stickiness": the more context it has, the more helpful it becomes to the user.
This creates a tension: you have to choose between "building the next five useful features for users" and "making a big leap in innovation." We hope Granola isn't just a note-taking tool, but becomes the place where you do most of your work.
Chris Pedregal is the co-founder and CEO of Granola, an AI-powered smart meeting notes tool that is redefining how knowledge workers operate, helping users more efficiently record, organize, and recall key information from conversations. This article is a transcript of an interview between Patrick O'Shaughnessy, host of Invest Like the Best, and Chris Pedregal, broadcast in February 2025.
AI Is the Next Thought Tool
Patrick: Chris, I think a great way to start is to talk about your understanding of the concept of "thought tools," which are cognitive tools that technology has provided to humanity over the past few centuries. Obviously, you are building one of these tools. We'll delve deeper later, but when we first spoke, you used the x-y coordinate graph as an example to illustrate the value of such a "thought tool," and I was fascinated. Perhaps you can freely discuss your thoughts on this direction and why you are so captivated by it.
Chris: I love this topic. I think humans are essentially "tool-making animals," which is one of the key things that distinguishes us from other animals. Looking back at history, there have been many inventions that truly enable humans to do more. Some of these are explicitly "thought tools," such as writing and mathematical symbols. For instance, calculation with Roman numerals is very limited; it's difficult to work with large numbers without an abacus. The decimal notation we use now, however, lets you easily perform long division on large numbers. Another of my favorite examples is data visualization: William Playfair, about 200 years ago, was the first to display data graphically, allowing people to "see" data with their eyes.
Human evolution has made us very efficient at processing images, so mapping numbers to visual space, letting people intuitively perceive "whether this line is going up or down, faster or slower," is truly amazing, and it only appeared about 200 years ago. These examples tell us that from mathematical symbols to writing, data visualization, and then computers, each stage has expanded what humans can do. AI is the next stage, and it will become an even more powerful and useful "thought tool." What it will look like in ten or twenty years, who knows, but it certainly won't be what it is today.
Patrick: Perhaps you could talk about that transition. Briefly introduce the product you're building, and then we'll dive deeper. What was your reaction when you first saw LLMs (large language models)? How do you view the "paradigm shift" they can bring?
Chris: I think an interesting observation about "thought tools" is that their role is often to let you "externalize" what you would otherwise have to hold in your head. For example, one of the most common "thought tools" today is pen and paper. What you write down, you don't have to keep in your mind all the time; you can go back, look at it, and think about it. Figuratively speaking, it's like extending your "memory." The "memory size" of our brains is fixed by physiology, but these tools effectively expand it. The breakthrough of LLMs is that they can dynamically bring you the most relevant "context" when you need it, generated based on your current needs. Just as writing down some ideas in a meeting can make you more organized, if there were a computer that could provide you with all the relevant background information, instantly making you smarter in a conversation, that would be a huge "capability upgrade." And LLMs can rewrite content in real time and extract the information you need, which will be a huge liberation of human capabilities.
Patrick: How will this vision manifest in reality? Does it mean that everything in my life—everything I've read, every conversation I've had—will eventually be stored? And then I can feed the current context into a system, and it can provide me with insights or inspiration? Could you describe your vision in more concrete terms?
Chris: Before I get to Granola specifically, let me talk about the overall idea. In the world of AI, we can see the next two steps clearly, but it's hard to predict the world ten steps ahead. Granola is a "digital notebook," similar to Apple Notes on your computer: an application where you can take notes. The difference is that it "listens." If you use it in a meeting, it records your written notes and also transcribes your conversation in real time. After the meeting, it expands and optimizes the notes you wrote. So you don't have to record all the information; you only need to record key insights and judgments, and let AI handle the mechanical, repetitive information capture. The significance of this is that when you review your notes, you can see the complete context of the meeting. We also have an unreleased feature that lets you view all meeting notes with a specific person, or all meetings on a certain topic, and summarize the commonalities and themes. This context would ordinarily be forgotten; even if you wrote it down, you wouldn't know which notebook it was in. Now it can become instantly available.
Patrick: Perhaps you can talk about your own experience as one of the earliest users, how Granola has changed your workflow? You mentioned that only 5% of the vision has been completed so far. What specific changes has this 5% already brought to you?
Chris: I think this will be the general state for knowledge workers—we constantly think, "What context do I need now to make the most informed decision?" People who use ChatGPT should be familiar with the concept of "context window," where you can input information to help the model understand the current situation. This "providing context" way of thinking will also be adopted by us humans. Let me give a concrete example: I'm going to write a blog post now. Before, I would open a notebook, write down ideas, and then type them into a document. Now, I'll first talk to a few people for their suggestions, record them (with Granola). Then I'll walk around, speaking out my thoughts, also recording them with Granola.
I put these materials into a folder, and then converse with the AI in Granola, asking it to extract themes and suggest formats. Ultimately, I still write the blog post myself, but this process lets me efficiently integrate various opinions. Without these tools, I would definitely forget a lot of the content. Another example is a fundamental change we've observed in how Granola users take notes. They write only a few notes during meetings, and those are internal thoughts, such as "This person is a bit aggressive" or "I'm a bit concerned they didn't answer the question directly." These are things the AI can't hear; the objective content is handled by the transcription system. When reviewing these notes, users no longer scroll through everything like before; instead, they directly ask the AI a question, such as "What did they say then?" and the AI quickly generates a high-quality answer.
Building Granola: Upholding Product Philosophy Amidst Change
Patrick: So, could you talk about your envisioned journey from 5% to 100%? We know the future of LLMs is unpredictable, but what do you think it will look like two or three steps from now?
Chris: I think the core question is: "What information do I need now to make the best decision?" You can imagine if you were a diplomat, before a critical negotiation, you would receive an "intelligence brief" tailored for that meeting. In the future, each of us will automatically receive similar information packages before every meeting we enter. The question is: what context information is useful? Is it the content of the last meeting? All your emails? Or all information on the internet? What will the interface look like? Granola's current goal is to help you generate the best meeting notes, but in the future, it should help you complete all post-meeting tasks, such as drafting emails, preparing investment memos, organizing events, etc., and these should all be 80%, 90%, or even 95% completed by AI in context. For us, a very important point is: we believe the role of AI should be to "augment humans," not replace them. You can use AI to replace a person, or you can use AI to augment a person—we choose the latter. We want humans to achieve more, be smarter, and be more efficient with AI. So we want Granola to take over low-value writing tasks, and you only need to add your own judgment, which is the most critical part.
Patrick: When do you think we'll be able to have in-person meetings recorded just like Zoom meetings? I already have that expectation. I hope to have a "memory assistant" that can accompany me to meetings, so I don't have to take notes manually. I'd even be willing to wear a recording pendant. I meet so many interesting people every day, I can't remember everything just with my brain. I usually only take frantic notes after meetings, but it never feels like enough.
Chris: Our iOS app will be launching soon. My co-founder Sam and I initially developed Granola because we needed it ourselves. We originally just thought it "might be useful," but we didn't expect it to take off so quickly. Once someone starts using Granola in important meetings, they are essentially "outsourcing" part of their long-term memory. They develop a dependency: they can always come back and look up past content. The emails that sadden us the most are from users who say exactly what you just said: "A third of my meetings are in person, but there I feel completely 'naked'; I can't remember anything, and I desperately need an in-person version of Granola." I'm talking about Granola now, but whether or not we develop it, I can confidently say: this type of tool will quickly become a common tool for everyone, and our "social habits" around meeting notes will change with it.
Personally, I dislike the idea of an "invisible pendant" listening to everything. In Silicon Valley that's part of some people's vision of the future, but I don't like it. For a work environment, I think a phone is very suitable. You place your phone on the table, and this acts as a simple and clear social contract among attendees: everyone knows the phone is recording. This is exactly how we work at Granola. Basically, in every Granola meeting, if someone puts their phone out, everyone is very clear whose phone is recording. I think this "social contract" is very important. Whether to use a recording tool, like other things in a work environment, is decided by each individual. If you put out your phone and frankly explain its purpose, everyone benefits. And I do think this change will come faster than people expect. But in social settings, it's completely different.
Patrick: One thing I'm particularly curious about AI application companies right now is—some of the most powerful tools are being built by small teams of fewer than 25 people. And even as their user base and revenue grow rapidly, their team size hardly expands. They no longer need larger teams. Can you talk about what that experience is like in practice? Not just the product itself, but the process of building a company in this AI era compared to the "non-AI era" companies you've worked on before?
Chris: I think there are two defining characteristics of this AI era: (1) the insane speed of technological progress; and (2) the fact that Granola is a product built on top of large language models (LLMs), at the application layer, so we directly benefit from the continuously improving LLM layer underneath. If we weren't building on foundational technology like LLMs, we might need a very large team to achieve our current product. So we really benefit from this technology. That said, the key to Granola becoming an excellent product lies in meticulously refining technical details, including many edge cases you wouldn't normally think of, such as someone taking off their AirPods in the middle of a multi-channel Zoom meeting; Granola needs very specific handling to ensure the entire experience isn't disrupted. You'd never think of this before building the feature. We use as many AI tools as possible internally at Granola, but at least in development, some tools aren't mature yet. We are very close to "full automation," but still need to do a lot of manual work. I'm reluctant to predict a timeline, but if you fast-forward three years, I believe our current way of working will be completely overturned.
Patrick: So you're saying this is primarily an engineering challenge? Are you hoping to use Cognition, Cursor, or other tools to allow the team to simply give instructions like "managers" without actually writing code?
Chris: Exactly. Our CTO Vas has a clear goal: to minimize the amount of code each Granola engineer writes daily. He strives for this goal every day. We recently had an internal team building event, with the theme "Doing things with AI that you wouldn't expect." Let me give an example: I bought some shrimp in Spain and was going to grill them for everyone, but I'd never grilled shrimp before, so I opened ChatGPT and asked how to do it. Vas told me: "Don't just limit yourself to fragmented information; give it context." So I took photos of the shrimp and the grill. It turned out the AI said: these shrimp are already cooked, they just need to be heated! We actually hadn't understood the Spanish packaging and almost grilled them unnecessarily.
This example shows: AI usage habits are reshaping our intuition. Just as when the web first appeared, some people were used to looking up information in libraries, while a new generation instinctively searched Google first. AI is the same; in the future, there will be so-called "AI natives" who naturally know how to provide context and how to collaborate with AI. For me, I'm 38 years old, and my whole team is pushing me to improve. I'm someone who thinks about this issue every day, but I still don't use AI frequently enough. If even I am like this, imagine how far behind the general public must be.
Patrick: So does that mean "context capture" is a key issue? Your product is built around conversational context, which is an extremely important input in work. How do you think we'll capture other forms of context in the future? Can you riff a bit on your understanding of "context capture"?
Chris: Capturing context, that is, collecting all the data, is not difficult. It's only a matter of time before you can connect all your emails, notes, company documents, tweets, and so on to Anthropic or ChatGPT. But the real question is: for what I'm doing right now, which part of the context is "useful"? This might be a technical problem, or it might be a UI problem; I'm not sure. I do believe the most critical issue currently limiting human-AI collaboration is the "interface." We're at the stage of the early computer era's terminal interface: you typed a line of command, and the computer responded with a line. Our interaction with ChatGPT today is similar. I don't think the "chat" mode will disappear, but it will eventually seem very primitive. The amount of control you have as a user is simply too limited.
I looked up history, and there's a good analogy: the earliest cars didn't have steering wheels; they used a lever to control left and right. It was fine at slow speeds, but once you drove fast, it was easy to lose control. It wasn't until later that the steering wheel was invented, which led to the precise control of cars we see today. I believe we still haven't invented that "steering wheel" for "collaborating with AI." Right now, we only have crude control—you say something, the AI responds, and then you respond again. The future should be a more fluid, collaborative interaction experience. We haven't built it yet, but it will definitely come.
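The "which part of the context is useful" problem Chris raises can be made concrete with a small sketch. This is purely illustrative: the scoring function (keyword overlap plus a recency bonus) is a deliberately crude stand-in for whatever a real system like Granola actually uses, and all names here are invented.

```python
# A deliberately crude sketch of context selection: given many candidate
# items (emails, past notes, documents), pick the most relevant ones that
# fit a token budget. Real systems would use embeddings and proper token
# accounting; keyword overlap and word counts stand in for both here.

from dataclasses import dataclass

@dataclass
class ContextItem:
    text: str
    age_days: int  # how long ago the item was created

def score(item: ContextItem, query: str) -> float:
    """Relevance = keyword overlap with the query, plus a recency bonus."""
    overlap = len(set(query.lower().split()) & set(item.text.lower().split()))
    return overlap + 1.0 / (1 + item.age_days)

def select_context(items: list[ContextItem], query: str,
                   budget_tokens: int = 100) -> list[ContextItem]:
    """Greedily take the highest-scoring items that fit the budget."""
    chosen, used = [], 0
    for item in sorted(items, key=lambda i: score(i, query), reverse=True):
        cost = len(item.text.split())  # rough token estimate
        if used + cost <= budget_tokens:
            chosen.append(item)
            used += cost
    return chosen
```

In a sketch like this, the selection loop stays trivial; all the interesting work hides in the scoring function and the budget, which is exactly the open question Chris describes.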
Patrick: Can you specifically describe what this "more fluid interaction" would look like? How might it evolve compared to the current "Q&A" model?
Chris: It depends on the type of tool. But the feeling now is that you and the AI are not collaborating on the same "canvas," but rather working independently on separate canvases. To give a very basic example: when you use ChatGPT or Claude, you can't edit the AI's response. You can't go in and directly change what it wrote, like saying "This part is stupid, I want to rephrase it." You can only issue a command: "Please shorten it," and then hope it gets it right. In the future, we'll think this way of working is simply insane. In fact, there are similar examples in history. Early computer text editors were "modal": you had to enter an "insert mode" to type, then exit, and enter a "delete mode" or "copy mode." It wasn't until Larry Tesler initiated a movement that we got the non-modal editing we're familiar with today—where you can type, cut, delete, and paste seamlessly. This was unimaginable at the time. The "steering wheel" for the AI era hasn't been invented yet. But I guarantee it will be completely different from what we have now. In the future, we will have more granular control, faster collaborative methods, and the experience will become very fluid.
Patrick: Has there been any use case for Granola that particularly surprised you?
Chris: There are a few things that surprised me. One is the sheer variety of uses. We initially built it for work meetings, but soon people told us: "My partner has cancer, and we have many consultations with doctors; Granola has become indispensable in this process. I don't know what I would do without it now." Others use Granola for various "non-meeting purposes": if I'm brainstorming an idea, I'll just create a "virtual meeting" and talk to Granola; if I want to plan my day, I'll say all my to-dos to it and let it help me prioritize; if I'm watching a YouTube tutorial video, I'll listen and take notes in Granola. One of the biggest surprises is that users read old meeting notes less and less; instead, they directly open Granola's chat function and ask it: "What did Y say in meeting X?" and get a high-quality answer.
AI Entrepreneurship Philosophy, Ambition, and Human Reflection
Patrick: As an application builder, how do you view the "attention and business battle" among model providers?
Chris: I think it's the best thing, it's great. I fully support this competition. We are building applications on top of foundational models, and the speed of model improvement in the past three years has been absolutely astonishing. Companies like ours benefit tremendously from it, and so do users.
Patrick: So how do you do it? Do you evaluate which model is better every morning and then hot-swap it? For example, if Anthropic releases a new model, do you switch to it and then switch back if a better one comes out in the future? Is that how it works?
Chris: Yes, that's basically the process you described. The evaluation process isn't simple, but that's indeed what we do: we don't just use one model in one place; different Granola features use different models, and we'll switch to the one that performs best on any given day.
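As a rough illustration of the per-feature model switching Chris describes, here is a minimal sketch. The model names and feature keys are invented for illustration; nothing here reflects Granola's actual configuration or evaluation process.

```python
# Hypothetical per-feature model routing, in the spirit of what Chris
# describes: each feature maps to whichever model currently evaluates
# best, and the mapping can change day to day. All names are made up.

FEATURE_MODELS = {
    "note_enhancement": "model-a",  # best at long-form rewriting today
    "meeting_summary": "model-b",   # best at summarization today
    "chat_qa": "model-c",           # cheap and fast for Q&A
}

def pick_model(feature: str, default: str = "model-b") -> str:
    """Return the model currently assigned to a feature."""
    return FEATURE_MODELS.get(feature, default)

def hot_swap(feature: str, new_model: str) -> None:
    """Point a feature at a better-performing model after re-evaluation."""
    FEATURE_MODELS[feature] = new_model

# A new model wins the eval for chat Q&A, so swap it in:
hot_swap("chat_qa", "model-d")
print(pick_model("chat_qa"))  # model-d
```

The point of keeping the mapping in one place is that a re-evaluation can swap a model without touching any feature code, which is what makes day-to-day switching cheap.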
Patrick: How do you view competition from another dimension? That is, in addition to model companies, other application developers might also build a "better Granola 2.0." Have you considered how your product architecture defends against this competition? Because when a product is "sticky" enough, users may not switch even if there's a better alternative.
Chris: I think the answer to this question is actually quite simple: you have to "build better products faster." In this field, there are indeed switching costs and moats, but ultimately, the winner will be the team that continuously iterates faster and builds better products. This field is changing too quickly to afford three months of complacency. Granola itself does have a certain "stickiness": the more context it has, the more helpful it becomes to the user. So if a user is going to switch to another product, that product has to be significantly better. But this shouldn't make you complacent; if you relax, you'll be caught up quickly.
Patrick: How does your team achieve such high-speed iteration? I've heard of some interesting engineering methods. As an AI application development team, how do you accelerate product development? What practices have been effective, and which have failed? How do you "engineer" speed?
Chris: We have a very clear approach, which is: explicitly define whether you are in "exploration mode" or "execution mode." If you know what feature you need to build, then enter "execution mode": build the minimum viable product as quickly as possible, set a timeline to launch it to real users (even if not all users), and iterate quickly. If you don't know how to solve a certain problem, then you're in "exploration mode": the goal at this point is to find the right solution path, not to launch quickly. In this mode, emphasizing speed can actually be detrimental—you launch a bad feature, thinking you did it quickly, but it hasn't truly solved the user's problem. Especially in this incredibly fast-paced field, occasionally pausing to seriously think about how to do the "right thing" is even more important.
For example: when we were building Granola, we were actually years behind the AI note-taking market. We spent a full year before launching, making us a "seventh-year entrant." During that time, we were constantly exploring. The initial interaction logic was completely different; for example, we originally wanted real-time keyword recording and automatic expansion, which was very cool. But we found that users couldn't concentrate at all; instead, they were distracted by the AI-generated content in real time, which was completely counterproductive. In the end, we completely changed the interaction logic: during meetings, you use it like a regular notebook, and the magic happens after the meeting. This was a very difficult decision: we were throwing away something we had spent six months building. But if we had launched early, we wouldn't have been able to pivot the product direction later. So I think that before the product direction is settled, protecting your "ability to pivot" is crucial.
Patrick: How do you think about your "level of ambition"? On a scale of 1 to 10, where would you say you are? Has that rating changed since you started your company? How do you judge and adjust your ambition? What are your experiences?
Chris: I ask myself every day, "Are we doing this correctly?" When Sam and I first started experimenting with LLMs, we were convinced that all the work tools we use today would be rebuilt or reshaped by LLMs. We also believed that a new class of software would emerge: just as developers spend their days in IDEs like Cursor or Visual Studio, there will be a new, currently unnamed class of software that becomes the "main workspace" for people like you and me, whose work revolves around people and communication, projects, and meetings. This was our initial goal, and it's what we're doing now.
For us, a very critical question is: if you're not OpenAI or Anthropic, you have to be very good at a specific use case "right now"—you can't just build a product that "will be great in the future"; you have to be extremely valuable to users "today." This creates a tension: you have to choose between "building the next five useful features for users" and "making a big leap in innovation." We hope Granola isn't just a note-taking tool, but becomes the place where you do most of your work. If you need to write a document or a memo, Granola should be the best choice because it knows all your work content in context. But this is indeed a broad goal that requires a lot of iteration and effort.
Patrick: If you look back at some of the existing companies that might already be doing or will do some of Granola's functions, which companies would you pay the most attention to? In other words, if you were a VP at one of those companies, what kind of disruption would you be worried about?
Chris: I think you could worry about a million things, but you should choose what to worry about, because very few things can actually affect you. The "competitor" that concerns Granola most right now is the startup that hasn't been founded yet: the team that can start from where we and others have left off and execute faster than us. Those are the companies we pay the most attention to. I'm actually quite surprised at how quickly large companies have responded. After ChatGPT went viral, major tech companies immediately pivoted and adjusted their strategies, which I admire. But choosing to do something doesn't mean you can execute it well.
One of our investors put it very well: if you look back at all the AI features you use every day, how many were developed by large companies, and how many were made by startups? You'd find a surprisingly high proportion come from startups. Even though large companies pour in massive amounts of money, many breakthroughs still originate from startups. So, sometimes startups are like the "R&D department" for large companies; once something new is validated, the large company integrates it into its own system. And truly era-defining companies are those that see the trend early on and can leverage it to create massive impact.
Patrick: If I were to put on your "super dreamer" hat, completely disregarding practical feasibility, what "thought tools" do you dream of appearing five or ten years from now? We started this conversation with that topic.
Chris: I hope tools can make us "more human" and become "better people." That is, they should unleash our creativity and judgment, allowing us to do the beautiful things that only humans can do. Builders of AI tools must be very conscious of this, because the line is subtle—you can outsource all repetitive, boring tasks, but you really cannot outsource "judgment." For example, if you ask AI to generate a hundred ideas for you, and you choose from them, that's fine. But the danger is: when everyone does this, we are only looking at the options provided by AI.
This is just one example, but it permeates all areas. For instance, the idea that "writing is thinking." If you let AI write for you, behind that seemingly "mechanical" writing, there's actually a lot of your thought hidden. If you outsource too much, you'll lose that opportunity for thinking.
So the tools I dream of are ones that can break down information silos. Currently, inspiration, knowledge, and data sources are fragmented, and when I think about a problem, I often only see one piece. The tool I want is one that can extract the most relevant insights from my personal background, combine them with all existing human knowledge, and present them to me dynamically and in real time, in a way I can understand. I don't know what this tool would look like, but I've seen a cool demo. A friend of mine built a system similar to Midjourney: a microphone listens, and at 5-8 frames per second it projects images related to what you're saying in real time, deliberately a bit tangential to spark associations. He used it at Burning Man. But you can imagine its application in a work setting: helping you "think while speaking" while bringing in material you wouldn't have thought of.
But such tools are extremely difficult to make "helpful without being distracting." Sci-fi ideas often sound great, but many fail in reality on the details. For example, "real-time note-taking" sounds good but actually distracts people. So whether many of these tools are good or not is actually determined by the "human user experience," not the technology itself. I could talk about this topic for hours. I truly feel this is an exciting time, and being able to personally create these tools is a real blessing.
Patrick: The investor Micky Malka has an art installation that is exactly what you just described: in a meeting, as you speak, relevant images are projected onto the wall in real time. It's distracting, but it's a "beautiful immersive experience." In the last six months I've also seen astonishing technological progress. But the question is: what are these models "not doing well"? Everyone says models are good and will get even better. But are there things they've consistently done poorly, things that "haven't improved across generations"?
Chris: I think it's important to distinguish between "today's reality" and "future persistent limitations." What surprises me most right now is—models have almost no "personalization." You ask a question, I ask the same question, and the output is almost identical. After all these years, that's still quite surprising. We did one small thing at Granola that users really like. For example, if you and I attend the same meeting, your Granola notes will be completely different from mine. That's because we customize the notes based on what the user cares about most in that meeting. But many models still handle "personalization" very poorly, which surprises me.
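The per-attendee personalization Chris describes (same meeting, different notes for each participant) can be sketched as conditioning the summarization prompt on each user's own jotted notes. The template below is hypothetical, not Granola's actual prompt.

```python
# Hypothetical sketch of per-user note personalization: two attendees of
# the same meeting get different summaries because the prompt sent to the
# model is conditioned on what each of them jotted down during it.

def build_summary_prompt(transcript: str, user_notes: list[str]) -> str:
    """Build a summarization prompt weighted toward one attendee's notes."""
    focus = "\n".join(f"- {note}" for note in user_notes)
    return (
        "Summarize the meeting transcript below. Expand on the points this "
        "attendee jotted down, and weight the summary toward the topics "
        "they cared about:\n"
        f"{focus}\n\n"
        f"Transcript:\n{transcript}"
    )

# Two attendees, same transcript, different prompts (and thus different notes):
prompt_a = build_summary_prompt("(transcript)", ["pricing concerns"])
prompt_b = build_summary_prompt("(transcript)", ["follow up on hiring"])
```

The transcript is identical in both calls; only the conditioning differs, which is one simple way to get the "your notes look nothing like mine" behavior from a model with no built-in personalization.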
Patrick: You've raised capital from many excellent investors and interacted with countless others. What advice do you have for VCs who are currently focused on AI investments? What's the most valuable way to collaborate? You can also draw on your experiences with the "best" and "worst" investors you've encountered to tell us what to do and what to avoid.
Chris: I'm not an investor, so I can't really "give advice." I can only share what approaches are attractive to me. As I said earlier—when you're building a feature, you must be clear whether you're in "exploration mode" or "execution mode." AI as a whole is in "exploration mode"; no one knows what the right direction is. Foundational models might have entered the "execution phase," but the application layer is still entirely in the exploration phase. In the exploration phase, you need to have a certain "product intuition," deeply thinking about what makes a good product and what benefits people. But not all investors can do this. What impresses me most are those who send cold emails but write with extremely specific insights—they might point out something we did right or wrong regarding a particular Granola feature or user behavior.
You can tell immediately: this person has thought about it. I take such people seriously. Because when a field is hot and information is overwhelming, filtering signals is extremely difficult. My inbox is almost overflowing now. And when I look for investors, I'm actually looking for a long-term partner—we need to share a common worldview and problem-solving approach. The specific execution details will change, but the core philosophy must be consistent. This might sound ordinary, but I've truly found this in my current investors. They are all people with strong product thinking and can engage with you on multiple levels—that's incredibly valuable.
Patrick: If I forced you to do a different project in this field—Granola ceases to exist, and you can't do Granola 2.0—what would be your first instinct to build? Let's talk in "exploration mode."
Chris: Before I started Granola, I was thinking about what I should do next. My previous company was an AI education app called Socratic, and everyone said, "Why don't you continue in the education direction?" But I had many reasons for not wanting to do another education company, or another AI product in the education space. Recently, though, I played around with GPT-4's voice mode, the one with the "Scarlett Johansson-like voice."
Patrick: Hmm, I know that one.
Chris: I had my kids play with it; you can even turn on the camera. They actually played hide-and-seek with ChatGPT, which was pretty wild. My two kids, one 5 and one 7, would hide behind the table and then peek out, and the AI would say, "Oh, I see you!" This isn't a feature like "tutoring you to get good grades," but an unexpected human-computer interaction. I've never seen kids interact with technology like that. I don't know what the product would specifically look like yet, but there's definitely "opportunity" there. And I think interaction design is very crucial in such an experience.
Patrick: Where are the difficulties in the education sector? What did you learn from building Socratic? What warnings or encouragements do you have for founders entering this field?
Chris: The "holy grail" of education technology is true 1-on-1 tutoring. Many studies show that with a dedicated tutor, an average student can perform at the level of the top 5%–10% of their class. History tells a similar story: many historical figures had personal tutors. Alexander the Great was taught by Aristotle, so of course he had a head start. So everyone dreams of a world where everyone has a 1-on-1 mentor, ideally free and open-source so others can build on it and society as a whole benefits. But I don't want to do a commercial project in this area, because the profit motive and the social ideal don't perfectly align. And, as you asked earlier, this field is easily swallowed by general AI assistants like ChatGPT. I think most education apps will eventually be replaced by them.
Patrick: Do you think a successful AI tool "must" have a data advantage? For example, a unique data source, or personalized data that accumulates through users like yours? Is it possible for an AI application with almost no data to succeed, or is data absolutely key to sustainable development and competitive advantage?
Chris: You don't need much data now. Acquiring small amounts of data is no longer expensive. In the classic machine learning era, you needed millions of data points; now, sometimes 50,000 is enough. The question I'd ask is which data is truly unobtainable. Because almost anyone can build an application now, I think this ability will soon become widespread, and the real question is what impact that will have on the world.
I'm thinking of some historical analogies, like photography: initially, only a few people could take photos; later, equipment became cheaper but still required specialized learning; until now, everyone has a camera in their phone, everyone is a photographer. But in this context, "taste" has become even more scarce. If you can stand out in a vast amount of content, you might be more valuable than ever. I'm not sure if the software field will be the same, but I'm observing.
Patrick: I think this is like music. In the future, people are unlikely to all have their own completely private "personalized music," because music also carries shared experience and social identity. It's like the wine-tasting experiment: if you know a wine is expensive, it tastes better; if you know a song is popular, you'll like it more. Software might be similar. I don't think everyone will build their own applications in the future, just as not everyone is an entrepreneur now, even though Stripe Atlas and cloud services have made starting a business extremely easy. Entrepreneurship is still not most people's choice.
So I think the future will often be like the past. We just have new tools. Everyone has different tendencies, but I'm excited to create new things with these tools. People like you are truly driving the realization of these possibilities.
I also want to ask another question; we talked about the "small team" idea earlier. Do you think there will be companies in the future with only 20 people but a market capitalization of 10 billion dollars? Or will Granola become a company with 1,000 employees?
Chris: I think so. Take us, for example: we just hired our first customer experience person. We get a lot of user emails, and we interviewed many candidates. I'm convinced that in a few years, people will evaluate companies by this standard: "Was their customer experience team built before 2025, or after?" Customer support teams built after 2025 will have fewer people, and the tools they use will be completely different. Legacy organizations, by contrast, are hard to restructure. Granola's ambitions are big; we'll need more people in the future. But the "big companies" of the future might look completely different from today's. We read about enterprises with thousands or tens of thousands of employees; that's the old paradigm. By that measure, Granola might always seem "very small."