January 3, 2024

2023 AI Wrapped: The "so whats?" behind this year’s top AI moments

Sandhya Hegde
Editor's note: 

SFG 36: The "so whats?" behind this year’s top AI moments

In this episode of the Startup Field Guide podcast, Sandhya Hegde and Wei Lien Dang chat about everything that happened in the world of AI startups in 2023. 2023 was very much an Act 2 for generative AI, as GPT became a household name. We had a long Act 1 during Covid, ending with some incredible launches in the second half of 2022, including Stable Diffusion and of course ChatGPT. We take a closer look at what the pivotal moments have been this year and, more importantly, what they mean for the startup ecosystem.

Be sure to check out more Startup Field Guide Podcast episodes on Spotify, Apple, and YouTube. Hosted by Unusual Ventures General Partner Sandhya Hegde (former EVP at Amplitude), the SFG podcast uncovers how the top unicorn founders of today really found product-market fit.

Episode transcript

Sandhya Hegde

This podcast is all about AI, right? That has just been our focus area for so long now. It feels like it's almost been a decade, but the reality is it's only been a few years. And I think generative AI in particular had a very long act one, where, when the transformer tech first came out, it was only a few people in the ML community who were excited about it, who saw some of the opportunities. And it was actually during COVID, quite under the radar, that GPT-3 launched and people started to work with it and build with it.

I feel like it only really came into full public consciousness, even in Silicon Valley, where we like to believe we're all ahead on things, at the end of last year, in 2022, right? The second half, with Stable Diffusion launching, with obviously ChatGPT launching. So it was this kind of incredible end to a very under-the-radar, quiet Act 1.

And 2023 has been this really intense second act with so many breakthroughs, ups and downs. It's been such an exciting year that I thought we really should recap it. What we're going to do this episode is take each month of 2023 and pick what we think are the most pivotal breakthroughs that happened, the ones that are going to have a long-term, lasting effect on the technology industry and on the world in general.

And what our take is on what's clear and what's still not clear, what's important, et cetera. Why don't I kick things off, Wei? January 2023: ChatGPT is two months old and has already crossed a hundred million users. It's become the accidental consumer company. What's your take on it?

Wei Lien Dang

I think it goes back to what you were saying, which is like the culmination of this long act one. And I think, what's so powerful is this combination of the fact that the models had finally gotten to a point where they were good enough in terms of performance to really be impactful across a number of different use cases.

In the whole MLOps era, it was around a specific set of use cases with models: recommendations, fraud, underwriting, things like that. So you had this expansion in terms of what the models were capable of, but then you had this very accessible interface. We all analogize to, for instance, the browser for the internet, the iPhone for mobile. And that's what I think unlocked the power of something like ChatGPT: suddenly everyone knew about it. What's your take, Sandhya?

Sandhya Hegde

I think the immediate, obvious thing was just how powerful the conversational, human-sounding interface was. Everybody who used it attributed consciousness to this auto-completing LLM product because of the alignment, because of the conversational output this product had. And it was just so interesting to meet a lot of people outside the tech community in Silicon Valley who now knew what ChatGPT was, who had tried it, and had still not heard of the company OpenAI.

I think that for me was just such a perfect representation of the power of putting this kind of accessible, human-like interface on top of a technology that had been slowly maturing over the past few years. And then there were things that became obvious to me later. One: how do you think about go-to-market strategy when you have a very novel technology, a model that you own? It does things that are not reliable yet. It's not very predictable, it's not deterministic, but it's very novel and interesting. And there's the idea that you can build a consumer and an enterprise company in parallel when you have something unique like that, and one of those will help you learn how to do the other.

I think that has now become the playbook for so many other full-stack AI companies that own their own models, whether they started open source or not. It has become more of the norm, and there are so many interesting takeaways and questions there. I think the other thing that became obvious over time is that, because of that ChatGPT moment, the buyer for AI software changed.

It was no longer a mid-level manager. It was actually the CEO of every company asking, hey, how are we going to stay on top of this? What are we going to do? And we're still seeing the after-effects of that when it comes to go-to-market strategy for AI startups.

Wei Lien Dang

Yeah, it really spans top to bottom. It became a C-level, boardroom strategic discussion, but now it's also every rank-and-file developer who can potentially become an AI engineer, and that's how organizations are thinking about it. Okay. So to kick off the year, people are enamored with AI. There's all this attention on ChatGPT, and then you start to see some of the reactions, and arguably some of the controversy or backlash, around how this could potentially be used. And you start to see this spate of lawsuits against Stability, against OpenAI, and concerns about copyright violations and usage.

It's potentially the ecosystem's Napster moment, and it raises a lot of interesting questions. As you know, I have a ton of degrees, including a law degree. It really seems to hinge on this question: if you scrape the internet, if you take all this copyrighted data and use it to train a model, is that actually misappropriating copyrighted work? Or is there a notion of fair use, where, in constructing these model outputs, these responses, these companies should be able to utilize that data to answer a question, to give a user a new form of search capability, and things like that?

I know you've been tracking it. What's your take, and what do you think the implications are around some of these controversial topics?

Sandhya Hegde

Yeah, I think there is a real chance... well, first of all, let me take a step back. As both investors and especially as startup founders, our tendency is to assume that the legal risk is low, right? Because we are optimists and we believe in the disruptive power of a good idea. We look at how Uber broke the rules but still spread through the ecosystem, even though they were technically violating how the taxi system was supposed to work. So because of that history, we have a tendency to be a little too optimistic and assume that these legal things will get figured out, that we can just go build on top of it.

And I think that sets us up for a shaky foundation. So there's both opportunity and threat in this. The threat is that you're building on top of something that could have a lot of legal issues. And you already see it: Microsoft has already said, for Copilot, we will take the legal risk. They're trying to do everything they can to make it too big to fail, and I really admire that strategy.

It reminds me a lot of Uber. But then I think the opportunity is to figure out what the Spotify version of Napster is going to be, right? And there's a buyer for that. There are companies that really specifically care about this. There are creators and artists and celebrities who see this as an opportunity to legally take advantage of the brand they have and the content they have created, and to monetize it through this new kind of channel that people can leverage to engage with them. So, both opportunity and threat, but it's going to take a while for all these stakeholders to get together and align on how a new system could work.

I think the good news is that as the cost of training models comes further and further down, it's not quite as terrifying to think about, okay, what if we trained something from scratch with the right data sets? There is a future we could live in where it's much more achievable to say, okay, we have figured out how to train models way more effectively.

So we can go do something that doesn't have all these issues anymore. I think that is not out of the question, and I'm excited about that future.

Wei Lien Dang

Yeah, to me, I think the risk of the whole legal aspect is that you don't want to stifle innovation and the delivery of a better product for everyone involved. Ultimately, the models are only as good as the data they're trained on, and some of these creative works are exactly the super-high-quality, long-form content that you want in a data set. And it does feel like the business incentives are powerful enough for all the parties to come to some type of agreement. The question is just how quickly that plays out, and how it plays out exactly.

But I actually think it's in everyone's interest to figure out a solution.

Sandhya Hegde

Yeah, and I think even for founders there's an opportunity to really work on defining lineage, right? I almost think of it as Lineage as a Service: how do you help people figure out who gets credit for the training data that went into creating something? That presents a whole new opportunity, a whole new way for people to monetize their original creativity, and it might be an exciting future for humans to live in.

That now brings us to March. For me, especially as someone who's working with a lot of applied AI startups, March was the month of the GPT-4 launch. It was so powerful. The fact that it was multi-modal, I think, was a big sign of things to come and of how to think about AI development. The fact that you have this one multi-modal API really opened a lot of people's minds on what's possible and what it means to be an AI engineer. But there was also this instant realization that the velocity with which you can build an AI application has gone from many years to many months to, now, two weeks. And I think March is when everyone started using the phrase "GPT wrapper"; it entered the public consciousness. Are you a real, defensible company or are you just a wrapper on top of GPT? And to some extent, maybe we've taken that too far.

At the end of the day, you want to build a product that people love, but the defensibility questions started getting asked when people saw how quickly a good engineer could build a solid application on top of GPT-4. And there were small signs of other things to come that I would love your take on. I remember there was this crazy Will Smith eating spaghetti video. It felt like everyone was focused on LLMs, but there was this small sign that text-to-video was also seeing a breakthrough. And then there was AutoGPT. What was your take on AutoGPT?

Wei Lien Dang

Yeah, you also had the song with AI-enabled Drake and the Weeknd, so I think people did start to see the possibilities of these different modalities, Sandhya. And then you had this framework, AutoGPT, which gave people a glimpse of an agentic AI future where you could build all these different agents and have them start to enable different tasks and workflows.

You saw people coming up with use cases like, I'm going to order things on DoorDash, and so on. And as people started to build on it, it was like, hey, this isn't reliable yet, this is still super early, but you could see the potential. It's actually one of the areas I'm really excited about in terms of how that approach plays out. A lot of people thinking about applied AI and different use cases, whether personal or business workflows, see the possibility of how agents can be used.

And I do think giving developers easier tools to actually build these things is a good thing. There are other competing frameworks that have come out to challenge AutoGPT, but I think the net of it is, hey, this is still super early. For me it raises a question, which is not specific to agents: what's the killer use case? I think people are still trying to think through that. It's cool if you can show a POC of click, order some food on DoorDash for delivery, versus a really meaningful, repeatable workflow or use case that people have where agents can be utilized.

Sandhya Hegde

Yeah. I think the takeaway was just how excited people were about the possibilities, right? As soon as we start seeing the first few signs of what's truly, reliably deliverable in the enterprise, this is something a lot of developers will want to work on.

The fact that it's the most starred project ever is just so fascinating. So that brings us to April. I can't believe we're only three months in. What was your take on what happened in April and May?

Wei Lien Dang

Yeah, one of the big things around those months, Sandhya, is that, as you highlighted, OpenAI became this consumer company with ChatGPT. But then businesses and enterprises really started to think about what they could do with AI. And you had some of these large infrastructure providers move into the space.

You had Databricks launch Dolly. You had Microsoft announcing Azure AI, Google with Vertex AI. Obviously, all of this accrued to NVIDIA. But to me, the signal was that you had incumbent infrastructure providers who were not going to sit idle. And it was not just about the model.

Databricks with Dolly showed how you could very cost-effectively release a new model; the cost of training was coming down. They acquired MosaicML. And to me that signaled, hey, it's a lot more than just the model. When you're serving businesses, and enterprises in particular, they need an on-ramp, and they don't necessarily want to start greenfield.

They're going to work with their existing vendors and infra providers to actually build. What was your take? Those are some of the highlights for me across April and May.

Sandhya Hegde

The conversation in our circles quickly shifted to, okay, what is the incumbent strategy going to be, right? As VCs, we're very used to saying, yeah, sure, Google could build that, but here's why the startups will win. We're not afraid of taking on the larger companies. "They are focused elsewhere" was always the important phrase, right? They are focused on other things; the startups have an opportunity to capture this new wave that the bigger companies are not focused on. And I think that word, focus, is so important, because the big takeaway with Databricks, Microsoft, and Google was that whether they are high-growth enterprise software infrastructure companies, whether they are already public, whether they are the biggest companies in the world, this is what they are all focused on. This is the future they are building towards. They're not going to just ignore it and let it pass them by. They are on it. They're in fact reorganizing their companies around it, right from the CEO down to what type of developers they are trying to hire and retain.

That became very clear. I would say that was one. And hence a good question for us is, of course, where do startups have an actual competitive advantage in a space where the biggest companies in the world are also reorganizing around this? I think the second thing was that the conversation around cost became much more clear and practical, whether that's cost of inference, cost of training, cost of running. The cost conversation suddenly became front and center, because everyone just accepted that the adoption is going to be there, right? The fact that in January we crossed 100 million users, even though the enterprise use cases aren't all figured out, shows the world is waiting for this tech to work. If it works, people will be there, they will buy it, they will spend their dollars on it.

That de-risked the entire market for these big companies, right? Now they no longer have to be cautious. They get to take all of their resources and say, okay, what's going to be highest ROI? Which means you need to figure out how to be super cost-efficient as well. You can't just throw money at this; you need to figure out the ROI model and what the AI business is actually going to be. And in some ways that brings me to June, because I remember, for me, June was all about the H100 shortage. Most of our startup ecosystem, whether at our Cerebral Valley events, lunches, or dinners, was talking about it.

We were like, oh, NVIDIA just became a trillion-dollar business, so they are so far the one true winner, maybe other than OpenAI. And two, where are the H100s? If you're actually trying to train your own model right now, how do you get GPU access without having enough money to make a two-year commitment to some big cloud provider? That became a huge challenge for startups. We saw friends in the VC ecosystem starting to think about their GPU strategy, right? Literally buying GPUs and trying to make that their competitive advantage in terms of why founders should choose them. What was your take on that period back in '23?

Wei Lien Dang

Oh, yeah. I think the anxiety for startup founders around GPU access, around actually getting their hands on GPUs, is very real and continues to be. We could debate what the supply and demand curves look like and how long that continues.

But I think it's very real. And part of that goes back to the value prop of these larger infra providers: they're better situated to address that, for both smaller and larger companies. The interesting thing is how quickly those folks have moved into AI and made it a center of focus across the entire organization. You can see it reflected in the share prices, the daily stock movement, and so on for all these companies involved. Maybe one thing that caught me a little bit by surprise is the whole rise of these GPU clouds, and for some of them, frankly, they're repurposed Bitcoin, blockchain, Web3 companies who had an existing supply of GPUs on hand.

But I do think that there's a lot of market pressure, and while NVIDIA is still in the lead, it'll be interesting to see how that's addressed over time. You definitely have a lot of folks, even startups, thinking about how to build new chips. You have some of the larger providers who have done that or are thinking about it. So we'll see how that plays out, but it's definitely something we're paying attention to. The other thing we're paying attention to: at the beginning of 2023 there was a lot of emphasis on models and what they're capable of, especially GPT-3, GPT-3.5, and GPT-4, but then you had the emergence of something like Llama 2 in July. And, as I've talked a lot about, there's the rise of open source and how quickly that's happened in terms of enabling meaningful, viable challengers to the proprietary approaches.

I think there's a bit of a question, though, in terms of, so what? What is someone going to take Llama 2 and do with it? Do people really want to fine-tune their own models? Are they going to want to run and own their own models, and in what form factor would they want to do that? Are they going to build ground-up and stand up all the infra? Do they want to consume Llama 2 via a service? I think all of that's still playing out. Curious what you make of something like Llama 2 and maybe what you're hearing from people about what it means for them.

Sandhya Hegde

Yeah, I think when I talk to people more on the application side, there was a lot of excitement around fine-tuning, right? For multiple reasons. One, people thought that would make the model output a lot better. They just assumed: as soon as I say the words fine-tuning, I can assume that will make the model better.

Obviously the fine-tuned model has to be better than the non-fine-tuned model, right? That was almost an assumption. People were very excited. I would say it didn't quite play out; there's a lot of nuance there. I know founders who actually started thinking about where fine-tuning can make a difference. Where is it actually worth it, and hence where should you go fine-tune Llama 2, versus saying, no, for the thing we want to do really well, maybe GPT-4, or maybe GPT-3.5 because it's faster and cheaper, is the right choice? I think this is when I saw strong engineering teams start being very result-oriented for the first time and saying, we are going to explore a few different models. We'll try Claude. We'll try GPT-4. We'll try fine-tuned GPT-3.5. We'll try Llama 2. We'll try a combination of these things and really evaluate the model output.
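The result-oriented "bake-off" workflow described here can be sketched very simply. This is a hypothetical harness, not any specific team's setup: `generate` is a placeholder for real provider SDK calls (OpenAI, Anthropic, a hosted Llama 2 endpoint, etc.), the model names are illustrative, and the scorer is a toy substring check where a real eval would use task-specific graders.

```python
# Minimal sketch of comparing several candidate models on a shared eval set.

def generate(model_name: str, prompt: str) -> str:
    # Placeholder responses; a real harness would call each provider's API here.
    canned = {
        "gpt-4": "The answer is 4.",
        "claude": "4",
        "llama-2-70b-ft": "four",
    }
    return canned[model_name]

def score(output: str, accepted: set[str]) -> bool:
    # Toy scorer: correct if any accepted form appears in the output.
    out = output.strip().lower()
    return any(a in out for a in accepted)

def run_eval(models: list[str], cases: list[tuple[str, set[str]]]) -> dict[str, float]:
    # Fraction of eval cases each model answers acceptably.
    return {
        m: sum(score(generate(m, prompt), accepted) for prompt, accepted in cases) / len(cases)
        for m in models
    }

cases = [("What is 2 + 2?", {"4", "four"})]
print(run_eval(["gpt-4", "claude", "llama-2-70b-ft"], cases))
```

The point of the sketch is the shape, not the scorer: the same prompts run against every candidate, and the decision falls out of measured results rather than an assumption that fine-tuning always wins.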

I think July and August were probably when that started becoming a good conversation to have, and more people brought up fine-tuning than would actually get true value out of it. They also realized that when you fine-tune a model, you lose some capabilities: there's drift in alignment, there's drift in reasoning. You can't just assume it will get better. I would say the only exception is probably on the diffusion side, where fine-tuned models can be incredible. The models are small, you can fine-tune them with very little training data, and suddenly the output is magical in one specific direction. So, very different reactions to fine-tuning in the diffusion world versus LLMs.

And speaking of diffusion and moving to August: I think for me that was the moment when I saw more founders start paying attention to diffusion. With ChatGPT and AutoGPT, there were just so many people hyper-focused on LLMs, and a lot less attention being paid to other modalities. There were fewer founders competing in speech and, outside of Midjourney, in image creation and video, and all of that started becoming more front and center in August. Midjourney crossed 15 million users. They were a highly profitable, bootstrapped company.

And at the same time, the United States courts said, okay, copyright is not valid for AI-generated art. So there's this moment happening where people are trying to figure out, okay, how do we reliably use AI-generated art? I think for me that was really what August was all about. I'm curious, what's your takeaway? Have you played around with some of these voice generation or image diffusion tools at all?

Wei Lien Dang

Yeah, I think you're highlighting the importance of not bucketing all of these as the same thing. There's been a lot of focus on LLMs, but there's all this activity specific to diffusion models, and I think it's even earlier for other modalities, like speech or video.

And we're seeing exciting companies in those spaces. So my take is that there's commonality across all these different areas of foundation model development, but they're not created equal. With diffusion models, it was Midjourney, of course, but you also have Stable Diffusion, which is a more solid open source project with a community around it that's been around for longer, and you've seen how that has grown too. So one size does not fit all; you do have to look at the different areas of model development. And then, of course, as models like GPT-4 become multimodal, some of that will start to bleed over as well.

And then, in September, you had the GA launch of AWS Bedrock. Again, in terms of how the cloud providers and the large infra providers are approaching this, the interesting thing to me is that they've now partnered with a lot of the different model companies. They're partnered with Anthropic and Cohere, and they're also making Llama 2 from Meta available. The idea is that you could have a lot of choice if you're on one of those platforms. And I think that will put more pressure on infrastructure startups to think about how they can compete and find their swim lane, when there's so much that's available from the cloud providers, and increasingly will be.

Sandhya Hegde

Yeah, I think there's both threat and opportunity again. The threat is being commoditized, right? Your end customer really thinks of their vendor as AWS. AWS gives them access to a plethora of APIs and will maybe even route to the right model for the right use case. And you, as the end customer, don't have a direct relationship with the owner of the model.

I think that's definitely a threat for startups, and a race to the bottom on pricing and cost. But the opportunity is that by September, everybody's talking about governance and safety and responsibility. Those layers are all incredibly important, extremely fuzzy, and not clearly defined yet. Are those opportunities for innovation where you can differentiate, right?

Anthropic calls it constitutional AI. If your constitution requires these things, you can't just work with anybody through AWS. You need to pick your model owner, or maybe you need to work with startups that specialize in providing those services. So there's definitely some opportunity emerging there as well.

Wei Lien Dang

I think there's also this geopolitical dimension. I find companies like Mistral pretty interesting; it's the OpenAI for France, or for Europe. There's an OpenAI for Japan. That provides some measure, I think, of strategic defensibility.

But if you're at the infra layer, I agree with you: the supporting, surrounding, arguably higher-level services may be a better place to go compete.

Sandhya Hegde

Yeah. And you brought up open source. I think it was just October that the US government started asking questions about how safe it is to publish model weights, and very successfully freaked out the entire AI open source community. And I'm definitely team open source.

I think the idea that these things can even truly be kept secret is itself a big question. If you try to make it secretive, all you will have is more corporate espionage. That's my take on trying to keep everything secret; I think having sunlight on these things is actually more valuable. I don't think the failure mode here is a conscious AGI that tries to kill humanity. I think the failure mode is failing to leverage this technology to democratize access to it.

I think October was also really interesting because everyone had started asking the question, where is the revenue, right? Yes, NVIDIA crossed a trillion dollars, great for them, but where are the other companies with real revenue, especially for use cases that seem to have longevity? So it was really nice to see GitHub Copilot crossing a hundred million in ARR, with over a million developers actually paying to use it for better productivity.

I'm also fascinated by the whole AI companion trend. I think the category leader there is definitely Character AI. They crossed 30 million monthly active users, also in October, which is crazy. You can literally do a group chat with, I don't know, Einstein and Jesus, and have a conversation. I don't think that's the most popular use case, but it was fascinating to see, if you think about the technology adoption cycle, that we have something we haven't figured out how to regulate at all that is already being used by hundreds of millions of people. I think that just becomes more and more stark over time.

And obviously that brings us to November. I have so many thoughts and feelings about the whole OpenAI leadership episode. We won't rehash it; it's been discussed very widely in the media. I think the thing that stood out to me is just how fast it happened. If you think about this happening in the days of Apple, it used to take a year or more for these kinds of leadership changes to play out. It feels like everything else in the technology hype cycle, where you grow really fast and die equally fast. The fact that this change happened over a single weekend, someone being fired, an entire board recomposing itself, and Sam Altman rejoining the company, is to me the most fascinating example of how we have sped things up in the software life cycle. And there are a lot of things we used to take for granted.

The idea that if something has millions of active users, of course it's going to have a stable future, is no longer something anyone should take for granted, right? There's a lot of curiosity and a lot of unknowns, and I think this was a perfect epitome of it. I really want to see what Project Q Star does. And, can I automate my job and stay home? I would be remiss not to call out that this was also the month when NVIDIA announced the H200. Between the H200 and Project Q Star, there's this sense of, oh, we are already waiting to see what's going to come in 2024.

These are all small previews. What was your take on the OpenAI leadership episode?

Wei Lien Dang

I was just glued to my phone, Sandhya, and I think we were all trying to interpret the trickle of information as it came out. I would actually say you pegged it pretty spot-on in terms of guessing what transpired. But to your point, the whole ecosystem is in move-fast-and-break-things mode. And then the question is, how do you balance governance, whether it's in the context of the OpenAI board or in the context of the government getting involved? And on that, I'm similar to you: team open source, a big proponent of open source.

And I think what's confusing is that it's all playing out with a lack of consistency. What the White House and the U.S. put out is different from the E.U. and its AI Act, which is different from the U.K. But the thing that really raised some amount of alarm, at least with the executive order, is this notion of a dual-use foundation model and the risk that you disadvantage smaller and younger companies who are being really innovative.

As well as this vibrant open source community that you have. And I think there's a risk that over-regulation plays into the hands of the OpenAIs, the large model companies, and the large tech companies. We need to guard against that. There's so much happening on the open source side, just in the span of the last year, and that pace should not slow down.

In fact, I think people should invest more into it just because it enables such greater transparency and accountability, which are at the root of the governance that we want. So that's my take, but yeah, it was definitely a popcorn-filled weekend in terms of paying attention to what happened with OpenAI. And now we're in December and you have none other than Google launching Gemini and taking OpenAI head on. Maybe this was somewhat obvious, but it's too big a market for OpenAI to be the sole winner, and people are going to put up legitimate challenges to it.

To me, I think it depends on what they do with Gemini. If you look at the track record of folks like Google Cloud, I would say on the whole, people have felt like it's underwhelmed or underperformed relative to its potential. So can Google turn that around? They've been reorienting around DeepMind and so on. But I'm curious, in the near term, how do you see Act 2 playing out and ending?

Sandhya Hegde

I think Act 2 is actually going to be as long as Act 1, right? We are still in the early, early acts of this play, so to speak. It's going to be an amazing decade for us and the tech ecosystem. For founders, for VCs, for developers, it's going to be an incredible decade with so many new things being created for everybody to have more joy in their personal and work lives.

So I think there's going to be a combination of things, right? I think there'll be some breakthrough around how to legally build models with copyrighted training data. I think there'll be more clarity there. There are so many bright minds working on figuring out how we do this well, on the image side, the video side, as well as the text side. I think we will see something there that will make it easier for big companies to not worry about copyright. I think we'll see a few of the cases in court play out, so I'm excited for that. Then there are the new modality breakthroughs that haven't had as much focus yet, whether it's video or stuff like biochemistry. I think there's still room for the transformer tech to push new breakthroughs there, and more people will focus on those modalities now that the cost of working in them is coming down dramatically.

But on the more pessimistic side, I would be shocked if we don't see some high-profile failures. There are way too many dollars and way too much hype that have gone into AI, and I think next year it will start becoming clear what really had no shot of working. There will be companies that maybe should be returning money or trying to get acquired, and you'll see the big cloud providers acquire a bunch of companies just for the technical talent and maybe a bit of the technology itself, but most definitely not for revenue. You'll start seeing a bit of that and, by the way, nobody will be surprised, right? We all look at the dollars going into these companies and the valuations and say, okay, there's no world in which all of this makes sense. I think it will also become more obvious what the limitations of the transformer architecture are. We've had all these breakthroughs by just throwing more and more data at it, and it's seeing decreasing returns at this point, unless Project Q Star really surprises us all. So those limitations will become more clear as people do more research on where to push the frontier. And hopefully there'll be more conversation around the stuff that really affects the world long term: cost and energy usage and safety. I hope we can have good conversations about those things without people getting too worked up and radicalized around, oh, you are an AI doomer versus an accelerationist.

I find that conversation unhelpful. I think we need to all put our heads together to say, what's the future we want to live in, and have good conversations about things like safety and energy and the changes in the job market, and how do we create structure for all of that. That's very much what I'm looking forward to in terms of the second act. Of course, there are the big unknowns for which almost nobody has answers: what are going to be some of the big long-term repercussions? I'm curious, do you have some longer-range predictions for us?

Wei Lien Dang

I think my take is a few things. At least on infrastructure, I think infrastructure always evolves and gets shaped in the service of the killer use cases. To me, that's actually applied AI. If you look at all the interesting things, whether it's AI-native companies or incumbents augmenting with AI, right now you have very general-purpose AI infra, and that's going to change.

Whether it's, for instance, people wanting more domain-specific models, or deciding to train or fine-tune their own, I expect a shift in that direction. On the model side, especially the open source side, you'll continue to have this long tail of models, but I think only a few will actually really matter.

I think things like evaluation, and tooling that helps people utilize more than one model, probably driven by a specific task, will become more important, alongside other services like security, safety, and alignment. If you think about a mature software development life cycle or pipeline today, it involves a couple of key functions to actually ship an application into production.

We haven't seen that yet, at least not for most of the market or companies out there building around AI, and I think you'll start to see that tooling come into play. So I'm super excited. I think it's going to be a really fun-filled year with some surprises in store as well. But Sandhya, how do you see things playing out long term? As venture investors, especially seed-stage investors, we do our best to see the longer-term opportunities. How do you see that playing out?

Sandhya Hegde

Yeah, I've asked myself a lot, how do market caps shift? The very short, high-level take for me is there's going to be a big shift from services to software. For a lot of the generative AI applications, the best way for them to be effective is to automate or accelerate repetitive grunt work, as opposed to replacing original creativity, and that means there's going to be a lot of spend shifting from services to software.

So overall, the market cap of software will grow along one vector of total increase: a shift from services spend to software spend. And it will be more efficient, right? There will be a change in the margin structure of that particular value. The market share of the GPU as a tool in society is going to go up, that's very obvious. But then there are data platforms. People who are in databases, whether that's data warehouses or data services, that space has such obvious untapped potential. None of us would have said that two years ago; it felt like that was already a very mature market. But now we realize that irrespective of whether you own a foundation model or rent a foundation model, the data platform, and how you train and fine-tune and align that model, is all about data, right? And then, how do you capture all the feedback and keep improving it? So data platforms are going to be an even larger industry, a larger total market cap than we would have predicted a few years ago. And they were already a high-growth category. So those are some of the things that really come to mind.

And, of course, the big question for me as a SaaS investor is, where is there opportunity to take market share from the massive incumbents, whether that's a Salesforce or an Adobe? What is the strategy? Because they are not sleeping on it either. So what is the strategy that helps you really go attack some of these companies that have just seemed, you know, infallible over the past decade? They are such massive businesses with great leadership. Yeah, I think that's really the big question.


And 2023 has been this really intense second act with so many breakthroughs, ups and downs. It's been such an exciting year that I thought we really should recap it. What we're going to do this episode is take each month of 2023 and pick what we think are the most pivotal breakthroughs, the ones that are going to have a long-term, lasting effect on the technology industry and on the world in general.

And we'll share our take on what's clear and what's still not clear, what's important, et cetera. Why don't I kick things off, Wei? January 2023: ChatGPT is two months old and has already crossed a hundred million users. It's become the accidental consumer company. What's your take on it?

Wei Lien Dang

I think it goes back to what you were saying, which is like the culmination of this long act one. And I think, what's so powerful is this combination of the fact that the models had finally gotten to a point where they were good enough in terms of performance to really be impactful across a number of different use cases.

In the whole MLOps era, it was around a specific set of use cases with models: recommendations, fraud, underwriting, things like that. So you had this expansion in terms of what the models were capable of, but then you had this very accessible interface. We all analogize it to, for instance, the browser for the internet, or the iPhone for mobile. And that's what I think unlocked the power of things like ChatGPT: suddenly everyone knew about it. What's your take, Sandhya?

Sandhya Hegde

I think the immediate, obvious thing was just how powerful the conversational, human-sounding interface was. Everybody who used it attributed consciousness to this auto-completing LLM product because of the alignment, because of the conversational output this product had. And it was just so interesting to meet a lot of people outside the tech community in Silicon Valley who now knew what ChatGPT was, who had tried it, and had still not heard of the company OpenAI.

For me, that was just such a perfect representation of the power of putting this kind of accessible, human-like interface on top of a technology that had been slowly maturing over the past few years. The things that became obvious to me later were, one, how do you think about go-to-market strategy when you have a very novel technology and a model that you own? It does things that are not reliable yet. It's not very predictable, it's not deterministic, but it's very novel and interesting. And there's the idea that you can build a consumer and an enterprise company in parallel when you have something unique like that, and one of those things will help you learn how to do the other.

I think that playbook has now been adopted by so many other full-stack AI companies that own their own models, whether they started open source or not. It has become more of the norm, and there are so many interesting takeaways and questions there. The other thing that became obvious over time is that because of that ChatGPT moment, the buyer for AI software changed.

It was no longer a mid-level manager. It was actually the CEO of every company asking, hey, how are we going to stay on top of this? What are we going to do? And we're still seeing the after-effects of that when it comes to go-to-market strategy for AI startups.

Wei Lien Dang

Yeah, it really spans top to bottom. It became a C-level boardroom strategic discussion, but now every rank-and-file developer can potentially become an AI engineer, and that's how organizations are thinking about it. Okay, so to kick off the year, people are enamored with AI, there's all this attention on ChatGPT, and then you start to see some of the reactions and, arguably, some of the controversy or backlash around how this could potentially be used. You start to see this spate of lawsuits against Stability, against OpenAI, concerns about copyright violations and usage.

It's potentially the ecosystem's Napster moment, and it raises a lot of interesting questions. As you know, I have a ton of degrees, including a law degree. It really seems to hinge on this question: if you scrape the internet, if you look at all this copyrighted data, if you use it to train a model, is that actually misappropriating copyrighted work? Versus there's this notion of fair use, and in constructing these model outputs, these responses, these companies should be able to utilize that data to answer a question, to give a user a new form of search capability, and things like that.

I know you've been tracking it, what's your take and what do you think the implications are around some of these controversial topics?

Sandhya Hegde

Yeah. First of all, let me take a step back. I think as both investors and especially as startup founders, our tendency is to assume that the legal risk is low, right? Because we are optimists and we believe in the disruptive power of a good idea. We look at how Uber broke the rules but still spread through the ecosystem, even though it was technically violating how the taxi system was supposed to work. So because of that history, we have a tendency to be a little too optimistic and assume that these legal things will get figured out and we can just go build on top of it.

And I think that sets us up for a shaky foundation. So there's both opportunity and threat in this. The threat is that you're building on top of something that could have a lot of legal issues. And you already see that Microsoft has said, with Copilot, we will take the legal risk, and they're trying to do everything they can to make it too big to fail. I really admire that strategy.

It reminds me a lot of Uber. But then I think the opportunity is to figure out what the Spotify version of Napster is going to be, right? And there's a buyer for that. There are companies that really specifically care about this. There are creators and artists and celebrities who see this as an opportunity to legally take advantage of the brand they have and the content they have created, and monetize it through this new kind of channel that people can use to engage with them. So, both opportunity and threat, but it's going to take a while for all these stakeholders to get together and align on how a new system could work.

I think the good news is that as the cost of training models comes further and further down, it's not quite as terrifying to think about, okay, what if we trained something from scratch with the right data sets? There is a future we could live in where it's much more achievable to say, okay, we have figured out how to train models way more effectively, so we can go do something that doesn't have all these issues anymore. I think that is not out of the question, and I'm excited about that future.

Wei Lien Dang

Yeah, to me, the risk of the whole legal aspect is that you don't want to stifle innovation and the delivery of a better product for everyone involved. Ultimately, the models are only as good as the data they're trained on, and some of these creative works are exactly the super high-quality, long-form content that you want in a data set. It does feel like the business incentives are powerful enough for all the parties to come to some type of agreement. The question is just, how quickly does that play out? How does it play out exactly?

But I actually think it's in everyone's interest to figure out a solution.

Sandhya Hegde

Yeah, and I think even for founders, there's an opportunity to really work on defining lineage. I almost think of it as Lineage as a Service: how do you help people figure out who gets credit for the training data that went into creating something? That presents a whole new opportunity, a whole new way for people to monetize their original creativity, and it might be an exciting future for humans to live in.

That brings us to March. For me, especially as someone who's working with a lot of applied AI startups, March was the month of the GPT 4 launch. It was so powerful. The fact that it was multi-modal was a big sign of things to come and of how to think about AI development. The fact that you have this one multi-modal API really opened a lot of people's minds on what's possible and what it means to be an AI engineer. But there was also this instant realization that the velocity with which you can build an AI application had gone from many years, to many months, to now two weeks. And I think March is when everyone started using the word GPT wrapper; that entered the public consciousness. Are you a real, defensible company or are you just a wrapper on top of GPT? To some extent, maybe we've taken that too far.

At the end of the day, you want to build a product that people love, but the defensibility questions started getting asked when people saw how quickly a good engineer could build a solid application on top of GPT 4. And there were small signs of other things to come that I would love your take on. I remember there was this crazy Will Smith eating spaghetti video. It felt like everyone was focused on LLMs, but there was this small sign that text-to-video was also seeing a breakthrough. And then there was AutoGPT. What was your take on AutoGPT?

Wei Lien Dang

Yeah, you also had the song with the AI-enabled Drake and the Weeknd, so I think people did start to see the possibilities of these different modalities, Sandhya. And then you had this framework, AutoGPT, which gave people a glimpse of an agentic AI future where you could build all these different agents and have them start to enable different tasks and workflows.

Frankly, you saw people coming up with use cases like, I'm going to order things on DoorDash, and so on. And as people started to build on it, it was like, hey, this isn't reliable yet, this is still super early, but you could see the potential. It's actually one of the areas I'm really excited about, in terms of how that model and that approach play out. A lot of people thinking about applied AI and different use cases, whether it's personal or business workflows, see the possibility of how agents can be used.

And I do think giving developers easier tools for how to actually build these things is a good thing. There are other competing frameworks, too, that have come out to challenge AutoGPT, but I think the net of it is, hey, this is still super early. For me, it raises a question that is not specific to agents: what's the killer use case? I think people are still trying to think through that. It's cool if you can show a POC of click, order some food on DoorDash for delivery, versus, what's a really meaningful, repeatable workflow or use case that people have where agents can be utilized?

Sandhya Hegde

Yeah. I think the takeaway was just how excited people were about the possibilities. As soon as we start seeing the first few signs of what's truly, reliably deliverable and possible in the enterprise, this is just something a lot of developers will want to work on.

The fact that it's like the most-starred project ever is just so fascinating. So that brings us to April. I can't believe we're only three months in. What was your take on what happened in April and May?

Wei Lien Dang

Yeah, one of the big things around those months, Sandhya, is that, as you highlighted, OpenAI became this consumer company with ChatGPT. But then businesses, enterprises, companies really started to think about what they could do with AI. And you had some of these large infrastructure providers move into the space.

You had Databricks launch Dolly. You had Microsoft announcing Azure AI, Google with Vertex AI. Obviously, all of this accrued to NVIDIA. But to me, the takeaway, the signal, was that you had incumbent infrastructure providers who were not going to sit idle. And it was not just about the model.

Databricks with Dolly showed how you could very cost-effectively release a new model; the cost of training was coming down. They acquired MosaicML. To me, that signaled, hey, it's a lot more than just the model. And that's because, when you're serving businesses and enterprises in particular, they need an on-ramp, and they don't necessarily want to start greenfield.

They're going to work with their existing vendors and infra providers to actually build. What was your take? Those were some of the highlights for me across April and May.

Sandhya Hegde

The conversation in our circles quickly shifted to, okay, what is going to be the incumbent strategy? As VCs, we're very used to saying, yeah, sure, Google could build that, but here's why the startups will win. We're not afraid of taking on the larger companies; they are focused elsewhere. That was always the important phrase, right? They are focused on other things, so the startups have an opportunity to capture this new wave that the bigger companies are not focused on. And I think that word, focus, is so important, because the big takeaway with Databricks, Microsoft, and Google was that whether they are high-growth enterprise software infrastructure companies, whether they are already public, whether they are the biggest companies in the world, this is what they are all focused on. This is the future they are building towards. They're not going to just ignore it and let it pass them by. They are on it. They're, in fact, reorganizing their companies around it, from the CEO down to what type of developers they are trying to hire and retain.

That became very clear; I would say that was one. And hence, a good question for us is, of course, where do startups have an actual competitive advantage in a space where the biggest companies in the world are also reorganizing around this? The second thing was that the conversation around cost started becoming much more clear and practical, whether that's cost of inference, cost of training, or cost of running. The cost conversation suddenly became front and center, because everyone just accepted that the adoption is going to be there. The fact that in January we crossed 100 million users, even though the enterprise use cases aren't all figured out, showed that the world is waiting for this tech to work. If it works, people will be there, they will buy it, they will spend their dollars on it.

That de-risked the entire market for these big companies. Now they no longer have to be cautious. They get to take all of their resources and say, okay, what's going to be highest ROI? Which means you need to figure out how to be super cost-efficient as well. You can't just throw money at this; you need to figure out the ROI model and what the AI business is actually going to be. In some ways, that brings me to June, because I remember, for me, June was all about the H100 shortage. Most of our startup ecosystem, whether at our Cerebral Valley events, lunches, or dinners, was talking about it.

We were like, oh, one, NVIDIA just became a trillion-dollar business, so they are so far the one true winner, maybe other than OpenAI. And two, where are the H100s? If you're actually trying to train your own model right now, how do you get GPU access without having enough money to make a two-year commitment to some big cloud provider? That became a huge challenge for startups. We saw friends in the VC ecosystem starting to think about their GPU strategy, literally buying GPUs and trying to make that their competitive advantage in terms of why founders should choose them. What was your take on that period back in '23?

Wei Lien Dang

Oh, yeah. I think the anxiety for startup founders around getting their hands on GPUs is very real and continues to be. We could debate how the supply and demand curves look as they are and how long that continues.

But I think it's very real. And part of that is, I think, the value prop of going back to these larger infra providers; they're better situated to address that, both for smaller and larger companies. The interesting thing is how quickly those folks have moved into AI and made it a center of focus across the entire organization. You can see it reflected in the share prices and the daily stock movements of all these companies involved. Maybe one thing that caught me a little bit by surprise is the whole rise of these GPU clouds, some of them, frankly, repurposed Bitcoin, blockchain, and Web3 companies who had an existing supply of GPUs on hand.

But I do think there's a lot of market pressure, and while NVIDIA is still in the lead, it'll be interesting to see how that's addressed over time. You definitely have a lot of folks, even startups, thinking about how to build new chips. You have some of the larger providers who have done that or are thinking about it. So we'll see how that plays out, but it's definitely something we're paying attention to. The other thing we're paying attention to: at the beginning of 2023, there was a lot of emphasis on models and what these models are capable of, especially GPT 3, 3.5, and GPT 4, but then you had the emergence of something like Llama 2 in July. As I've talked a lot about, there's been the rise of open source and how quickly that's happened in terms of enabling meaningful, viable challengers to the proprietary approaches.

I think there's a little bit of a question, though, in terms of the so what: what is someone going to take Llama 2 and do with it? Do people really want to fine-tune their own models? Are they going to want to run their own models, own their own models, and in what form factor would they want to do that? Are they going to build from the ground up and stand up all the infra? Do they want to consume Llama 2 via a service? All of that is still playing out. Curious what you make of something like Llama 2 and maybe what you're hearing from people about what it means for them.

Sandhya Hegde

Yeah, I think when I talk to people more on the application side, there was a lot of excitement around fine-tuning, for multiple reasons. One, people thought that would make the model output a lot better. They just assumed that, as soon as you say the word fine-tuning, the model will get better.

Obviously, the fine-tuned model has to be better than the non-fine-tuned model because of X, right? That was almost an assumption. People were very excited. I would say it didn't quite play out; there's a lot of nuance there. I know founders who actually started thinking about where fine-tuning can make a difference. Where is it actually worth it, and hence you should go fine-tune Llama 2, versus, you know what, no, for the thing we want to do really well, maybe GPT 4 or GPT 3.5 is the right choice because it's faster and cheaper? This is when I saw strong engineering teams start being very result-oriented for the first time and saying, we are going to explore a few different models. We'll try Claude. We'll try GPT 4. We'll try fine-tuned GPT 3.5. We'll try Llama 2. We'll try a combination of these things and really evaluate the model output.

I think July and August were probably when that started becoming a good conversation to have, and more people brought up fine-tuning than would actually get true value out of it. They also realized that when you fine-tune a model, you lose some capabilities: there's drift in alignment, there's drift in reasoning. You can't just assume it will get better. I would say the only exception is probably on the diffusion side, where fine-tuned models can be incredible. The models are small, you can fine-tune them with very little training data, and suddenly the output is magical in one specific direction. So, very different reactions in the diffusion world versus LLMs when it comes to fine-tuning.

And speaking of diffusion and moving to August, I think that was the moment when I saw more founders start paying attention to diffusion. With ChatGPT and AutoGPT, there were just so many people hyper-focused on LLMs, and a lot less attention was being paid to other modalities. There were fewer founders competing in speech and, outside of Midjourney, in image creation and video. All of that started becoming more front and center in August. Midjourney crossed 15 million users. They were a highly profitable, bootstrapped company.

And at the same time, the United States courts said, okay, copyright is not valid for AI-generated art. So there's this moment happening where people are trying to figure out, okay, how do we reliably use AI-generated art? For me, that's really what August was all about. I'm curious, what's your takeaway? Have you played around with any of these voice generation or image diffusion tools at all?

Wei Lien Dang

Yeah, I think you're highlighting the importance of not bucketing all of these as the same thing. There's been a lot of focus on LLMs, but there's all this activity specific to diffusion models, and I think it's even earlier for other modalities, like speech or video.

And we're seeing exciting companies in those spaces. So my take is that there's commonality across all these different areas of foundation model development, but they're not created equal. With diffusion models, it was Midjourney, of course, but you also have Stable Diffusion, a solid open source project with a community around it that's been around longer, and you've seen how that has grown too. So it's not one size fits all; you do have to look at the different areas of model development. And then, of course, as models like GPT-4 become multimodal, some of these areas will start to bleed into each other as well.

And then, in September, you had the GA launch of Amazon Bedrock. In terms of how the cloud providers and the large infra providers are approaching this, the interesting thing to me is that they've now partnered with a lot of the different model companies: they're partnered with Anthropic and Cohere, and they're also making Llama 2 from Meta available. The idea is that you could have a lot of choice if you're on one of those platforms. I think that will put more pressure on infrastructure startups to think about how they can compete and find their swim lane when so much is available from the cloud providers, and increasingly will be.

Sandhya Hegde

Yeah, I think it's both a threat and an opportunity. The threat is being commoditized, right? Your end customer really thinks of their vendor as AWS. AWS gives them access to a plethora of APIs, and will maybe even route to the right model for the right use case, so the end customer doesn't have a direct relationship with the owner of the model.

I think that's definitely a threat for startups, and a race to the bottom on pricing and cost. But the opportunity is that by September, everybody's talking about governance and safety and responsibility. Those layers are all incredibly important, extremely fuzzy, and not clearly defined yet. Are those opportunities for innovation where you can differentiate?

Anthropic calls it constitutional AI. If your constitution requires these things, you can't just work with anybody through AWS. You need to pick your model owner, or maybe you need to work with startups that specialize in providing those services. So there's definitely some opportunity emerging there as well.

Wei Lien Dang

I think there's also a geopolitical dimension. I find companies like Mistral pretty interesting: it's the OpenAI for France, or for Europe. There's an OpenAI for Japan. That provides some measure of strategic defensibility, I think.

But if you're at the infra layer, I agree with you: the supporting, surrounding, arguably higher-level services are maybe a better place to go compete.

Sandhya Hegde

Yeah. And you brought up open source. I think it was just October that the U.S. government started asking questions about how safe it is to publish model weights, and very successfully freaked out the entire AI open source community. And I'm definitely team open source.

The idea that these things can even truly be kept secret is itself a big question. If you try to make them secret, I think all you will get is more corporate espionage. That's my take on trying to keep everything secret: having sunlight on these things is actually more valuable. I don't think the failure mode here is a conscious AGI that tries to kill humanity. I think the failure mode is that we fail to leverage this technology and democratize access to it.

I think October was also really interesting because everyone had started asking the question: where is the revenue? Yes, NVIDIA crossed a trillion dollars, great for them, but where are the other companies with real revenue, especially for use cases that seem to have longevity? So it was really nice to see GitHub Copilot crossing a hundred million dollars in ARR, with over a million developers actually paying to use it for better productivity.

I'm also fascinated by the whole AI companion trend. The category leader there is definitely Character AI. They crossed 30 million monthly active users, also in October, which is crazy. You can literally do a group chat with, I don't know, Einstein and Jesus, and have a conversation. I don't think that's the most popular use case, but if you think about the technology adoption cycle, it was fascinating to see something we haven't figured out how to regulate at all already being used by hundreds of millions of people. I think that contrast just becomes more and more stark over time.

And obviously that brings us to November. I have so many thoughts and feelings about the whole OpenAI leadership episode. We won't rehash it; it's been discussed very widely in the media. The thing that stood out to me is just how fast it happened. In the days of Apple, leadership changes like this used to take a year or more to play out. Like everything else in the technology hype cycle, where you grow really fast and die equally fast, this change happened over a single weekend: someone was fired, an entire board recomposed itself, and Sam Altman rejoined the company. The fact that it happened over a weekend is, to me, the most fascinating example of how we have sped up the software life cycle, and of how many things we used to take for granted.

"Oh, if something has millions of active users, of course it's going to have a stable future" is no longer something anyone should take for granted, right? There's a lot of curiosity and a lot of unknowns, and I think this was a perfect epitome of it. I really want to see what Project Q Star does. And, can I automate my job and stay home? I would be remiss not to call out that this was also the month NVIDIA announced the H200. Between the H200 and Project Q Star, there's this sense that we are already waiting to see what's coming in 2024.

These are all small previews. What was your take on the OpenAI leadership episode?

Wei Lien Dang

I was just glued to my phone, Sandhya. I think we were all trying to interpret the trickle of information as it came out. I would actually say you pegged it pretty spot-on in terms of guessing what transpired. But to your point, the whole ecosystem is in move-fast-and-break-things mode. And then the question is, how do you balance governance, whether it's in the context of the OpenAI board or the government getting involved? On that, I'm similar to you: team open source, a big proponent of open source.

And I think what's confusing is that it's all playing out with a lack of consistency. What the White House and the U.S. put out is different from the E.U. and its AI Act, which is different from the U.K. But the thing that really raised some alarm, at least with the executive order, is this notion of a dual-use foundation model and the risk that you disadvantage smaller and younger companies who are being really innovative,

as well as this vibrant open source community. I think there's a risk that over-regulation plays into the hands of the OpenAIs, the large model companies, and the large tech companies, and we need to guard against that. There's so much happening on the open source side of things, just in the span of the last year, and that pace should not slow down.

In fact, I think people should invest more into it, just because it enables such greater transparency and accountability, which are at the root of the governance that we want. So that's my take, but yeah, it was definitely a popcorn-filled weekend in terms of paying attention to what happened with OpenAI. And now we're in December, and you have none other than Google launching Gemini and taking OpenAI head on. Maybe this was somewhat obvious, but it's too big a market for OpenAI to be the sole winner, and people are going to put up legitimate challenges.

To me, it depends on what they do with Gemini. If you look at the track record of folks like Google Cloud, I would say on the whole people have felt it underwhelmed or underperformed relative to its potential. So can Google turn that around? They've been orienting around DeepMind and so on. But I'm curious: in the near term, how do you see Act 2 playing out and ending?

Sandhya Hegde

I think Act 2 is actually going to be as long as Act 1, right? We are still in the early, early acts of this play, so to speak. It's going to be an amazing decade for us and for the tech ecosystem: for founders, for VCs, for developers. It's going to be an incredible decade, with so many new things being created for everybody to get more joy out of their personal and work lives.

So I think there's going to be a combination of things. I think there'll be some breakthrough around how to build models with properly licensed training data rather than copyrighted data, and there'll be more clarity there. So many bright minds are working on figuring out how to do this well, on the image side, the video side, and the text side. I think we'll see something that makes it easier for big companies not to worry about copyright, and we'll see a few of the court cases play out, so I'm excited for that. Then there are the modality breakthroughs that haven't had as much focus yet, whether it's video or stuff like biochemistry. There's still room for transformer tech to push new breakthroughs there, and I think more people will focus on those modalities now that the cost of working in them is coming down dramatically.

But on the more pessimistic side, I would be shocked if we don't see some high-profile failures. Way too many dollars and way too much hype have gone into AI, and I think next year it will start becoming clear what really had no shot of working. There will be companies that maybe should be returning money or trying to get acquired, and you'll see the big cloud providers acquire a bunch of companies just for the technical talent, maybe a bit of the technology itself, but most definitely not for revenue. And by the way, nobody will be surprised, right? We all look at the dollars going into these companies and the valuations and say, okay, there's no world in which all of this makes sense. I also think the limitations of the transformer architecture will become more obvious. We've had all these breakthroughs by throwing more and more data at it, and it's seeing decreasing returns at this point, unless Project Q Star really surprises us all. Those limitations will become clearer as people do more research on where to push the frontier. And hopefully there will be more conversation around the stuff that really affects the world long term, like cost, energy usage, and safety, without people getting too worked up and radicalized into camps of AI doomer versus accelerationist.

I find that framing unhelpful. I think we need to put our heads together, ask what future we want to live in, and have good conversations about things like safety, energy, and the changes in the job market, and how we create structure for all of that. That's very much what I'm looking forward to in the second act. Of course, the big unknowns, for which I think almost nobody has answers, are the big long-term repercussions. I'm curious, do you have some longer-range predictions for us?

Wei Lien Dang

My take is a few things, at least on infrastructure. Infrastructure always evolves and gets shaped in the service of the killer use cases, and to me that's applied AI. If you look at all the interesting things, whether it's AI-native companies or incumbents augmenting with AI, right now you have very general-purpose AI infra, and that's going to change.

Whether it's, for instance, people wanting more domain-specific models, or deciding to train or fine-tune their own, I expect a shift in that direction. On the model side, especially open source, I think you'll continue to have this long tail of models, but only a few will actually matter.

I think things like evaluation, and tooling that helps people utilize more than one model, probably driven by a specific task, will become more important, alongside other services like security, safety, and alignment. Think about how a mature software development life cycle today involves a couple of key functions to actually ship an application into production.

We haven't seen that yet, at least not for most of the market or for companies building around AI, and I think you'll start to see that tooling come into play. So I'm super excited. I think it's going to be a really fun year with some surprises in store as well. But Sandhya, how do you see things playing out long term? As venture investors, especially seed-stage investors, we do our best to see the longer-term opportunities. How do you see that playing out?

Sandhya Hegde

Yeah, I've asked myself a lot: how do market caps shift? My very short, high-level take is that there's going to be a big shift from services to software. The best way for a lot of generative AI applications to be effective is to automate or accelerate repetitive grunt work, as opposed to replacing original creativity, and that means a lot of spend will shift from services to software.

So the overall market cap of software will grow; that's one vector of total increase, just a shift from services spend to software spend. And it will be more efficient: there will be a change in the margin structure of that value. The market share of the GPU as a tool in society is obviously going to go up. But then there are data platforms. People who are in databases, whether that's data warehouses or data services, that space has such obvious untapped potential. None of us would have said that two years ago; it felt like an already very mature market. But now we realize that whether you own a foundation model or rent one, the data platform, how you train, fine-tune, and align the model, and how you capture all the feedback and keep improving it, all of that is about data. So data platforms are going to be an even larger industry, a larger total market cap, than we would have predicted a few years ago, and they were already a high-growth category. Those are some of the things that really come to mind.

And of course, the big question for me as a SaaS investor is where there's opportunity to take market share from the massive incumbents, whether that's a Salesforce or an Adobe. What is the strategy? Because they are not sleeping on it either. What is the strategy that helps you really go attack companies that have seemed, you know, infallible over the past decade? They are such massive businesses with great leadership. Yeah, I think that's really the big question.
