Google has been doing more R&D and internal deployment of AI and less trying to sell it as a product. IMHO that difference in focus makes a huge difference. I used to think their early work on self-driving cars was primarily to support Street View in their maps.
There was a point in time when basically every well known AI researcher worked at Google. They have been at the forefront of AI research and investing heavily for longer than anybody.
It’s kind of crazy that they have been slow to create real products and competitive large scale models from their research.
But they are in full gear now that there is real competition, and it’ll be cool to see what they release over the next few years.
>It’s kind of crazy that they have been slow to create real products and competitive large scale models from their research.
Not really. If Google released all of this first instead of companies that have never made a profit and perhaps never will, the case law would simply be the copyright holders suing them for infringement and winning.
> It’s kind of crazy that they have been slow to create real products and competitive large scale models from their research.
It’s not that crazy. Sometimes the rational move is to wait for a market to fully materialize before going after it. This isn’t a Xerox PARC situation, nor really the innovator’s dilemma, it’s about timing: turning research into profits when market conditions finally make it viable. Even mammoths like Google are limited in their ability to create entirely new markets.
This take makes even more sense when you consider the costs of making a move to create the market. The organizational energy required, and the accompanying loss of focus and resources, limits their ability to experiment. Arguably the best strategy for Google: (1) build foundational depth in research and infrastructure that would be impossible for competition to quickly replicate (2) wait for the market to present a clear new opportunity for you (3) capture it decisively by focusing and exploiting every foundational advantage Google was able to build.
Ex-Googler: I doubt it, but am curious for the rationale (I know there was a round of PR re: him “coming back to help with AI,” but just between you and me, the word on him internally, over years and multiple projects, was that having him around caused chaos b/c he was a tourist flitting between teams, just spitting out ideas, but now you have unclear direction and multiple teams hearing the same “you should” and doing it)
The rebuke is that lack of chaos makes people feel more orderly and as if things are going better, but it doesn't increase your luck surface area; it just maximizes cozy vibes and self-interested comfort.
My dynamic range of professional experience is high, dropout => waiter => found startup => acquirer => Google.
You're making an interesting point that I somewhat agree with, from the perspective of someone who was...clearly a little more feral than his surroundings at Google, and who wildly succeeded and ultimately quietly failed because of it.
The important bit is "great man" theory doesn't solve lack of dynamism. It usually makes things worse. The people you read about in newspapers are pretty much as smart as you, for better or worse.
I actually disagreed with the Sergey thing along the same lines: it was being used as a parable for why it was okay to do ~nothing in year 3 and continue avoiding what we were supposed to ship in year 1, because only VPs outside my org and the design section in my org would care.
Not sure if all that rhymes or will make any sense to you at all. But I deeply respect the point you are communicating, and also mean to communicate that there's another just as strong lesson: one person isn't bright enough to pull that off, and the important bit there isn't "oh, he isn't special", it's that it makes you even more careful building organizations that maintain dynamism and creativity.
Yeah people seem to be pretty poor at judging the impact of 'key' people.
E.g. Steve Jobs was absolutely fundamental to the turn around of Apple. Will Brin have this level of incremental impact on the Goog/Alphabet of today? Nah.
The difference is: Apple had one "key person", Jobs, and yes, the products he drove made the company successful. Now that Jobs has gone, I haven't seen anything new.
But if you look at Google, there isn't one key product. There are a whole pile of products that are best in class. Search (cringe, I know it's popular here to say Google search sucks, and perhaps it does, but what search engine is far better?), YouTube, Maps, Android, Waymo, Gmail, DeepMind, the cloud infrastructure, Translate, Lens (OCR) and probably a lot of others I've forgotten. Don't forget Sheets and Docs, which, while they have since been replicated by Microsoft and others, were first done by Google. Some of them, like Maps, seem to have swapped entire teams - yet continued to be best in class. Predicting Google won't be at the forefront of the next advance seems perilous.
Maybe these products have key people, as you call them, but the magic in Alphabet doesn't seem to be them. The magic seems to be that Alphabet has some way to create / acquire these key people. Or perhaps Alphabet just knows how to create top engineering teams that keep rolling along, even when the team members are replaced.
Apple produced one key person, Jobs. Alphabet seems to be a factory creating lots of key people moving products along. But since Google even manages to replace these key people (as they did for Maps) and still keep the product moving, I'm not sure they are the key to Google's success.
In Assistant, having higher-ups spitting out ideas and random thoughts ended up with people mistakenly assuming that we really wanted to go/do that, meaning the chaos resulted in ill-conceived and cancelled projects.
The worst part was figuring out what had happened way too late. People were trying to go for promo on a project that didn't launch. Many people got angry, some left, the product felt stale, and leadership & management lost trust.
Isn’t that what the parent is describing? “Ill-conceived and cancelled projects” <==> “luck surface area”, and “trying to go for promotion” <==> “cozy vibes and self-interested comfort”?
I'm in a similar position and generally agree with your take, but the plus side to his involvement is if he believed in your project or viewpoint he would act as the ultimate red tape cutter.
And there is absolutely nothing more valuable at G (no snark)
(Cheers, don't read too much signal into my thoughts; it's more negative than I'd intend. I was just aware it was someone going off PR and doing the hero worship that I myself used to do and was disabused of over 7 years there, and that I'd like other people outside to disabuse themselves of. It's a place, not the place.)
Please, Google was terrible about using the tech they had long before Sundar, back when Brin was in charge.
Google Reader is a simple example: Google had by far the most popular RSS reader, and they just threw it away. A single intern could have kept the whole thing running, and Google has literal billions, but they couldn't see the value in it.
I mean, it's not like being able to see what a good portion of America is reading every day could have any value for an AI company, right?
Google has always been terrible about turning tech into (viable, maintained) products.
Their unreleased LaMDA[1] famously caused one of their own engineers to have a public crashout in 2022, before ChatGPT dropped. Pre-ChatGPT they also showed it off in their research blog[2], doing very ChatGPT-like things, and they alluded to 'risks', but those were primarily around it using naughty language or spreading misinformation.
I think they were worried that releasing a product like ChatGPT only had downside risks for them, because it might mess up their money printing operation over in advertising by doing slurs and swears. Those sweet summer children: little did they know they could run an operation with a sieg-heiling CEO who uses LLMs to manufacture and distribute CSAM worldwide, and it wouldn't make above-the-fold news.
The front runner is not always the winner. If they were able to keep pace with OpenAI while letting them take all the hits and missteps, it could pay off.
Time will tell if LLM training becomes a race to the bottom, or if the release of the "open source" models proves to be a spoiler. From the outside looking in: while ChatGPT has brand recognition with the average person, who could not tell the difference between any two LLMs, Google offering Gemini on Android phones could perhaps supplant them.
Indeed, none of the current AI boom would’ve happened without Google Brain and their failure to execute on their huge early lead. It’s basically a Xerox PARC do-over with ads instead of printers.
Not true at all. I interacted with Meena[1] while I was there, and the publication was almost three years before the release of ChatGPT. It was an unsettling experience, felt very science fiction.
The surprise was not that they existed: there were chatbots at Google way before ChatGPT. What surprised them was the demand, despite all the problems the chatbots have. The big problem with LLMs was not that they could do nothing, but how to turn them into products that made good money. Even people at OpenAI were surprised by what happened.
In many ways, turning tech into products that are useful, good, and don't make life hell is a more interesting issue of our times than the core research itself. We probably want to avoid the value-capturing platform problem, as otherwise we'll end up seeing governments using ham-fisted tools to punish winners in ways that aren't helpful either.
The uptake forced the bigger companies to act. With image diffusion models too: no corporate lawyer would let a big company release a product that allowed the customer to create any image, but when Stable Diffusion et al. started to grow like they did, there was a specific price to not acting, and it was high enough to change boardroom decisions.
Right. The problem was that people underappreciated ‘alignment’ even before the models were big, and as they get bigger and smarter it becomes more of an issue.
Well, I must say ChatGPT felt much more stable than Meena when I first tried it. But, as you said, it was a few years before ChatGPT was publicly announced :)
It was a surprise to OpenAI too. ChatGPT was essentially a demo app to showcase their API, it was not meant to be a mass consumer product. When you think about it, ChatGPT is a pretty awkward product name, but they had to stick with it.
Quibi would be if someone came in 10 years from now and said "if we put a lot more money behind spitting out content using characters and settings from Hollywood IP, then we'll obviously be way more popular than a tech company can be!"
Quibi also got extremely unlucky in spending a bunch of money to develop media for people to watch on their commutes right before covid lockdowns hit. Wouldn't be surprised if some other company tries to make video for that market again and does well (maybe working with tiktok/shorts native creators)