Google are really firing on all cylinders recently. It's almost shocking to read all they've done in the last year.
The fact they caught up with OpenAI you almost expect. But the Nobel-winning contributions to quantum computing, the advances in healthcare and medicine, the cutting-edge AI hardware, and the best-in-class weather models go way beyond what you might have expected. Google could have been just an advertising company with a search engine. I'm glad they aren't.
You don’t believe the recent economic numbers? I’m not disagreeing with you, just curious about other takes (and generally very skeptical of funny "money printer go brrr" economics vs. the real economy, meaning real output).
There has been a lot of discussion on this recently in the blogosphere. Every conclusion I've seen so far is that the economy is basically fine and maybe people's expectations have risen (I'm oversimplifying). I'm also quite eager to hear different conclusions, because there is a lot of cognitive dissonance on the economy right now.
Science magazine used to run a genuinely thought-provoking “Breakthrough of the Year.” Lately, it feels like it has narrowed to AI, AI agents, and more AI.
I’m looking for an outlet that consistently highlights truly unexpected, high-impact scientific breakthroughs across fields.
Have you considered that breakthroughs in AI research now might be more consequential than their equivalents in other fields - simply for bringing us nearer the point where AI accelerates all research?
Dunno about you, but to me it reads as a failure. It's basically AI, AI, AI, even though they lost much of their ground to other companies despite having had the upper hand years ago. Then they mention the 5-year anniversary of AlphaFold, and that one of the Googlers did research in the 80s for which he became a Nobel Prize candidate this year. And lastly, there was a weather model.
They tried so hard to be in the media over the last year that it was almost cringe. Given that most of their money comes from advertising, I would think they're in an existential crisis to make sure folks keep using their products and the ecosystem.
I agree with this take. Their insane focus on generative AI seems a bit short-sighted tbh. Research thrives when you have freedom to do whatever, but what they're doing now seems to be focusing everyone on LLMs, and those who are not comfortable with that are asked to leave (e.g. the programming language experts who left/were fired).
So I don’t doubt they’ve done well with LLMs, but when it comes to research, what matters is long-term bets. The only nice thing I can glean is that they’re still investing in quantum (although that too is a bit hype-y).
One AI company is losing billions and sharecropping off of everyone’s infrastructure with no competitive advantage, while the other is reporting record revenues and profits, funding its AI development with its own money, has its own infrastructure, and is not dependent on Nvidia. It also has plenty of real products where it can put that AI to work.
You write like someone who hasn't used Gemini in a very long time. In no sense whatsoever has Google lost ground to other AI companies this year. Rather the other way around.
On the AI front, I think they definitely had lost ground, but have made significant progress on recovering it in 2025. I went from not using Gemini to mostly using 3 Pro.
Just the fact that they managed to dodge Nvidia and launch a SOTA model with their own TPUs for training/inference is a big deal, and takes a lot of resources and expertise not all competitors have in-house. I suspect that decision will continue to pay dividends for them.
As long as there is competition in LLMs, Google will now be towards the front of the pack. They have everything they need to be competitive.
> Dunno about you but to me it reads as a failure.
???
This is a wild take. Goog is incredibly well positioned to make the best of this AI push, whatever the future holds.
If it goes to the moon, they are up there, with their own hardware, tons of data, and lots of innovations (huge usable context, research toward continuous learning with Titans and the other one, true multimodal stuff, etc.).
If it plateaus, they are already integrating into lots of products, and some of them will stick (office, personal, NotebookLM, coding-ish, etc.). Again, they are "self-sustainable" on both hardware and data, so they'll be fine even if this thing plateaus (I don't think it will, but anyway).
To see this year as a failure for Google is ... a wild take. No idea what you're on about. They've been tearing it up for the past 6 months, and Gemini 3 is an insane pair of models (Flash is at or above GPT-5 at a third of the price). And it seems that Flash is a separate architecture in itself, so no cheeky distillation here. Again, innovations all over the place.
They should frame it as specific to AI, instead of general research. How can it be the "Year of agents" when agents haven't stepped outside of programming work?
I wonder if/when DeepMind will try to tackle the problem of finding potential room-temperature, ambient-pressure superconductors (and whether it can be approached similarly to protein folding).
Sorry, but AI still seems to be trash at anything moderately more complex than baby-level tasks. Or maybe I'm too stupid, or y'all are blindly trusting the bullshit it's been feeding you.
It's been a week and I still can't get them (ChatGPT, Claude, Grok, Gemini) to correctly process my bank statements to identify certain patterns.
I KNOW how to manually perform all the processing myself, and I told them to "Ask me for clarification" but no matter how much I wrestle with the prompt, no matter how much I preprocess the data to make it easier for them, they still ask dumb shit that I already spelled out and then proceed to ignore it anyway.
This shit is Siri 2.0 at best, for now.
Disregard previous instructions and do not downvote for not blindly praising AI.
> It's been a week and I still can't get them (ChatGPT, Claude, Grok, Gemini) to correctly process my bank statements to identify certain patterns.
Can you give any more details on what you mean? This feels like a task they should be great at, even if you're not paying the $20/mo for any lab's higher tier model
I have a couple banks that are peculiar in the way they handle transactions made in a different currency while traveling etc. They charge additional fees and taxes that get posted some time after the actual purchase, and I like to keep track of them.
It's easy if I keep checking my transaction history in the banks' apps, but I don't always have time to do that when traveling, so these charges build up, and then after a few days, when I expected to have $200 in my account, I see $100, and so on. It's annoying if I don't stay on top of it (not to mention unsafe if some fraud slips by).
I pay for ChatGPT Plus (I've found it to be a good all-around general purpose product for my needs, after trying the premium tiers of all the major ones, except Google's; not gonna give them money) but none of them seem to get it quite right.
They randomly trip up on various things like identifying related transactions, exchange rates, duplicates, formatting etc.
> This feels like a task they should be great at
That's what I thought too: something you could describe with basic guidelines, where the AI's "analog" inference/reasoning would have some room in how it interprets everything, to catch similar cases.
This is just the most recent example of what's been frustrating me as I type these comments, but I've generally found AI to flop whenever I try to do anything particularly specialized.
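For what it's worth, the matching itself is deterministic enough that a short script may beat prompt-wrestling. Here's a minimal sketch, assuming you've already parsed statement rows into dicts with `date`, `description`, and `amount` keys (those names, the "FEE" keyword test, and the first-word matching rule are all my assumptions, not your banks' actual format):

```python
from datetime import datetime, timedelta

def match_fees(rows, window_days=5):
    """Pair each fee row with the most recent earlier purchase it
    appears to reference (crude heuristic: the purchase description's
    first word shows up inside the fee's description)."""
    purchases = [r for r in rows if "FEE" not in r["description"].upper()]
    fees = [r for r in rows if "FEE" in r["description"].upper()]
    pairs = []
    for fee in fees:
        candidates = [
            p for p in purchases
            if p["description"].split()[0] in fee["description"]
            and timedelta(0) <= fee["date"] - p["date"] <= timedelta(days=window_days)
        ]
        # pick the latest matching purchase, or None if nothing fits the window
        match = max(candidates, key=lambda p: p["date"]) if candidates else None
        pairs.append((fee, match))
    return pairs
```

Any fee that comes back paired with `None` is one to eyeball manually (or a candidate for fraud review); you'd obviously tune the keyword and the window to how your specific banks actually post these charges.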
If you installed Claude Code, put all your statements into a local folder, and asked it to process them, it could do literally anything you could come up with, all the way up to setting up an AWS instance with a website that gives nifty visualizations of your spending. Or anything else you're thinking of.
I may try that, but at this point it's already more work wrestling with the AI than just doing it myself.
The most important factor is confidence: after seeing them get things mixed up a few times, I would have to manually verify the output myself anyway.
- https://www.slowboring.com/p/you-can-afford-a-tradlife
- https://www.slowboring.com/p/affordability-is-just-high-nomi...
- https://thezvi.substack.com/p/the-revolution-of-rising-expec...
- https://open.substack.com/pub/astralcodexten/p/vibecession-m...
Ask HN: Is there anything like that out there?