Greg Brockman (President of OpenAI) also said that OpenAI is around 80% of the way to achieving "AGI", though it was also disclosed that his stake in OpenAI is worth around $30 billion.
So what does the true definition of "AGI" actually mean? It depends on who you ask.
It appears to many to mean "A Great IPO" or "A Gigantic IPO" at this point rather than "Artificial General Intelligence", which has clearly been hijacked to mean something else.
That's the trick, right? What do they really mean by AGI? Depending on how narrow you go, it sounds like we've already achieved it. But if they keep claiming they'll achieve it without first defining what it is, they can keep saying it endlessly to create hype.
One key criterion I've heard for AGI, which I think would be the most decisive factor for me, is a model that learns on the fly. That could be achieved one way or another, but when you consider that LLMs' weights are essentially read-only at inference time, like "ROM" files, it gets complicated.
I think we need to re-imagine how LLMs are built, trained, and run, and also figure out how to drastically lower the cost of running them.
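The "ROM" point above can be made concrete with a toy sketch (all names and numbers here are made up for illustration, and the "model" is a one-parameter linear function, not an LLM): inference today is a pure, read-only function of frozen weights, while "learning on the fly" would mean writing updated weights back after each interaction.

```python
# Toy illustration of the "ROM" analogy: a deployed LLM's weights are
# fixed at inference time, so every forward pass is a pure function of
# (frozen weights, input). "Learning on the fly" would mean mutating
# those weights during use, e.g. one gradient step per interaction.

FROZEN_WEIGHTS = {"w": 2.0, "b": 1.0}  # stands in for billions of parameters

def infer(weights, x):
    """Inference: read-only use of the weights, like reading a ROM."""
    return weights["w"] * x + weights["b"]

def online_update(weights, x, target, lr=0.1):
    """What on-the-fly learning would require: returning new weights
    after each interaction (one gradient-descent step on squared error
    for this toy linear model)."""
    err = infer(weights, x) - target
    return {"w": weights["w"] - lr * err * x,
            "b": weights["b"] - lr * err}

# Today's serving model: the same input always yields the same output.
assert infer(FROZEN_WEIGHTS, 3.0) == infer(FROZEN_WEIGHTS, 3.0)

# Hypothetical on-the-fly learner: the weights drift toward the target.
w = FROZEN_WEIGHTS
for _ in range(50):
    w = online_update(w, 3.0, target=10.0)
print(round(infer(w, 3.0), 2))  # converges to 10.0
```

The gap the comment points at is that real serving stacks keep weights immutable on purpose (for caching, batching, and safety), so "learning on the fly" isn't just a training trick but an architectural change.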
""Artificial General Intelligence" which has been clearly hijacked to mean something else."
I mean, the goalposts shifted. The game of Go used to be considered to require true AI. So did passing the Turing test. Scanning, analyzing, and improving complex codebases largely on their own would have counted as some sort of AGI for me six years ago.
Now sure, we all know they lack true understanding. But it gets blurry at times what that actually means.
But I don't buy that there will be a magic point where self-improving AGI explodes toward the singularity. The current approach is very, very energy- and compute-intensive, and that is unlikely to change.
Maybe the dystopian AI development will result in energy funding and advancements that actually benefit most of us. I really hope all this turns out to be a net positive for humanity. Even if we won't get true "AGI", which we are far, far away from, we could at least make some advancements in other areas.
Such suspicious phrasing lol. So you’re saying Paul Graham and his wife Jessica have 800 MILLION dollars worth of OpenAI stock, and that’s not so significant?
My understanding is that dang has said in the past that they do some anti-moderation (I'm sure he has a better term) for posts related to Y Combinator. That is to say, they moderate those posts less and might, don't quote me here, even boost them a tad. So an upvoted story from a well-reputed source, even without many comments, is likely to hang onto the front page for a bit.
"Less" doesn't mean "not at all", of course—that would be too big a loophole. But it does mean strictly less, and we stick to that, despite its various downsides, because the upside is bigger.
In the present case, it means we haven't applied any moderation downweights to this post, even though it's obviously the sort of thing we would downweight under other circumstances, since it's neither particularly substantive nor intellectually interesting (though it could be some other kind of interesting, at least to some readers).
The actual content of the post is straightforward and not particularly novel: YC has a stake in OpenAI, that creates a conflict of interest, and the New Yorker was negligent (in the informal sense) in not putting that in its piece.
It’s a sobering reminder and worthy of being on the front page on that basis alone, but I don’t see much of a discussion to be had. “Unusually quiet for a front page post” is probably where this post is meant to be.
i always thought there were two reasons for AI interest on HN.
1. since AI has captured the imagination of capitalists and they think this is the next industrial revolution, they gotta be in it to win it. combined with the fact that i believe most people here are wealthy or at least aspirationally so, that explained half of it.
2. the other half is that AI as a tech is interesting from a mathematical and compsci point of view, tho certainly not interesting enough to justify the proportion of topics about it here.
i guess i should add a 3rd reason.
3. ycomb has a financial stake in spreading the news about how wonderful this tech is!
The only thing that should be surprising to anyone who knows about the early history of OpenAI is how little of it YC owns, given how much it leveraged YC’s credibility to get started (early employees joined an institution called “YC Research”). Once that stake is divided up among all the LPs and small unit holders, it’s not a huge outcome.
Also: nothing gets sustained attention on HN unless good hackers find it interesting. Our entire objective is to be the website that attracts the best hackers, serves them the most interesting content and facilitates the most interesting discussions. That can’t happen if we’re nefariously pushing a commercial agenda.
Could someone (non-AI) summarize this? I'm sorry but I just literally don't have time to even read long posts from very reputable sources. I know I need the info but time just isn't there in my life right now.
"If your stake is > $30 billion" seems like a more reasonable and realistic criterion to me.
Jessica Livingston's personal stake in OpenAI is at most 0.1%, and Paul Graham's, afaik, is 0.
So the bias doesn't seem as large as OP thinks.
*https://xcancel.com/paulg/status/2041366050693173393
And "toughness, adaptability, and determination" >>> "ambition", frankly
As far as I know this is the first time anyone has publicly claimed to know, quoting insider sources, what YC's actual stake in OpenAI is.
I'd go as far as to say that it's nearly impossible at this point to form an AI company without Y Combinator investing in it.