There's a very real possibility that AI proponents completely lose the next generation of adults. The output is not enjoyable to consume, the people who rely on it are not cool, and the effects of using it are unpleasant and hard to defend on aesthetic, intellectual, or moral grounds.
There are real use cases for this technology! But the idea that the generation of superficially plausible text is "the next Industrial Revolution" comes out of the same mindset that has turned a neat technology into a banal hellscape for consumers and employees. We desperately need some leadership in companies or institutions that can place this technology in its proper context, and leverage it without getting manic about it.
Social media isn’t always about consuming content. It’s also about getting jolts of momentary joy and reward. You get those in two ways: seeing cool things, and participating in cool things, especially cool things before they go viral. Clicking like on a post that isn’t viral yet, betting with yourself on whether it will take off, delivers the same dopamine hit when it pays off as winning at the slots. Even my reward-defective brain manages to eke out a moment of reward from that. So if you simply remove the content, what’s left is the gambling market. Gambling on something you upvote going viral isn’t about how much substance is in what you placed your bets on; it’s about having that special knowing look when someone tells you about it, because you’ve just won the socio-memetic lottery. And AI isn’t doing anything whatsoever to stop that reward loop.
I proposed a while back that the HN admins strip all integer counts server-side for a week, to see whether the site's quality improved or worsened during that time. The mods suggested I ask HN, so I did. HN loathed the idea, for every possible reason except this one: removing all those integers would be like quitting gambling cold turkey after years of pulling the vote lever every day. I’m not much less vulnerable to this than anyone else, but I still want to see it happen someday. I remain reasonably confident that the site’s quality would skyrocket after a couple of days of our posts and comments being disinfected of make-integer-go-up jackpots.
There's the classic "I wish facebook had a dislike button" or the equivalent for twitter.
But in the thread-based forum context, removing the downvote has interesting effects. For one, it stops people who downvote-brigade to lower visibility. It also curbs the "I don't like that guy" engagement and shifts things toward a more positive "I appreciated this comment" mode.
It's not one-size-fits-all, but I've seen positive effects on more marginalized forums.
The content being untrustworthy doesn't matter when it comes to social media, as most of what is enticing about social media nowadays isn't the content of the content. It's the fact that there is a never-ending stream of content specifically catered to maximize your dopamine to keep you scrolling.
So much of social media nowadays is just low quality clips of TV shows/movies with an AI-generated song over them. Or the same Minecraft parkour map as an AI voice recites an r/AmITheAsshole post. Or AI-generated funny videos. The quality of the content doesn't matter at all.
Everyone I've talked to about how it's all just AI responds with something akin to "I don't care if it's AI, it's funny! Let people enjoy things!"
IMHO, shrugging it off as “superficially plausible text” is the extreme in the other direction.
We’ve been past merely plausible text since GPT-2, and it’s undeniable that the technology is making waves right now and having an impact.
Just as you couldn’t judge the impact of the Industrial Revolution by the first steam engines, you can’t dismiss the impact this technology is having right now.
I have. It's overly polished, formulaic and dull. It's devoid of any of the qualities that make music interesting. There's nothing a human is trying to communicate. Perhaps it could be used as elevator or hold music.
I agree, it's shockingly good these days; we can argue about morality etc, fine, but burying one's head in the sand and claiming it's bad puts you at odds with reality, which isn't a good place to be.
It's pretty silly that so many people take as an axiom that the human brain basically has a monopoly on certain patterns of electrical signals, and have semi-religious beliefs that this will always be the case.
Support of all kinds (including voice), marketing, real estate, finance... yes, a ton of fields are being heavily impacted right now. But right now doesn't even matter; what matters is what we know it will reach as theory becomes practice.
“In producing textiles but has there been actual positive impact in other sectors?”
I’m sure the Industrial Revolution didn’t just happen all at once, it started somewhere and crept.
Isn't it bad that Sam Altman and the others are now backpedaling on this and going "jobs are going to still exist you just can't imagine them!" because the PR problem was getting so big? [1]
Like, don't we want the people running these companies to be honest with the public rather than resorting to misdirection?
> "jobs are going to still exist you just can't imagine them!"
Ironically, this makes even less sense.
If (ostensibly) the goal of developing LLMs was so we can all create more while working less, but he also assures us there will be just as much work in the future, then what was the point of this tech in the first place?
> There's a very real possibility that AI proponents completely lose the next generation of adults.
The college-age students I interact with hate AI content from other people, but they love using AI for their own work.
They'll pump out AI-generated memes and AI-altered images all day long. Then they'll use ChatGPT to do their homework and write their resume, then look for an AI tool that will spam-apply to jobs for them. Then, when they get the job, they plan to use ChatGPT to level the playing field with more experienced, older peers.
That's not even getting into the AI entrepreneurs who think they're going to use AI to start a company or find a winning strategy to trade memecoins or bet on PolyMarket so they don't have to get a job at all.
I think the next generation is all-in on AI for their own use. They see it as their advantage over the boomers occupying all the good jobs. They think ChatGPT is their cheat code for getting into these companies and taking those jobs.
> comes out of the same mindset that has turned a neat technology into a banal hellscape for consumers and employees
I'm going to say up front that I'm not as familiar with this period of history as I should be, but -- would it be totally unfair to say the same of the "Industrial Revolution"?
I'm not gonna say they're equivalent by any means, but my understanding is the "Industrial Revolution" was hellish for many people. Maybe the mistake is the framing that "the revolution" or "the next big thing" is always a good thing?
> the mistake is the framing that "the revolution" or "the next big thing" is always a good thing?
They are good things. If you were an adult, male aristocrat, yes, your untouched meadows and streams got tainted. If you were a woman you stopped dying in childbirth. If you think of infants as people, they stopped massively dying.
The Industrial Revolution was good. But it also required erecting the modern administrative state to manage. People had to soberly measure the problems, weigh the benefits and risks, and then invent new institutions and ways of thinking to accommodate the new world.
It was good on a long time scale, but I think the parent poster refers to the short term. If I recall correctly, during the early Industrial Revolution the average life span decreased, child mortality went through the roof, and malnutrition meant adults lost their teeth in their early 20s at best. That was… worse. It took time for the revolution to become a net-positive for the average person (which I certainly wouldn’t dispute).
> They are good things. If you were an adult, male aristocrat, yes, your untouched meadows and streams got tainted. If you were a woman you stopped dying in childbirth. If you think of infants as people, they stopped massively dying.
That happened in the Second Industrial Revolution. The First Industrial Revolution was much less comfortable for both workers (who were given much worse working conditions) and the aristocracy (whose landholdings were much less valuable) - it was the middle class who benefited.
> The Industrial Revolution was good.
The outcomes of the Industrial Revolutions were good. The experience of living through those revolutions was mixed.
The public can't see any trains, electricity, concrete or glass windows, they see employment going away as workers and zero benefit as consumers.
Maybe AI enables great inventions in a decade, but for now the only appeal is that multinational corporations get to fire workers and everything's filled with slop. Of course they're not happy.
> a very real possibility that AI proponents completely lose the next generation of adults
I doubt it. AI seems fundamentally useful. If the guys at the top can’t get their shit together with messaging and strategy, and it increasingly looks like they can’t, they’ll be replaced before an entire generation is potentially rendered permanently uncompetitive. (And to be clear, there is no rush to adopt.)
> We desperately need some leadership in companies or institutions that can place this technology in its proper context
We need the public debate to stop being set by Altman, Musk et al. We need our generation’s Dickens, Tolstoys, Sinclairs and Whitmans.
What are the ways potential futures with AI, on the spectrum from the familiar sci-fi AGI to more-subtle forms, could work? What are the novel ways it might not? How does capitalism need to evolve? Electoral democracy? Labour organization? If I think to the last few years of television and movies, Westworld is the only one to have contributed anything original to the discourse since Isaac Asimov’s era of science fiction.
That will happen inevitably. We are throwing spaghetti at the wall right now and cleaning up the mess; lessons will be learned. The question is whether that phase will lead to real lasting damage, and to what. For myself, I no longer read cold emails; I believe they are all AI generated, and that communication method may legitimately die culturally. What else will be destroyed?
Many things will change, because many things in the world right now are effectively useless; most jobs, in a way, shouldn't even exist. Do you think the guy behind the McDonald's counter should exist as a job? It shouldn't. It's an engineering "mistake" that can already be solved; the world is just slow to catch up. And it's not only AI, that's just automation. We've banked for decades on jobs that virtually shouldn't exist except for the sole purpose of creating jobs. It's like a giant Ponzi scheme, and it will all catch up with us at some point.
I think society will completely reshape itself over the next decades, likely with UBI and other forms of social support, and those who don't want to partake in the whole "AI orchestration" will simply have no opportunities, imo. Sad, but this is how I see it. I truly believe it because I and ALL the people I know have pseudo-replaced their work with solely orchestrating AI, including very complex jobs. Lately, because some of my friends asked me, I've also built "agents" that replaced their work entirely (customer management, remote) without their employer even knowing, which proves those jobs shouldn't even exist, since they are ALREADY replaceable. All Zoom meetings are immediately recorded; agents run a basic adversarial loop across all the common models, then proceed with the tasks, and so on. That lasts about 30 minutes, and the whole week of work is done. All chats go straight to a triage agent as well, then the whole RAG pipeline, and so on.
My work went from managing/developing 1 repo to 70 repos at once, evening to morning, answering questions like a bot 10 hours a day with 8 monitors in front of my face. And I'm realistic: I know at some point I can literally replace myself with an AI to answer for me as well. It's just a matter of time.
We need to rethink everything and the whole AI hate from the youth will not change anything about it.
I have multiple friends running pretty large businesses with 30 or more staff, and right now they are literally at the point of arguing about why they shouldn't fire most of them. It's fucking sad, but it's the reality.
Many countries have a form of UBI, although not unconditional in the way UBI properly means. Look at France's RSA as an example: if you have no income or a low income, you are entitled to it.
That's the only statement that's true. Admitting to AI use is unfashionable in the Western world at this time.
But how much would you like to bet that 90% of those students who were booing also used AI to do their homework for them quite often? So your take away would be "the AI stole their education". No, they were dishonest and the AI helped them cheat themselves out of learning.
Technology doesn't make anything banal or a hellscape, or fire people. Technology is a lever.
If humans use AI to produce worse output because they are too lazy to bother reviewing and iterating on it, that is a human problem. If humans are going to use AI to help them exploit other humans more efficiently, that is also caused by the human rather than the technology.
Also, the ChatGPT moment for humanoid robots is coming this year or next. It will become very obvious that AI use in these robots is not just superficially plausible text.
> But how much would you like to bet that 90% of those students who were booing also used AI to do their homework for them quite often? So your take away would be "the AI stole their education". No, they were dishonest and the AI helped them cheat themselves out of learning.
This is like saying a smoker can't criticize the tobacco industry. It's entirely possible to recognize that AI in school is a huge problem while (hypothetically, in this case) still using it. Indeed, if enough of your peers are using it and you do not, you are effectively being punished for being virtuous. It's a lot like being the one cyclist in the Tour de France who isn't doping.
Similarly, if your peers aren't able to keep a conversation going in a seminar because they had AI do their reading and assignments for them, then you, as a student, are having your education stolen from you in a very real way. Education is something that happens in community. When enough of your community is using AI, your education will suffer.
Again, that is a problem with the group of people and how they use the technology, rather than with the technology itself.
I will die on this hill: AI _properly_ integrated into education will be a huge improvement for students because it will enable each student to have personalized instruction and tutoring.
> The output is not enjoyable to consume, the people who rely on it are not cool, and the effects of using it are unpleasant and hard to defend on aesthetic, intellectual, or moral grounds.
It seems the word "AI" inherently refers to slop now, which I find kind of tragic. The people drowning the world in AI slop were sloppers before. I can't imagine they cared more about quality before AI than they do now. They've just been given a tool that multiplies their slop.
I understand why we blame the tool. Yet I wish we'd blame the sloppers. I truly believe that AI can help people create and build wonderful things that are an expression of their own creativity and thinking. And I'm sure that is happening. It's just not as visible as all the slop.
I don't really think we should talk about it in terms of "use cases" anymore when it can virtually replace or enhance almost any form of white-collar work, and soon physical labor as well. (People will act surprised when that moment comes, of course, just as with LLMs despite all the research that came before; if theory supports it, it will happen.) Of course humanoids will be in every home, they'll cost the same as a phone soon enough, and we won't be able to live without them either.
We don't talk about human intelligence in terms of "use cases". I think we need to be realistic about what AI will be in our lives; most people already can't do without it, and this will without doubt expand further.
Unemployment rampant. All production remains in the hands of a few. All power (tokens) remains in the hands of a few. Goods are cheaper but no one can buy them. Path to the upper class now guarded closely by tokens, potential avenues for entrepreneurs diminish rapidly. Own an AI or compute, get someone to give you tokens, or live in poverty.
The distribution of abundance in the current moment is close to evil, with America reducing entitlements and support rather than expanding them. Rampant waste. No reason to think any of this will change.
> Cost of goods and services drops by orders of magnitude at every point in the supply chain.
That sounds great, but how are LLMs supposed to achieve this? You can't just say "AI will make a utopia". You have to present a vision for how it will get us there.
I'm tired of hearing about how AI will solve all the worlds problems. I want to see actual progress towards achieving these goals. And for the most part that hasn't manifested. Most people would consider AI to have had a net negative impact on their lives.
That's quite an unsubstantiated leap. The world has gone through plenty of digital transformations, and the number of people in poverty has only _shrunk_.
It's hard not to make that leap when so many layoffs are (according to PR releases anyway) attributed to AI adoption. Even if the reality on the ground is that many of these workforce reductions are to make the balance sheets look better (presumably as a bet on AI), it's impossible to ignore the accelerating wealth gap, especially in the context of the gutting of regulations and state actors leveraging world events on prediction markets. We will not be given a fair deal if we simply wait for our benefactors to provide one.
The number of people in absolute poverty has shrunk, but the proportion of national income held by the wealthy has increased, so economic mobility is declining. There are many reasons for this, but typically deployment of technology is a capital expense and employers aim to realize all the gains from their investment, notwithstanding the upskilling and/or deskilling effect it has on workers, who are treated as fungible economic units rather than people. Nobody likes this except capitalists.
In particular, CS students are feeling it more than most majors. (Especially given the shock: most of them probably thought CS was the field for job security.)
Saw an article recently that said CS majors were up there with performing arts majors and art history majors in terms of unemployment rate.
Yes, but during those transformations, the CEOs of the companies selling the products involved weren't actively and aggressively marketing them as being able to replace all the humans they employ.
You can't have it both ways: either LLMs are an amazing, revolutionary technology that can replace many human jobs in unprecedented ways, or it's going to be a mild transition that really only helps people.
> the CEOs of the companies selling the products involved weren't actively and aggressively marketing them as being able to replace all the humans they employ
The assembly line was explicitly about replacing skilled with relatively unskilled labor.
It isn't the first time a new technology has been pitched to replace many worker's jobs, both successful and unsuccessful versions of the promise have come to pass several times.
I think what they are saying is "that something can replace a job does not inherently imply the next step is poverty". From that perspective, you can absolutely have it both (and many other combinations of) ways.
That was exactly what a great many things were marketed as, such as the jacquard loom and dynamite.
What actually happened in each case was that employment went up for a good long while, as the efficiency boost to the sectors touched made investment far more viable. Eventually successive rounds of automation did reduce employment in each of weaving and mining, but it wasn’t an overnight catastrophe as initially advertised or feared.
Do we want to be distracted by sewing shirts and writing Python scripts when the hardware can do the math for us?
Programmers (and other workers, but this is a tech-centric forum) need to start accepting that programming was a necessary evil of the before times. We didn't have the theories. We didn't have the manufacturing techniques.
Before hardware was powerful enough to run models on a laptop we needed all that hand crafted custom state management to avoid immediate resource exhaustion. Or to hide the deficiencies of the chips of the day.
For all the appeals to tech workers to lean into a high-tech life, programming as humans did in the before times seems pretty outdated. Bring back rotary phones too, I guess.
If we don't have jobs we are free to:
Take up arms against an exploitative political and owner class minority.
Make sure grandma and the kids are ok. Everyone has enough to eat?
Free the sweatshop kids we exploit without giving them a choice of "the mines" or college, from obligations to our own meat suits
???? What else?
Whole lot of job culture was just busy work to satisfy the beliefs of those who are generationally churning out of life. Bye grandpa; thanks for zero assurances but tons of obligations; you won't be missed!
Elon and such are not an immutable constant of the universe. Few more years and he'll be Mitch McConnelling out on TV. Especially with all the drug abuse.
Everyone under 50 needs to prepare for the future not LARP the past.
I am meeting with my state legislators this week to, among other things, discuss how big tech should be on the same hook as the food industry who have to label their products in the open.
How all the auto standards are openly legislated, AI standards should be as well. It's just electrical physics not magic.
How like the government has to release laws, big tech should have to release all code, guiding theoretical principles, training and development environments and attest that is what they loaded on those servers.
Use their tools against them; they have the government in their corner giving them handouts. Go get yours.
You all came up in a society that afforded zero assurances this whole time. Rather than idling about jerking off the American ego, perhaps you should have listened to everyone saying this was coming a decade ago. Two decades ago. Four decades ago.
I have zero respect for my fellow Americans. Willfully ignorant and willingly exploited serfs. Forget I said anything; you all didn't do the political action work to put me on the hook for your healthcare so thoughts and prayers, HNers.
At this point money is essentially a social construct. None of the billionaires have a Scrooge McDuck vault full of gold coins.
Think ST:TNG; automation makes enough stuff. Why worry about money?
So focus on political action then; log off this VC funded freebie intended to ameliorate your feelings about the rich owners and operators of this site, and do like they do; tell government to make things right by you or we replace government.
You think PG is sitting on the sidelines letting Congress figure out themselves? He's putting his thumb on the scale through his actions through social networking with politicians.
Gotta leave the basement and do the work
Americans are heavily propagandized and naive af. So exhausted by educated morons.
The funny thing is that it's not even true. People invested in AI simply revel in the thought of common men in abject poverty, so this is the marketing that stuck.
It shows you don't need red skin and horns to delight in the suffering of starving people.
The same people who have been using AI to write their papers, etc., while supposedly "not liking it". Classic hypocrisy. You can't have it both ways.
College graduates being this myopic and failing at such basic logic makes one wonder about the quality of the education they received and how it will serve them in the modern technological world. Though, being that hypocritical, maybe they'll do very well after all.
>University of Central Florida’s College of Arts and Humanities and Nicholson School of Communication and Media
They are both right, the revolution needs to be oriented for ordinary people and college kids to benefit from it or else their attitude is wholly justified. There's basically no reason for them to cheer on a future of trillion dollar corporations using AI services to battle for knowledge work market share.
My first day of orientation at the CS dept was at the height of the dot-com crash. I think I got told by 20+ seniors that day to drop out before paying a single bill. That it was all pointless, the internet was an overvalued bubble, and no one was getting hired. Mood on campus was scary for almost two years after the crash. If we'd had social media back then, I can only imagine how much more the fears would have been amplified.
In the past, "labor saving technology" has always spawned alternate jobs that people could take with some retraining. This time it might be truly different. If one day AI can actually do all knowledge work, there might not be anything left for former knowledge workers to do. There's no physical law that says new technology necessarily produces 1:1 new, different jobs.
> In the past, "labor saving technology" has always spawned alternate jobs that people could take with some retraining.
Labor saving technology does not create enough alternative jobs to employ all those that it displaced, otherwise it wouldn't be labor saving.
Instead, the surplus created by these technologies allows that society to deploy labor on less immediately necessary jobs. These jobs weren't created by the technology, they were always there, but society did not have the resources to staff them (think education, research, academia, merchants, etc.)
This dynamic has been true since pre-historic times, so you'll need some extraordinary evidence if you want us to believe this time is different.
Many people who point out that the Industrial Revolution became the basis of modern quality of life skip over what happened in between, from the 1700s-1800s until today.
Things like unions, wars, etc.
What comes after new technology has always been the elite class owning it all and forcing everybody else to suffer, until something manages the distribution of resources slightly better (war forces that).
The Luddites were mad not because the machines put them out of work but because the machines were supremely shitty. The machines were dangerous and they made lousy products that reflected a lack of pride in workmanship.
The Luddites were all for saving labor, but not if enshittified products and slavery to unreliable machines were the price.
Many Luddites were protesting labor conditions. At the time the majority of labor laws were being written by the capital class with the help of political leaders and the constabulary. Common complaints were working hours, child labor, safety, wages, and protection from furlough. There were some who protested the quality of the product the machines created... but I would say those are the minority.
Destroying the machines was a way to gain leverage for a class of people who had none. People had been using looms for centuries. It wasn't the technology that was the problem... that's what the victors, the capitalists, have written was the reason.
Right, read the room. Tell them that "there are challenges ahead, but their excellent education and optimism will overcome even the most ominous obstacles, technological or otherwise."
> their excellent education and optimism will overcome even the most ominous obstacles, technological or otherwise
Or, alternatively, that we need the humanities today in a fundamental, possibly existential, way. If AI is another Industrial Revolution, rise to be our Sinclairs, Dickens and Tolstoys.
It's interesting that I'm only seeing this kind of anti-AI tendency in American/Western art circles. Anywhere else, in the Middle East or Asia, artists are having fun experimenting with it.
Anyone can pick up a pencil and practice for hours a day! You can look out a window for inspiration! There is no "gatekeeping" art, only people upset it doesn't come as easily to them as B2B SAAS and confusing real effort and introspection as "gatekeeping".
The AI art people were so happy to rub it in artists' faces that finally, without effort or appreciation, they no longer had to pay a skilled person for an image.
AI has been the “next industrial revolution” since the ’70s and ’80s. We’ll have a few more RoboCop movies, and then things will be as they always are after hype cycles.
That is nauseating to watch. She is an abysmal public speaker: arrogant, extremely uninspiring, and generally very out of touch. I would feel this way even if she were reading a review of The Cat in the Hat. Letting this woman speak about AI, what a disaster.
Timestamp 1:20:50 is about where the clown show starts. Totally out of touch. Her nervous giggling and throwing her hands up when she realizes the audience doesn't think AI is the greatest invention since sliced bread.... Wow.
I suspect for CS it would have been outright food riots. The humanities are probably the best insulated from AI as the uncanny valley is really obvious in AI literature and art. CS is the final stage in the “programming myself out of a job” meme which is quite depressing if you’re just getting your first job (or, more likely at the moment, not.)
Rightfully so. Unfettered capitalism will only end with a bunch of rich people producing and selling the means of living to the rest of us at just the right markup to keep their feet on our throats. Organized labor needs a resurgence in a big way.
Owe is an interesting choice of word. Don't get me wrong; I personally am of the opinion that, by default, at most schools and for most programs, the related body of work can be accomplished by a warm body (some of this based on personal anecdotes -- in US mind you). There are exceptions, and those include some non-humanities and, well, people who are curious (but that was always true for them).
Still, just because a technology facilitates something does not make their distaste any less potent. If anything, they recognize how much of world's building blocks are a fancy facade ( mild alliteration intended ).
Perhaps, owe was a poor word to use too. I will admit that, however I did not think that would be a point of focus in my comment at the time.
> in US mind you
That is my only reference.
> Still, just because a technology facilitates something does not make their distaste any less potent.
Sure, I agree once again. I may have not explained my position well initially. I just cannot help but feel it's a little hypocritical. And again, hypocritical might be a poor word to use.
We have kids booing a commencement speaker after her AI comment (which I think was a distasteful comment), but at UCLA's graduation a few days ago, we had this: https://www.youtube.com/shorts/zSqOPOzrIig
I think why I am having difficulty describing what I am thinking is because there is not one homogeneous group of students. There is clearly a subset of students that oppose AI's current and future costs/benefits. Though, at the same time, there is a different subset of students that heavily rely on AI. Some to even a problematic degree.
I have a few friends who are professors at a prestigious private university in my city. They have all shared their little tricks for trying to combat AI usage in academics. Some put hidden white text in the margins of their assignments. When citations are submitted with work, they look for the 'utm_source=chatgpt.com' parameter in the URLs. Some of the foreign-language professors craft writing prompts with words they know LLMs tend to translate incorrectly.
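For what it's worth, the URL check is trivial to script. Here's a minimal sketch, assuming the `utm_source=chatgpt.com` query parameter that ChatGPT currently appends to links in its cited sources (the exact parameter is an observation of current behavior and could change):

```python
from urllib.parse import urlparse, parse_qs

def looks_chatgpt_sourced(url: str) -> bool:
    """Heuristic: does this citation URL carry ChatGPT's tracking parameter?"""
    # parse_qs maps each query key to a list of values,
    # e.g. "utm_source=chatgpt.com" -> {"utm_source": ["chatgpt.com"]}
    params = parse_qs(urlparse(url).query)
    return "chatgpt.com" in params.get("utm_source", [])

print(looks_chatgpt_sourced("https://example.com/a?utm_source=chatgpt.com"))  # True
print(looks_chatgpt_sourced("https://example.com/a"))                         # False
```

Of course, a student who knows to strip query strings defeats this instantly, which is why it only catches the careless.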
Based on the research I can find via a few quick searches, it appears that in the populations studied, AI usage is far more common than AI abstinence. I imagine these students want to use AI to benefit themselves without harming their futures. I do not fault them for that in the slightest, but I do not think that is how things are going to work out. I strongly believe the students who misuse AI to do their work for them -- not help them -- will be in for a rude awakening.
As I am reading the source, it is weirder than I initially accounted for. The speech she gave was fairly benign compared to some of the bigger quotables from Musk, Altman, or other AI industry figures. Basically, the march of time and 'I remember when' kind of nostalgia.
But given how weirdly benign the speech was, I have to ask. Why the boos? Is there some context I am missing? Was the speaker recently on the wrong side of history?
I am asking half-jokingly, but it seems like there is a giant part that is missing somewhere and I have no reasonable way of explaining it.
I am lost as well. I am more confused about why she was even talking about AI at all in the speech. She could have just hit the high notes and used the same cliches as many other speeches.
I feel mentioning AI in a commencement speech would be like me stating something in a graduation speech like, "Congratulations, class of 2026. The Carolina Hurricanes have swept their opponents in both rounds of the NHL Stanley Cup playoffs. May your future be as bright as theirs."
No telling though. I am completely unaware of who the speaker even is.
Most people in uni have compulsory humanities courses, so I imagine it's not too hard for them to connect the moneyed interests boosting AI to the furtherance of capital, surveillance, and a widening of the economic gap. The fact remains, though, that most of these degrees (with the obvious exception of those specific to current AI/LLM technologies) could have been attained without AI before.
Sure, I do not disagree in the slightest. However, I think degrees, while optimistically serving as certification of a certain level of understanding/knowledge, also provide a sort of social signal. That said, Goodhart's Law is still in full effect, so that does complicate matters a bit.
> Meta and Jeff Bezos being held up in a good light
The message to a group of graduating artists should have been about the literature, art and public works that turned the Industrial Revolution's hyper-concentrated gains into broadly-felt benefits. (And then, after WWII and the Green Revolution, encouraged us to start reckoning with its environmental cost.)
AI is potentially—and with increasing confidence day to day—showing itself to be useful. That deserves neither worship nor demonization. Yet history—told by the humanities!—tells us it probably hasn't started in the right leaders' hands. It is the role of the humanities to show and guide the public through that debate and reconciliation.
I mean, duh. Do we really think someone with the title of "vice president of strategic alliances at Tavistock Group" lives in the same universe as the rest of us? In her alternative universe, Zucc and Bezos are heroes to look up to. These people have no actual interaction with the rest of us, and just assume their world view is universally held.
Look how genuinely surprised she was by the audience's reaction. In their world, AI is an unambiguous good.
So, now people are in groups and chats full of bots posting exactly what they want to hear.
Instead of Meta, it's states, companies, or individuals hoping to make money from their followers.
There's the classic "I wish facebook had a dislike button" or the equivalent for twitter.
But in the thread-based forum context, removing the downvote has interesting effects. For one, it stops people who down-vote-brigade to lower visibility. It also stops the "I don't like that guy" engagement and works on a more positive "I appreciated this comment" mode.
It's not one-size fits all but I've seen positive effects on more marginalized forums.
So much of social media nowadays is just low quality clips of TV shows/movies with an AI-generated song over them. Or the same Minecraft parkour map as an AI voice recites an r/AmITheAsshole post. Or AI-generated funny videos. The quality of the content doesn't matter at all.
Anyone I've talked to about how it was all just AI just responds with something akin to "I don't care if it's AI, it's funny! Let people enjoy things!"
We’ve been past merely plausible text since GPT-2, and it’s undeniable that the technology is making waves right now and having an impact.
Just as you couldn’t judge the impact of the Industrial Revolution by the first steam engines, you can’t dismiss the impact the technology is having right now.
There was recently an article shared around here that an LLM diagnosed ER patients more accurately than doctors.
Looking beyond LLMs, there's image analysis to detect cancer and other diseases.
Like in coding, AI can and should be a useful tool for the human who decides and is ultimately responsible.
AI-made music is frankly pretty good, do you actually listen to it?
It's pretty silly that so many people take as an axiom that the human brain basically has a monopoly on certain patterns of electrical signals, and have semi-religious beliefs that this will always be the case.
Like don't we want people running these companies to be honest to the public rather than misdirection?
[1]. https://www.platformer.news/sam-altman-ai-backlash/
Ironically, this makes even less sense.
If (ostensibly) the goal of developing LLMs was so we can all create more while working less, but he also assures us there will be just as much work in the future, then what was the point of this tech in the first place?
What about any of these folks’ biographies hints that they’re capable of being honest?
The college-age students I interact with hate AI content from other people, but they love using AI for their own work.
They'll pump AI generated memes and AI altered images all day long. Then they'll use ChatGPT to do their homework and write their resume, then look for an AI tool that will spam apply to jobs for them. Then when they get the job they plan to use ChatGPT to level the playing field with more experienced, older peers.
That's not even getting into the AI entrepreneurs who think they're going to use AI to start a company or find a winning strategy to trade memecoins or bet on PolyMarket so they don't have to get a job at all.
I think the next generation is all-in on AI for their own use. They see it as their advantage over the boomers occupying all the good jobs. They think ChatGPT is their cheat code for getting into these companies and taking those jobs.
I'm going to say up front that I'm not as familiar with this period of history as I should be, but -- would it be totally unfair to say the same of the "Industrial Revolution"?
I'm not gonna say they're equivalent by any means, but my understanding is the "Industrial Revolution" was hellish for many people. Maybe the mistake is the framing that "the revolution" or "the next big thing" is always a good thing?
They are good things. If you were an adult male aristocrat, yes, your untouched meadows and streams got tainted. If you were a woman, you stopped dying in childbirth. If you think of infants as people, they stopped dying en masse.
The Industrial Revolution was good. But it also required erecting the modern administrative state to manage. People had to soberly measure the problems, weigh the benefits and risks, and then invent new institutions and ways of thinking to accommodate the new world.
That happened in the Second Industrial Revolution. The First Industrial Revolution was much less comfortable for both workers (who were given much worse working conditions) and the aristocracy (whose landholdings were much less valuable) - it was the middle class who benefited.
> The Industrial Revolution was good.
The outcomes of the Industrial Revolutions were good. The experience of living through those revolutions was mixed.
Maybe AI enables great inventions in a decade, but for now the only appeal is that multinational corporations get to fire workers and everything's filled with slop. Of course they're not happy.
I doubt it. AI seems fundamentally useful. If the guys at the top can’t get their shit together with messaging and strategy, and it increasingly looks like they can’t, they’ll be replaced before an entire generation is potentially rendered permanently uncompetitive. (And to be clear, there is no rush to adopt.)
> We desperately need some leadership in companies or institutions that can place this technology in its proper context
We need the public debate to stop being set by Altman, Musk et al. We need our generation’s Dickens, Tolstoys, Sinclairs and Whitmans.
What are the ways potential futures with AI, on the spectrum from the familiar sci-fi AGI to more-subtle forms, could work? What are the novel ways it might not? How does capitalism need to evolve? Electoral democracy? Labour organization? If I think to the last few years of television and movies, Westworld is the only one to have contributed anything original to the discourse since Isaac Asimov’s era of science fiction.
I think society will completely reshape itself over the next decades, likely with UBI and other forms of social help, and the ones who don't want to partake in the whole "AI orchestration" will just not have any opportunity, imo. Sad, but this is the way I see it. I truly believe it because myself and ALL the people I know have pseudo-replaced their work with solely orchestrating AI, including very complex jobs. Lately, because some of my friends asked me, I've also built "agents" that replaced their work entirely (customer management, remote), and their employer doesn't even know about it, which proves those jobs shouldn't even exist as they are ALREADY replaceable. All Zoom meetings are immediately recorded, agents do a basic adversarial loop with all the common models, then proceed with doing tasks and so on. That lasts about 30 minutes, and the whole week of work is done. All chats are sent directly to a triage agent as well, then the whole RAG thing, and so on.
My work went from managing/developing 1 repo to 70 repos at once, answering questions like a bot from evening to morning, 10 hours a day with 8 monitors in front of my face. And I'm realistic: I know at some point I can literally replace my own self with an AI to answer for me. It's just a matter of time.
We need to rethink everything and the whole AI hate from the youth will not change anything about it.
I have multiple friends also running pretty large businesses with 30 or more staff, and right now they are literally at a point where they argue about why they shouldn't fire most of them, it's fuckin sad, but it's the reality.
We'll have no UBI and little purpose.
That's the only statement that's true. Admitting to AI use is unfashionable in the Western world at this time.
But how much would you like to bet that 90% of those students who were booing also used AI to do their homework for them quite often? So your take away would be "the AI stole their education". No, they were dishonest and the AI helped them cheat themselves out of learning.
Technology doesn't make anything banal or a hellscape, or fire people. Technology is a lever.
If humans use AI to produce worse output because they are too lazy to bother reviewing and iterating on it, that is a human problem. If humans are going to use AI to help them exploit other humans more efficiently, that is also caused by the human rather than the technology.
Also, the ChatGPT moment for humanoid robots is coming this year or next. It will become very obvious that AI use in these robots is not just superficially plausible text.
This is like saying a smoker can't criticize the tobacco industry. It's entirely possible to recognize that AI in school is a huge problem while (hypothetically, in this case) still using it. Indeed, if enough of your peers are using it and you do not, you are effectively being punished for being virtuous. It's a lot like being the one cyclist in the Tour de France who isn't doping.
Similarly, if your peers aren't able to keep a conversation going in a seminar because they had AI do their reading and assignments for them, then you, as a student, are having your education stolen from you in a very real way. Education is something that happens in community. When enough of your community is using AI, your education will suffer.
I will die on this hill: AI _properly_ integrated into education will be a huge improvement for students because it will enable each student to have personalized instruction and tutoring.
It seems the word "AI" inherently refers to slop now, which I find kind of tragic. The people drowning the world in AI slop were sloppers before. I cannot imagine they cared more about quality before AI than they do now. They've just been given a tool that multiplies their slop.
I understand why we blame the tool. Yet I wish we'd blame the sloppers. I truly believe that AI can help people create and build wonderful things that are an expression of their own creativity and thinking. And I'm sure it is happening. It is just not as visible as all the slop.
We don't talk about human intelligence with "use cases", I think we need to be realistic about what AI will be in our lives, most people already can't do without, and this will without doubt expand further.
That being said we already have relative superabundance and we're more miserable than ever, so it's not clear that more of it will cheer us up.
It's not as if it's great that we can buy iPhones (and AI is going to make all electronics scarce, so much for abundance there).
Distribution of abundance in current time is close to evil, America reducing entitlements and support (not expanding). Rampant waste. No reason to think any of this will change.
That sounds great, but how are LLMs supposed to achieve this? You can't just say "AI will make a utopia". You have to present a vision for how it will get us there.
I'm tired of hearing about how AI will solve all the worlds problems. I want to see actual progress towards achieving these goals. And for the most part that hasn't manifested. Most people would consider AI to have had a net negative impact on their lives.
To be fair, this isn’t the commencement speaker’s job.
I would 100% expect a commencement speaker to be hyping me up for what comes next.
Saw an article recently that said CS majors were up there with performing arts majors and art history majors in terms of unemployment rate.
You can't have it both ways: either LLMs are an amazing, revolutionary technology that can replace many human jobs in unprecedented ways, or it's going to be a mild transition that really only helps people.
The assembly line was explicitly about replacing skilled with relatively unskilled labor.
I think what they are saying is "that something can replace a job does not inherently imply the next step is poverty". From that perspective, you can absolutely have it both (and many other combinations of) ways.
What actually happened in each case was that employment went up for a good long while, as the efficiency boost to the sectors touched made investment far more viable. Eventually successive rounds of automation did reduce employment in each of weaving and mining, but it wasn’t an overnight catastrophe as initially advertised or feared.
Programmers (and other workers but this a tech centric forum) need to start to accept that programming was a necessary evil of the before times. We didn't have the theories. We didn't have the manufacturing techniques.
Before hardware was powerful enough to run models on a laptop we needed all that hand crafted custom state management to avoid immediate resource exhaustion. Or to hide the deficiencies of the chips of the day.
For all the appeals by tech workers to lean into a high-tech life, programming as humans did in the before times seems pretty outdated. Bring back rotary phones too, I guess.
If we don't have jobs we are free to:
Take up arms against an exploitative political and owner class minority.
Make sure grandma and the kids are ok. Everyone has enough to eat?
Free the sweatshop kids we exploit without giving them a choice of "the mines" or college, from obligations to our own meat suits
???? What else?
Whole lot of job culture too was just busy work to satisfy the beliefs of they who are generationally churning out of life. Bye grandpa; thanks for zero assurances but tons of obligations; you won't be missed!
Elon and such are not an immutable constant of the universe. Few more years and he'll be Mitch McConnelling out on TV. Especially with all the drug abuse.
Everyone under 50 needs to prepare for the future not LARP the past.
How are we not going to be begging whoever controls chip fabs and electrical plants for compute tokens? HOW!? EXPLAIN IT.
I am meeting with my state legislators this week to, among other things, discuss how big tech should be on the same hook as the food industry who have to label their products in the open.
Just as auto standards are openly legislated, AI standards should be as well. It's just electrical physics, not magic.
Just as the government has to publish its laws, big tech should have to release all code, guiding theoretical principles, and training and development environments, and attest that that is what they loaded on those servers.
Use their tools against them; they have the government in their corner giving them handouts. Go get yours.
You all came up in a society that afforded zero assurances this whole time. Rather than idle about jerking off the American ego perhaps you should have listened to everyone saying this was coming a decade ago. Two decades ago. 4 decades ago.
I have zero respect for my fellow Americans. Willfully ignorant and willingly exploited serfs. Forget I said anything; you all didn't do the political action work to put me on the hook for your healthcare so thoughts and prayers, HNers.
Ah so your answer is AI will cause most people to live in abject poverty. Good talk.
Please don’t do this.
What is this? The NBA? You want people to stick to social norms, call it both ways.
ICE has an $80 billion budget.
Demand Congress pay off mortgages rather than hand Leon Skum tens of billions.
There you go. Stability.
[1] https://www.fhfa.gov/data/dashboard/nmdb-outstanding-residen...
Think ST:TNG; automation makes enough stuff. Why worry about money?
So focus on political action then; log off this VC funded freebie intended to ameliorate your feelings about the rich owners and operators of this site, and do like they do; tell government to make things right by you or we replace government.
You think PG is sitting on the sidelines letting Congress figure out themselves? He's putting his thumb on the scale through his actions through social networking with politicians.
Gotta leave the basement and do the work
Americans are heavily propagandized and naive af. So exhausted by educated morons.
Shows you don't need to have red skin and horns to delight in the suffering of starving people.
College graduates being that myopic and failing at such basic logic. One can only wonder about the quality of the education they've received and how it will help them in the modern technological world. Though, being that hypocritical, maybe they would do very well after all.
>University of Central Florida’s College of Arts and Humanities and Nicholson School of Communication and Media
yep, clearly not Stanford.
Yes you can. They use AI and also despise it because it will turn the world into one big caste system. Ones with access to compute, and ones without.
College graduates in a rich, food- and energy-exporting democracy at the centre of the AI build-out will be on the receiving end of this transfer.
The places where there should be panic are the Middle East, Russia, and South Asia.
Labor saving technology does not create enough alternative jobs to employ all those that it displaced, otherwise it wouldn't be labor saving.
Instead, the surplus created by these technologies allows that society to deploy labor on less immediately necessary jobs. These jobs weren't created by the technology, they were always there, but society did not have the resources to staff them (think education, research, academia, merchants, etc.)
This dynamic has been true since pre-historic times, so you'll need some extraordinary evidence if you want us to believe this time is different.
Things like Unions, Wars, etc.
What comes after new technology has always been the elite class owning them all and forcing everybody else to suffer until something managed the distribution of resources slightly better (War forces that).
Avoiding a repeat of that while also increasing productivity would be great.
The Luddites were all for saving labor, but not if enshittified products and slavery to unreliable machines were the price.
Sounds pretty familiar to me.
Destroying the machines was a way to gain leverage for a class of people who had none. People had been using looms for centuries. It wasn't the technology that was the problem... that's what the victors, the capitalists, have written was the reason.
Well, yeah.
Or, alternatively, that we need the humanities today in a fundamental, possibly existential, way. If AI is another Industrial Revolution, rise to be our Sinclairs, Dickens and Tolstoys.
Hmm, how would we measure and confirm this hypothesis?
Anyone can pick up a pencil and practice for hours a day! You can look out a window for inspiration! There is no "gatekeeping" art, only people upset it doesn't come as easily to them as B2B SAAS and confusing real effort and introspection as "gatekeeping".
The AI art people were so happy to rub it in artists face, that finally, without effort or appreciation, they no longer had to pay the skilled person for an image.
https://www.youtube.com/watch?v=zwYkHS8jvSE
"Passion--let's go!" Lady read the room.
Somehow I have a feeling that the reaction would have been totally different if it had been the EECS graduates.
Fear and rejection in certain professions is real and maybe even understandable.
I imagine 25 years ago someone telling music graduates “streaming is the future of music distribution” would have received the same reaction.
https://www.youtube.com/shorts/vAn7DsXWQGE
However there was a feeling that “the job” is radically changing right now.
The More Young People Use AI, the More They Hate It
https://news.ycombinator.com/item?id=47963163
Study found that young adults have grown less hopeful and more angry about AI
https://news.ycombinator.com/item?id=47704443
(A student's explanation of that UCLA graduation video: https://www.youtube.com/shorts/rswUgIfj1YU)
Clearly people don't consider it obvious, considering my comment got flagged.