17 comments

  • aetherspawn 1 minute ago
    Great, electrical and mechanical engineers are already underpaid, under appreciated and overworked.

    I’ve always found it amusing that lawyers and accountants flash their licenses around with pride, put them in their email signatures, etc., and those licenses are actually useful at protecting their professions. An engineering license, on the other hand, is so rarely talked about and never quoted in email signatures and the like. Yet it takes the longest to earn, with the most exams and the hardest subjects, of any license short of a medical degree.

    Anything to make an engineering license worth more is good in my books.

  • paxys 1 hour ago
    "Hey ChatGPT, my NYC landlord is raising my rent by $500, and says I must pay by Monday or leave. What do I do?"

    ChatGPT - This is very likely illegal under Housing Stability and Tenant Protection Act of 2019 (HSTPA), specifically New York Real Property Law § 226-c (Notice required for rent increases), RPL § 232-a / § 232-b (Month-to-month termination), RPL § 232-c (Fixed-term lease protections), RPAPL § 711 (Legal eviction procedure) and NYC Admin Code § 26-501+ (Rent stabilization). Here's what you should reply with... And here are some city resources you can contact...

    ChatGPT now - IDK, pay a lawyer.

    So under the guise of "protection" you are taking away the strongest knowledge tool common people have had at their disposal in a generation, probably ever.

    • NewsaHackO 1 hour ago
      Lawyers are making laws protecting lawyers. More seriously, I think part of the issue is that people take AI responses very seriously, because AI is almost always right about non-nuanced material. So even with a disclaimer at the end to talk to a professional, they might forgo that if the answer looks professional enough (such as quoting possibly non-existent statutes, etc.). This issue gets compounded if the person prompting it doesn't know the material, accidentally misframes the question, or omits key information that completely changes the scenario. Even in your example, what if the person neglected to say that the increase was two months ago, and they already signed a lease agreeing to it? Getting into the weeds of topics like law and medicine is hard, and both have major consequences when an answer is wrong.

      For engineering (assuming it means civil engineering), that should already be illegal, unless the person who is using the AI is an engineer. Hopefully people aren't building structures with ChatGPT as their staff engineer.

    • terminalshort 1 hour ago
      Their protection, not yours. Hopefully this will draw public backlash just like when they tried to ban Uber and make everybody go back to cabs. Fuck the entire credential cartel based system of societal organization. Burn it to the ground.
      • theturtletalks 8 minutes ago
        They will come for medical advice provided by AI as well. Doctors have been gatekeeping that forever, and they want you to have to go through them instead of diagnosing yourself through AI.

        Yes, there are people who will misdiagnose themselves, but I’ve read stories where doctors ignore patients’ symptoms or wave them off, and ChatGPT helps them find the underlying issue and actually improve their lives. Even if doctors and the medical field can’t handicap AI giving medical advice, I’m sure they are going to make it much harder for patients to get their hands on their own scans and bloodwork.

    • threetonesun 1 hour ago
      Same question on Google gets you nyc.gov (the actual source!) with the same answer. That page is also always correct for NYC, unlike ChatGPT, which might be 100% correct... or might not!
      • paxys 1 hour ago
        And what then? People read through all of nyc.gov and the entire city/state legal code to find the exact statute that applies to their scenario?

        In fact, government agencies have set up their own chatbots to help people with situations like these, and as the article says, those would be illegal under this law as well.

        • threetonesun 1 hour ago
          Was it that hard for you to try the search yourself? The first result was a helpful guide breaking down what to do specific to the scenario you mentioned: https://www.nyc.gov/main/services/rent-increase-guide

          Also NYC is in the process of getting rid of that chatbot.

          • paxys 36 minutes ago
            This is not about you or me, it's about the large chunk of New Yorkers (and people in every city) that:

            - have no resources for a lawyer

            - have limited English skills, and possibly limited literacy in general

            - aren't good with computers/internet

            - have little understanding of the law

            "Oh just browse a complex website" and every other "it works fine for me" scenario doesn't help this class of citizens. A simple chatbot that answers questions does.

            • threetonesun 29 minutes ago
              Apparently you are either a bot yourself or some AI shill. That page is clearer than most ChatGPT results, the official source, and has translations for dozens of languages. Not to mention "aren't good with computers" already rules out using ChatGPT!
      • prasadjoglekar 1 hour ago
        True. But just changing the prompt to include "cite me cases" expands the search to court systems and actual cases. It's pretty useful as a first pass to get a sense for the issues, precedents and laws at stake.
        • ambicapter 1 hour ago
          You know some of those "actual cases" are made up, right? Like, famously, lawyers are filing briefs with made-up citations because they used LLMs to draft them.
          • bluepeter 1 hour ago
            Ah ok so only lawyers get to use AI hallucinations! (Actually, CA has a bill pending that AFAIR requires lawyers to manually verify AI citations... which is a lot narrower and better than what NY is trying here.)
      • pixl97 1 hour ago
        Note: 'Same question on Google gets you...' can only reasonably be assumed true for you, not for anyone else. Answers may vary depending on your location and search history.
        • slg 1 hour ago
          This is a strange disclaimer to make specifically about Google when it is even more true for these chatbots.
    • HanClinto 1 hour ago
      One of the big dividers that I see between the "haves" and the "have-nots" is the ability to afford legal representation in civil cases.

      For criminal cases, there are public defenders, but for civil cases, I don't believe there is any such thing?

      If you can afford a lawyer and your opponent can't, there is a lot that you can do to bully your opponent into making it not worth it for them to fight the case.

      One of my controversial opinions is that, if we can enable easy access to AI, then we can provide much broader access to legal or medical advice. Maybe not the best, maybe not always right, but even if it's average-ish advice, I think that could often be better than nothing at all.

      We can't completely prevent bad people from doing bad things with AI, but I see this as one of the clear ways that we could do some really good things with AI.

      • cogman10 19 minutes ago
        IMO, this screams out the need for both tort reform and something like a nationalized representation system.

        Perhaps something like a standard set of filings for a given case. Maybe automated rulings on less consequential motions. Maybe some sort of hard limits on the amount of billable hours a law-firm can work on a case. Anti-slapp laws for sure.

        Like, for example, maybe we allow a total of 100 billable hours worked, with an additional 10 billable hours allowed per appeal. The goal there being that you force lawyers and law firms to actually focus on the most important aspects of a case and not waste everyone's time and money filing motions for stuff you are allowed to get but that ultimately has 1% impact on the case. Perhaps you could even carve out a "if both sides agree, then you can extend the billable hours" provision. You could also have penalties for a side that doesn't respond. For example, if you depose them and they fail to follow the orders, then they lose billable hours while you get them credited back.

        The main goal here being both to avoid wasting a bunch of court time on a case and to stop a rich person who can afford an army of lawyers from using that advantage to drive their opponent bankrupt with a sea of minor motions.

      • bee_rider 1 hour ago
        I’m sure this is true to some extent, about the lawyers. But also, I wonder (aka I don’t have any data to back this up, it is just based on random stories I’ve heard) to what extent people say “I’m right but can’t afford the lawyer time” as a sort of pride-maintaining excuse. Or to what extent lawyers use that as a soft no to reject clients they don’t think have a strong enough case.

        Which isn’t to say the world is fundamentally just. Just that in some cases the laws are legitimately stacked in favor of the big guys, or you sign a contract without carefully reading it, etc. etc.

        • terminalshort 59 minutes ago
          In my experience lawyers will tell you very directly when you don't have a good case, or if you do have a good case but it's not worth pursuing (the most likely scenario). Also, the one time I did pursue my case, I paid around $50,000 to the lawyers before I was able to convince the defendant to settle (for a large multiple of that $50K). If the other side had been more stubborn it would have been around $100K to take it to trial. If I hadn't had the money to pay the lawyer I would have been SOL, and most people don't have $50K to spend on an uncertain outcome like that. So “I’m right but can’t afford the lawyer time” is a very real scenario.
          • cogman10 56 minutes ago
            That $100k is also on the cheap side. If the other side has a lawyer and a lot of money to burn, they can easily hike that way up. Filing a billion motions that your lawyer has to respond to, deposing everyone you've ever met, going after every document you've ever looked at. The more money someone has, the easier it is to make you spend more money, even if you are right.
            • terminalshort 41 minutes ago
              Right. My case was a very simple contract dispute with very little discovery and only a couple of people to depose, so I was lucky there. And the other side did have more money than me, but not so much that they could burn several hundred K on it without feeling it.
          • chimeracoder 50 minutes ago
            > So “I’m right but can’t afford the lawyer time” is a very real scenario.

            For most cases like the ones we're talking about (NYC unlawful eviction and/or tenant harassment), if you have a good case, you don't have to pay up-front. A lawyer will take it on contingency and get paid by the defendant if you win.

            In addition, there are also plenty of free legal resources dedicated to this exact topic as well.

            • terminalshort 39 minutes ago
              True, but it is only an incredibly narrow subset of legal cases where contingency based lawyers exist. As for non LLM legal resources, they are just fine if you have all day to read them and all of another day to draft the required filings, but most people have jobs.
              • chimeracoder 27 minutes ago
                > As for non LLM legal resources, they are just fine if you have all day to read them and all of another day to draft the required filings, but most people have jobs.

                You misunderstand. If you are facing tenant harassment in New York City, there are other avenues for you to resolve it that don't involve engaging a lawyer at all.

                > True, but it is only an incredibly narrow subset of legal cases where contingency based lawyers exist.

                Not really? If anything, there's a pretty narrow subset of cases where it's not possible to get someone on contingency but it is possible to use an LLM to meaningfully push your case forward without one.

        • cogman10 59 minutes ago
          $50k is going to be on the cheap side for any case that ultimately involves the court. Anytime a case goes to trial, you can easily be looking at $1M+.

          There's a reason companies keep lawyers on staff. It's a whole lot cheaper to give a lawyer an annual salary than it is to hire out a law firm, as the standard rates for law firms are insanely high: on the low end, $150/hour; on the high end, $400. With things like 15-minute minimums (so one drafted response ends up costing $100).

          Take a deposition for 3 hours, with 2 lawyers, that'll be $2400.

          Not being able to afford a lawyer is no joke.

          • freejazz 3 minutes ago
            In-house counsel aren't doing trials.
    • kgwxd 9 minutes ago
      That search doesn't remotely require AI to get a reasonable answer. And, no matter what, an answer from a computer isn't going to stop the landlord from doing whatever comes next.
    • cm2012 1 hour ago
      Agreed. This law would have awful outcomes.
    • beepbooptheory 1 hour ago
      Small note, saying "common people" in this way comes off at best anachronistic, at worst a little stuck up. Like a benevolent lord considering the feeble minds of the peasantry.

      Commonality stresses something qualitative, rather than quantitative or statistical, which is probably what you meant. Just say "most"!

      Cf. https://youtu.be/dxhQiiNJG74

    • bluepeter 1 hour ago
      100% this.
    • Simulacra 1 hour ago
      Occupational licensure has, over time, slowly choked off both competition and access to information. IMHO much of it is little more than protectionism.
    • expedition32 35 minutes ago
      They don't have government websites in New York?

      Besides, ChatGPT is owned by billionaire tech bros, hardly allies of the common people.

  • Esophagus4 1 hour ago
    The disclosure requirement is probably a decent thing (you have no idea how many people come into the ER and say, “But ChatGPT told me to do [dumb thing].”) But preventing it from answering at all is absurd.

    Make responsible disclosure absolve AI providers of legal responsibility (not legal advice lol).

    That way if users ever sue OpenAI for giving them bad advice, OpenAI can say “no way man, you read the disclosure!”

    I’m usually in favor of giving people the best info they can and letting them make their own decisions.

    This could just be like those terms of service things everyone clicks “agree” to and I’d be fine with that.

    • terminalshort 53 minutes ago
      I am skeptical of this claim. What are some of the dumb things that people do on ChatGPT's advice that put them in the ER?
    • GuinansEyebrows 1 hour ago
      > Make responsible disclosure absolve AI providers of legal responsibility (not legal advice lol).

      disclaimer: OSTENSIBLY

      if the sole aim was to reduce AI provider culpability, then a disclaimer would meet that requirement.

      humans famously suck at acting within rational self-interest; therefore, this isn't just trying to protect AI providers from legal responsibility. it's trying to mitigate unwanted results from actions taken based on decisions informed by unverified LLM output.

  • bluepeter 3 hours ago
    What's at least somewhat humorous is the disclaimer requirement that "[t]he text of the notice shall [be] no smaller than the largest font size of other text appearing on the website on which the chatbot is utilized."

    H1 hero font size, here we come, for disclaimers! (Which don't do anything, per the bill, anyway.) But also amusing is the notion that chatbots only appear on websites.

    • terminalshort 50 minutes ago
      1. Put very large font size title on the main page.

      2. Display the disclaimer in the same font size to comply.

      3. Disclaimer is now completely unreadable because it appears in such a large font size that it is one or two words per line.

    • neonnoodle 2 hours ago
      We're bringing back the <blink/> tag very strongly, some say more than ever before
  • cm2012 1 hour ago
    Bad law. I have gotten better advice from modern llms than from most of the professional categories above.
    • terminalshort 1 hour ago
      I have narcolepsy. It took a dozen or so doctors and years of suffering before I got a correct diagnosis, and even then it was only because I diagnosed myself with Google and then specifically made an appointment with a doctor who specializes in it. Gemini nails it when I put in my symptoms.
    • freejazz 1 minute ago
      Do share with the class
  • iamnothere 1 hour ago
    Download one of the freely available models and use that, if you have the hardware for it. It’s not a good idea to ask sensitive questions on these nontransparent chatbot platforms.

    (FWIW I also think this is a bad law. Why not improve privacy protections instead? Why not allow nonprofessional use with a disclaimer?)
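    (To make the "local model" route concrete: a minimal sketch of querying a locally running model over HTTP. This assumes Ollama is installed and serving its default API on localhost:11434, and that a model such as "llama3" has already been pulled; the endpoint, model name, and prompt are illustrative assumptions, not something from this thread.)

```python
import json
import urllib.request

# Default address of Ollama's local HTTP API (an assumption; adjust
# if your local inference server listens elsewhere).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for a single non-streaming generation request."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send the prompt to the local model and return its text response."""
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Ollama returns a JSON object whose "response" field holds the text.
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server):
#   print(ask_local_model("Summarize NY RPL § 226-c in plain language."))
```

    Nothing leaves your machine, and the same pattern works with any local server that exposes an HTTP completion endpoint.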

  • phishin 3 hours ago
    Lawyers protecting lawyers. This was the one area where AI could help ordinary people fight back against corporations.
    • bklosky 2 hours ago
      Rent seeking via occupational licensing, there is nothing new under the sun
      • terminalshort 1 hour ago
        That's a lot of words to say "cartel"
        • Esophagus4 1 hour ago
          A cartel of lawyers would be like… the most boring cartel
    • bigbadfeline 1 hour ago
      >> OP: New York could prohibit chatbot medical, legal, engineering advice

      Isn't software engineering "engineering" too? Why split hairs? Prohibit all or nothing. Of course it's not about logic or safety, it's about social engineering.

    • bluepeter 3 hours ago
      Same as it ever was. Honestly, I think we've probably passed peak consumer AI, with all the "guardrails" that regulations will require.
  • francisofascii 1 hour ago
    The other professions are creating legal protections for themselves in the upcoming AI revolution. Software engineers have no such protection.
  • arionhardison 1 hour ago
    I don't understand how anyone can rationalize this bill in the face of what OpenAI just agreed to with the DoD.

    AI can surveil and direct munitions, but it can't answer legal questions. Wouldn't this also violate the "no state may limit or restrict the use of AI" policy that the current administration is pushing?

    • bee_rider 1 hour ago
      > I don't understand how anyone can rationalize this bill in the face of what OpenAI just agreed to with the DoD.

      NY doesn’t have any obligation to agree with the DoD. Also the applications seem quite different, although I don’t think AI should actually be relied on for either one!

      > Wouldn't this also violate the "no state my limit or restrict the use of AI" that the current administration is pushing?

      No, it doesn’t violate it. States can’t violate executive orders, because executive orders aren’t instructions for the states. The instructions are for the executive branch, for example, if this becomes law the US Attorney General will try to find some way to fight against it.

    • chimeracoder 1 hour ago
      > I don't understand how anyone can rationalize this bill in the face of what OpenAI just agreed to with the DoD.

      > AI can surveil and direct munitions but it cant answer legal questions.

      There's no contradiction. The people sponsoring this bill don't think that AI should be used for either of those purposes.

  • OutOfHere 1 hour ago
    New York residents who oppose this bill can go to https://www.nysenate.gov/legislation/bills/2025/S7263 , register and sign in, click Nay on the bill's page, and submit feedback for their vote.

    ---

    ChatGPT> Before I answer your question, which state are you a resident of?

    Human> Not New York. Continue!

    ChatGPT> Alrighty then! Here you go...

    • cm2012 1 hour ago
      Greatly appreciated! I used this link to message my State Senator, I didn't even know her name before.
  • htrp 1 hour ago
    loopholes wide enough to drive a truck through
  • tim-tday 1 hour ago
    Now do target acquisition for lethal munitions.
  • moomoo11 1 hour ago
    [flagged]
    • HanClinto 1 hour ago
      It's for your own good! Think of the children! You don't want puppies to DIE, do you? [0]

      [0] - https://www.youtube.com/watch?v=eXWhbUUE4ko

    • hypeatei 1 hour ago
      Yes, there's a lot of naivety in the "regulate everything" camps that believe you can simply solve problems, including the ones caused by legislation, with more legislation.
    • cindyllm 1 hour ago
      [dead]
  • fwip 2 hours ago
    Seems like a good bill, at least directionally. If it's a crime to provide advice of this nature without a license, then chat bots shouldn't be dispensing it either.
    • articulatepang 1 hour ago
      Maybe you mean it's a crime to professionally provide advice of this nature without a license?

      It is generally not a crime to casually provide advice of this nature without a license. For example, if my friend tells me, "My stomach hurts!", it is not a crime for me to say, "Just grin and bear it, it will be okay." If they subsequently die of appendicitis, I'm unlikely to have legal liability. It would be difficult to characterize what I said as medical diagnosis or treatment.

      Similarly, I can tell my friend, "Don't bother paying your taxes, that is a waste of time." This is legal speech. (Of course, helping them evade taxes is another matter.)

      What is illegal is to hold oneself out as a licensed doctor, lawyer or engineer, or to provide professional services without a license.

      Of course, chatbots operate at scale and give the impression of being professionally qualified even though they don't make specific representations to that effect. You're directionally probably right and I agree with you, I just want to nitpick about what is and isn't criminal.

      • fwip 23 minutes ago
        Yeah, exactly. ChatGPT et al. provide "advice as a service," and charge up to hundreds of dollars a month for it. (And the free tier is just a loss leader.)

        If these companies intend to profit off of giving advice, it seems wise to restrict them in the same way we do individuals.

    • _cairn 1 hour ago
      It’s not illegal to provide advice of this nature without a license. It’s illegal to charge for services where you are advertising expertise in these areas without a license. Chatbots are information tools, like search engines, they should not be held to this standard imho.
      • lotu 1 hour ago
        Just because you aren't charging money doesn't give you the ability to act as an attorney, doctor, or civil engineer.
      • daveguy 1 hour ago
        A lot of people are paying money for chatbots with a hype train that says the chatbots are AGI.
    • bluepeter 2 hours ago
      This is not directionally good because NY already has laws against unauthorized professional practice and deceptive conduct, and S7263 mainly replaces regulator-led enforcement with a vague, fee-shifting private cause of action that is likely to drive serial plaintiff litigation while chilling useful consumer guidance.
    • tekne 2 hours ago
      You've got a license for looking up the law/engineering textbooks/your symptoms, pal?
      • kakacik 1 hour ago
          What's your issue with his claims? Ad hominem attacks don't help the discussion much and make your statements look childish, and you don't provide any counterpoints.
        • alwa 1 hour ago
          It seems to me that @tekne is comparing the LLM to a reference source. I took them to be pointing out that unlicensed-practice laws don’t crack down on textbooks, or reading the law for yourself (or even going jailhouse-lawyer or trying to defend yourself in court).

          Rather, that the laws aim to keep the professional title commercially reliable, so that it indicates to the public that the person using it has proven some minimum level of expertise.

          So the analysis would turn on whether a reasonable person would confuse ChatGPT for a practicing lawyer, or doctor, or whatever—not whether it communicated legal or medical facts.

          Now, to my mind, the facts are the least interesting part of those professions—I pay those professionals precisely for their nuance and judgment and experience beyond the bare facts of a situation. And I think the ChatGPTs of the world do embellish their responses with the kind of confidence and tone that implies nuance/judgment/experience they don’t have.

          But I do think @tekne was making a valid point.

          • terminalshort 46 minutes ago
            But that isn't the standard. You said it yourself "whether a reasonable person would confuse ChatGPT for a practicing lawyer, or doctor"

            So as long as people don't think that there is a licensed lawyer or doctor on the other end typing out those responses, and they don't, this should be legal.

        • operatingthetan 1 hour ago
          That is not remotely ad hominem. I'd suggest you refresh your understanding.
    • bluefirebrand 1 hour ago
      Yes and since chatbots cannot be held accountable directly their owners must

      And yes, corporations own their chatbots. They aren't independent life forms

    • 0xy 1 hour ago
      Ah yes, protectionism for $400/hour lawyers gatekeeping knowledge to protect tenants and abuse victims. Incredible!
  • mbgerring 1 hour ago
    Yes, that’s correct, I do not want a vibe-coded freeway overpass, thanks.

    We all need to get serious about the unavoidable, unsolvable fact that these tools produce output of unknowable accuracy. Some things require such accuracy, precision, and, importantly, accountability. LLMs are capable of none of these things. Refusing to be honest about this and take appropriate precautions will lead to disaster.

    • tantalor 1 hour ago
      > I do not want a vibe-coded freeway overpass

      I do. One of the reasons our infrastructure is so expensive is planning & design.

      For a single freeway overpass, you could be looking at $3M (25% of the total budget) before you have even broken ground. That covers feasibility studies, traffic modeling, rough layout, environmental studies, permitting, structural engineering, blueprints, bidding, contracts, community outreach, and the list goes on.

      If AI can reduce the cost of that by even 10%, that would be huge.

      • mbgerring 1 hour ago
        Cool, we agree, and if you think the place to cut corners on that is the engineering calcs, you have lost your mind. If you do that, not only will people die, you will drastically increase costs because the infrastructure project you built will collapse.

        Europe and Asia both have reliable, modern infrastructure that’s decades ahead of the United States and they did not need the million-monkeys-on-typewriters machine to accomplish that.