My Workflow Is 70% AI, 20% Copy-Paste, 10% Panic. What's Yours?

28 points | by jamessmithe 19 hours ago

36 comments

  • mrkeen 19 hours ago
    I skip reading anything "written" by my analyst and get on with the work.

    If I have a question I can just ask ChatGPT, perplexity and Gemini.

    • coldtea 19 hours ago
      >If I have a question I can just ask ChatGPT, perplexity and Gemini

      Which get their knowledge (training data) on relevant topics from analysts. Who, increasingly, use ChatGPT and the rest to produce those write-ups.

      Enough loops of this, and analyst writings and ChatGPT responses on market analysis will soon reach "useless bullshit" parity.

      • bbarnett 18 hours ago
        Indeed. I get the vibe that the submission author doesn't really check the results of LLM output either. Which of course speeds this process up.
  • miah_ 18 hours ago
    0% AI, 80% YAML Jockey, 10% SSH Shenanigans, 10% Python programming

    Been doing sysadmin since the '90s. Why bother with AI? It just slows me down. I've already scripted my life with automation. Anything not already automated probably takes me a few minutes, and if it takes longer I'll build the automation. Shell scripts and Ansible aren't hard.

    • eldavojohn 17 hours ago
      What if you took those scripts that you have used to automate your life, dumped them into something like cursorAI and asked the model to refine them, make them better, improve the output, add interactivity, generalize them, harden them, etc?

      Sometimes when I have time to play around I just ask models what stinks in my code or how it could be better with respect to X. It's not always right or productive but it is fun, you should try it!

      • blibble 13 hours ago
        > add interactivity

        just what I want, interactivity in my ansible playbook

        > It's not always right or productive but it is fun, you should try it!

        yey, introducing bugs for literally no reason!

    • JustExAWS 7 hours ago
      And get off your lawn?

      I’ve been developing professionally since 1996. I started on DEC VAX and Stratus VOS mainframes in Fortran and C, and led the build-out of an on-prem data center, raised floors and all, to hold a SAN with a whopping 3TB of storage along with other networking gear and server software.

      Before I started developing professionally, I did assembly language on 65C02, 68K, PPC and x86 for 10 years.

      In between then and now, I’ve programmed professionally in C, C++, VB6, Perl, Python, C#, and JavaScript.

      Now all of my work is “cloud native” from development to infrastructure, and I take advantage of LLMs extensively.

      It’s not a mark of honor to brag that you don’t use the latest tools.

  • ninkendo 17 hours ago
    I spent the entirety of yesterday, from around 8:30 until almost exactly 5pm, doing a relatively straightforward refactor to change the types of every identifier in our system, from the protobuf to the database, from a generic UUID type to a distinct typesafe wrapper around UUID for each one. This is so that passing IDs to functions expecting identifiers for one particular type vs. another is less error-prone.
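
    Roughly, the wrapper pattern in question looks like this; just a minimal sketch with made-up type names, not our actual code:

    ```rust
    use uuid::Uuid;

    // Each kind of identifier gets its own newtype around Uuid.
    #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
    struct UserId(Uuid);

    #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
    struct OrderId(Uuid);

    // Only accepts a UserId; passing an OrderId (or a bare Uuid) is now a
    // compile error instead of a runtime bug.
    fn load_user(_id: UserId) { /* ... */ }
    ```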

    It was a nonstop game of my IDE’s refactoring features, a bunch of `xargs perl -pi -e 's/foo/bar/;'`, and repeatedly running `cargo check` and `cargo clippy --fix` until it all compiled. It was a 4000+ line change in the end (net 700 lines removed), and it took me all of those 8.5 hours to finish.

    Could an AI have done it faster? Who knows. I’ve tried using Cursor with Claude on stuff like this and it tends to take a very long time, makes mistakes, and ends up digging itself further into holes until I clean up after it. With the size of the code base and the long compile times I’m not sure it would have been able to do it.

    So yeah, a typical day is basically 70% coding, 20% meetings, and 10% slack communication. I use AI only to bounce ideas off of, as it seems to do a pisspoor job of maintenance work on a codebase. (I rarely get to write the sort of greenfield code that AI is normally better at.)

  • nabla9 18 hours ago
    I hope you come back 2 years from now and let us know if you still have a job and how your wage has been developing. The way you describe your workflow does not look promising.
  • al_borland 14 hours ago
    It sounds like you will be forever limited by the AI, and easy to replace.

    This should be concerning.

  • enceladus06 1 hour ago
    This afternoon (2025-09-10) I managed to get Cellpose 4.06 running on my M4 Mac with Metal Performance Shaders (MPS) for optimization in a mamba environment. When I tried to train a model, though, Cellpose gave me problems.

    So I used Claude 4 to search for a solution. It said to downgrade to 4.04. TL;DR: it worked. The whole process took like 30 seconds, much faster than manually Googling. Yes, this is just one anecdote, but LLMs have sped my workflow up a bit.

  • mnky9800n 18 hours ago
    I spend my time like this:

    - reading papers, blogs, articles, searching google scholar, and chatting with perplexity about them to help find other papers

    - writing research proposals based on my reading and previous research

    - analysing data; lately this means asking Claude Code to generate a notebook for playing around with the data

    - writing code to explore some data or make some model of the data; this also involves a lot of Claude Code interaction these days

    - meetings, slack, email

    - doing paper and proposal reviews, which include any or all of the above tasks plus writing my opinion in whatever format is expected

    - travelling somewhere to meet with colleagues at a conference or their workplace to do some collaboration that includes any or all of the above, plus giving talks

    - organising events that bring people together to do any to all of the above together

    I’m a soft-money research scientist with a part-time position in industry, working as a consultant.

  • neonnoodle 18 hours ago
    If I were your boss I would fire you.
  • skydhash 18 hours ago
    As a developer, it's a lot of docs and code reading. And writing reports on tickets. Sometimes some deep planning. Writing code is the guilty pleasure of the day.
  • clbrmbr 18 hours ago
    I’m not an analyst but a product owner and developer. Sometimes I feel similarly. However, I notice a very interesting thing when I turn the AI off for a day and go back to the pre-copilot days: my work is more focused, concise, comprehensible, impactful, and memorable.

    Still, I have this feeling that AI is very close to “doing my work”, and yet when I step back I see it may be a rather seductive mirage.

    Very unclear. Hard to see with the silicon-colored glasses on.

  • SirensOfTitan 18 hours ago
    I use AI almost exclusively for search, and usually force myself to grind against a problem a little before engaging it. When I do use it for software development, I treat AI as a smart codemod tool: pointed at easily verifiable, well-defined tasks that are low mental effort but high time commitment.

    I keep a list of "rules of engagement" with AI that I try to follow so it doesn't rob me of cognitive engagement with tasks.

  • MrContent04 18 hours ago
    Mine is probably 60% AI, 25% structured research, 10% copy-paste, and 5% staring at the screen until inspiration strikes. AI helps me brainstorm and speed up repetitive tasks, but I always double-check and refine everything manually. The “panic” part is real though — especially when deadlines creep up faster than expected. Curious to see how others balance between AI assistance and good old-fashioned problem-solving.
  • Aldipower 18 hours ago
    90% Bazel, 5% AI, 5% day dreaming.

    Not sure if the Bazel or AI part is worse. :-D I think Bazel.

  • Prcmaker 18 hours ago
    50% existential crisis, 50% meetings, 50% work a grad should be doing.

    I've just taken a week off to help extended family with a project, and it's reminded me what a good job is.

  • simianwords 18 hours ago
    Where's the part where you verify what AI has given?
  • ontouchstart 18 hours ago
    When we write programs that "learn", it turns out that we do and they don't. — Alan Perlis

    So what do you learn?

  • paulcole 3 hours ago
    100% AI. I don’t even bother trying to understand the code anymore. It’s faster to test and then iterate with another prompt.
  • sitzkrieg 10 hours ago
    it is pure comedy that someone has this job over another person who could actually perform the work, and felt the need to post this
  • emehex 18 hours ago
    My flow (with legacy software) is: manual strip > LLM > manual clean up > repeat
  • Applejinx 18 hours ago
    Mine is doing the work… o_O
    • hoppp 18 hours ago
      Same.

      People these days do everything to avoid actually programming, but they still wanna call themselves programmers

      • tiborsaas 18 hours ago
        Programming is much more than typing code on a keyboard.
    • fuckyah 18 hours ago
      [dead]
  • codesnik 18 hours ago
    I really hope you don't just get your data from ChatGPT
  • singularity2001 17 hours ago
    80% AI, 10% guidance, 10% angry guidance, and 1% panic
  • koakuma-chan 18 hours ago
    Ask HN: will my boss prompt AI himself?
  • oaiey 18 hours ago
    As an analyst it is your job to prepare valuable information for others. If you drop AI-generated stuff on people unreflected, uncorrected, and outdated, you will lose your job. I am starting to reject meeting minutes created by AI that are not understood by the writer and are not polished.

    AI is a tool for you to create better results, not an opportunity to offload thinking onto others (as is so often done now).

    • everdrive 17 hours ago
      You're correct, but I'm sure his boss feels differently. In my experience businesses really care very little for precision and excellence in your work product. I don't just mean they cannot recognize excellence (although this is true as well) but that they actively dislike precision and would lump a lot of that sort of effort all into one bucket they might describe as "being too pedantic," or "not seeing the bigger picture," or some other epithet which _could_ be true, but in practice just means "I just want stuff done quickly and don't care if it's half-assed. We just need to report success to management which will never know or care about the truth."
      • i7l 17 hours ago
        This. Most bosses are so obsessed with applying Pareto's 80/20 rule in all situations (even when it does not apply) that most would trade accuracy for velocity without thinking. Frankly, I doubt the average manager would know wrong data when confronted with it.
    • dgroshev 18 hours ago
      That's precisely it.

      Previously, the output of office work was always tightly associated with the accountability that output implies. But the output is visible and measurable while accountability isn't, so when it became possible to generate plausible-looking professional output, most people assumed that's all there is to the work.

      We're about to discover that an LLM can't be chastened or fired for negligence.

  • pornel 17 hours ago
    I didn't think LLM's sycophancy would work on me.

    And yet, I've realized that a few research and brainstorming sessions with LLMs that I thought were really good and insightful were just the LLM playing "yes, and" improv with me, reinforcing my beliefs regardless of whether I was right or wrong.

  • WJW 18 hours ago
    My workflow is probably 20% coding, 50% thinking about how to code whatever needs fixing, 10% looking at metrics and 20% getting distracted. AI has proven almost entirely useless for the type of disentangling spaghetti code that makes up most of my work. Then again I'm not an analyst so you do you.
    • lvncelot 18 hours ago
      Same boat, and if anything AI has made the "disentangling spaghetti code" part even more annoying. In the past, badly designed code at least had the decency to also look bad at first glance.
  • moron4hire 17 hours ago
    In any given week I spend 50 - 60% of my time in meetings. Half of that time is listening to PMs madlib ideas for how AI is going to do everything for us, and the other half is spent listening to junior developers and analysts make excuses for why they haven't gotten anything done in the last week, despite using AI to try to get their jobs done. Across 5 projects employing 15 people, I am the only senior developer and have as much experience as everyone else combined.

    I spend 20 - 30% of my week on administrative paperwork. Making sure people are taking their required trainings. Didn't we just do the cyber security ones? Yes, we did, but IT got hacked and lost all the records showing that we did, so we need to do it again.

    I spend 10 - 20% of my week trying to write documentation that Security tells me is absolutely required but has never gotten me any answers from them on whether they are going to approve any of my applications for deployment. In the last 2 years, I've gotten ONE application deployed and I had to weaponize my org chart to get it to happen.

    That leaves me about -10 - 20% of the week to get the vast majority of all of the programming done on our projects. Which I do. If you look at the git log, my name dominates.

    I don't use AI to write code because I don't have time to dick around with bad results.

    I don't use AI to write any of my documentation or memos. People generally praise my communication skills for being easy to read. I certainly don't have time to edit AI's shitty writing.

    The only time I use AI is when someone from corporate asks me to "generate an AI-first strategy for blah blah blah". I think it's a garbage initiative so I give them garbage work. It seems to make them happy, and then they go away and I go back to writing all the code by hand. Even then, I don't copy-paste the response; I type it out longhand while reading it, just in case anyone asks me any questions later. Despite everyone telling me "typing speed isn't important to a software developer," I type around 100WPM, so it doesn't take too long. Not blazing fast, but a lot faster than every other developer I know.

    So, forgive me if I don't have a lot of sympathy for you. You sound like half the people in my company, claiming AI makes them more productive, yet I can't see anywhere in any hard artifacts where that productivity has occurred.

  • Simulacra 17 hours ago
    About 50% AI, 30% synthesizing what I found, 10% copy paste, 10% manual research.
  • surgical_fire 18 hours ago
    10-20% AI. Typically for boring and repetitive coding tasks. Also for some quick research, bouncing ideas around, or finding different ways to do certain things.

    The rest is actual coding (where using AI typically slows me down), design, documentation, handling production incidents, monitoring, etc.

  • blibble 18 hours ago
    be prepared to be laid off
  • fsflover 18 hours ago
    You should add "Ask HN:" to the title.
    • 4ndrewl 18 hours ago
      Gemini would have told them that...
  • satisfice 18 hours ago
    1% AI for productive work. My work is training, developing training, experimental testing and writing experimental test tools that become part of my training. The training is all about software testing.

    I find that it’s easier to write code than to write English statements describing code I want written.

    I can’t phone this work in. It has to be creative and also precise.

    I know no way to design useful training experiences using AI. It just comes out as slop.

    When I am coding, I use Warp. It often suggests bug fixes, and I do find that these are worth accepting, generally speaking.

  • blitzar 18 hours ago
    I generally come in at least fifteen minutes late. After that I sorta space out for an hour; I just stare at my desk, but it looks like I'm working. I do that for probably another hour after lunch too. I'd say in a given week I probably only do about fifteen minutes of real, actual, work.
    • martypitt 18 hours ago
      What if -- and this is a hypothetical -- you were offered some kind of "stock option" or "equity share" scheme?
      • blitzar 18 hours ago
        Good luck with your layoffs. I hope your firings go really well.
    • satisfice 18 hours ago
      Office Space quote?

      You’re the kind of go getter that has upper management written all over you.

      • blitzar 17 hours ago
        Um, I'm gonna need you to go ahead and come in tomorrow. So if you could be here around 9:00, that would be great. Mm-Kay
    • ipnon 18 hours ago
      Isn’t this basically utopia come? Why the doom and gloom? You’re getting paid to do practically nothing.
      • mnky9800n 18 hours ago
        I never understood how George Jetson has so much friction with his boss when all he has to do is press the button.
      • mrbombastic 18 hours ago
        This is a quote from “Office Space” by Mike Judge of “King of the Hill” and “Silicon Valley” fame. It's a great movie, you should check it out!
        • surgical_fire 18 hours ago
          > Mike Judge of “King of the Hill” and “Silicon Valley”

          And, more importantly, Beavis and Butthead.

  • yuyu74189w 17 hours ago
    [dead]
  • hoppp 18 hours ago
    So your work can be automated by AI. Don't tell your boss or you are fired
    • james_marks 18 hours ago
      OP listed N tools they stitch together in a creative and thoughtful way (“30% brainstorming”) which happen to leverage AI.

      What’s the fireable offense? Does the boss want to stitch those tools together themselves?

      If the output is crap- regardless of the tool- that’s a different story, and one we don’t have enough info to evaluate.

      • hoppp 18 hours ago
        There doesn't have to be an offence; it's cost reduction for the company.

        It depends how mission critical his brainstorming is for the company. LLMs can brainstorm too.

        • james_marks 15 hours ago
          My latest take is: AI amplifies human intent. For now at least, it very much needs someone with vision to guide and leverage it, and this can easily be a full time job.

          That means OP’s job may be _safer_, because they are getting higher leverage on their time.

          It’s their colleague who’s ignoring AI that I see as higher risk.