4 comments

  • ValdikSS 1 hour ago
    That's why LLMs will eventually be used only for the initial interaction with the user, in their language, to prepare the data for a specialized model.

    Imagine face recognition working like a text chat, where the PC gets the frame from the camera and writes in the chat: "Who's that? Here's the RGB888 image in hex: ...".
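    A quick back-of-the-envelope calculation (assuming a plain VGA frame, just to make the point concrete) shows why pushing raw pixels through a chat is absurd:

```python
# Rough size of one camera frame pasted into a chat as hex,
# assuming a 640x480 VGA frame in RGB888 (3 bytes per pixel).
width, height = 640, 480
raw_bytes = width * height * 3     # bytes of raw pixel data
hex_chars = raw_bytes * 2          # hex doubles the size: 2 chars per byte

print(raw_bytes)   # 921600 bytes of raw pixels
print(hex_chars)   # 1843200 characters of hex per frame
```

    Nearly two million characters per frame, before the model has even started "reading" the image.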

  • westurner 46 minutes ago
    Wouldn't this be faster with an agent skill that has code?

    /skill-creator [or /create-skill] Write an agent skill with code script(s) that use an existing user space IP library that works with your agent runtime, to [...]

    ComposioHQ/awesome-claude-skills: https://github.com/ComposioHQ/awesome-claude-skills

    anthropics/skills//skill-creator/SKILL.md: https://github.com/anthropics/skills/blob/main/skills/skill-...

    /.agents/skills/skill-name/SKILL.md, scripts/{script_name.py,__init__.py}

    https://agentskills.io/what-are-skills
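    Laid out as a tree, the paths above (directory and file names taken from the comment, not from any spec) would look like:

```
.agents/skills/skill-name/
├── SKILL.md               # skill description + when the agent should invoke it
└── scripts/
    ├── __init__.py
    └── script_name.py     # the code the agent runs instead of reasoning in-context
```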

    • trollbridge 28 minutes ago
      Well, yeah, of course it would be.

      Even faster would be to just use code in the first place!

  • brcmthrowaway 1 hour ago
    Next up: Claude replacement to handle simdjson processing.