17 comments

  • aaclark 10 hours ago
    ai;dr

    MLP trained on 8 questions achieves ~0.3cm height error, ~0.3kg weight error, and ~3-4cm for bust/waist/hips measurements.

    https://www.mdpi.com/1424-8220/22/5/1885 + some hacking => "we want to productize this"

    • endofreach 9 hours ago
      > ai;dr

      Haven't seen that one yet. I like it.

      • wholinator2 2 hours ago
        I feel like tl:ai would fit better, because ai:dr reads as "ai, didn't read", but presumably the ai did read, while "too long, ai" fits the action better
        • bogwog 1 hour ago
          > because ai:dr reads, "ai, didn't read"

          That's not how it reads because there is a semicolon in there. It means "This is AI, so I didn't read it".

          Also, I'm getting nitpicky here, but LLMs don't "read".

          • anuramat 41 minutes ago
            > LLMs don't read

            so it's ok to say "SSD read/write speed", but now that we have something closer to the original meaning of the word, someone always has to point out that "LLMs don't have a soul" (or whatever you think is required for it to count as akchyually reading)

            do storage devices have souls?

    • mcphage 3 hours ago
      What does it mean to have 0.3cm height error, when height is one of the 8 questions?
  • xenonite 9 hours ago
    Well, sorry, no: the torso-to-leg length ratio isn't covered by any of their questions. (And yes, they list it as a limitation.)
  • minhajulmahib 2 hours ago
    The ancestry finding is the most honest part of this post: training on a uniform blendshape mix but inferring with the same fixed mix was essentially a 3 kg noise floor they built themselves. Elegant fix: just add ancestry to the questionnaire so the train and inference distributions match.

    The physics-aware loss is interesting too. Including the Anny forward pass so mass gradients flow back through all volume-related params together, rather than solving each of the 58 outputs independently like Ridge, is exactly the right call. Ridge can't couple params; the MLP hidden layers can.

    I've been thinking about a similar problem from the opposite direction: instead of reconstructing bodies, I'm working on running small models on very constrained hardware (NanoMind, 2GB RAM Android phones). The "boring model, interesting data pipeline" lesson resonates strongly. Upstream data quality always matters more than architectural complexity.
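The Ridge point above can be made concrete: with a squared-error loss, multi-output ridge regression decomposes into independent per-output solves, so nothing ties the 58 outputs together. A minimal toy check (random data, made-up sizes, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))   # 8 questionnaire answers (toy data)
Y = rng.normal(size=(100, 3))   # 3 of the body-shape outputs (toy data)

lam = 1.0
# Multi-output ridge: one closed-form solve for all outputs at once.
W_joint = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ Y)

# Per-output ridge: solve each output column on its own.
W_sep = np.column_stack([
    np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ Y[:, j])
    for j in range(Y.shape[1])
])

# The two coincide: the squared-error loss decomposes per output,
# so ridge cannot couple outputs; a shared MLP hidden layer can.
print(np.allclose(W_joint, W_sep))  # True
```

A differentiable forward pass in the loss (as the post describes) breaks exactly this decomposition, which is why it needs the MLP.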
  • RobotToaster 7 hours ago
    Tangential, but does anyone else keep reading "MLP" as "my little pony".
  • sorenjan 5 hours ago
    I don't understand why the height and weight errors aren't 0 when they are known inputs? If I say how tall I am, why is the model estimating something else?
    • gwerbin 3 hours ago
      That's a common phenomenon in model fitting, depending on the type of model. In both old-school regression and neural networks, the fitted model doesn't distinguish between specific training examples and other inputs, so specific input-output pairs from the training data don't get special privilege. In fact, it's often a good thing that models don't just memorize input-output pairs from training, because that lets them smooth over uncaptured sources of variation, such as people all being slightly different, as well as measurement error.

      In this case they had to customize the model fitting to try to get the error closer to zero specifically on those attributes.
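A toy illustration of the point (hypothetical numbers, and plain ridge regression rather than the post's MLP): even when the target is essentially the input itself, a fitted model smooths over noise instead of copying the input through, so its "error" on a known quantity is nonzero:

```python
import numpy as np

# Toy setup: "predict height" where stated height is itself an input.
rng = np.random.default_rng(1)
x = rng.uniform(150, 200, size=50)        # stated height (cm)
y = x + rng.normal(0, 0.5, size=50)       # measured height, slight noise

# Ordinary ridge fit with an intercept term.
X = np.column_stack([x, np.ones_like(x)])
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

# The model's height estimate at the stated heights is not the
# stated height: the fit smooths, it does not pass inputs through.
err = np.abs(X @ w - x)
print(err.max() > 0)  # True
```

Forcing the error to zero on those attributes takes an extra constraint or loss term, which is the customization described above.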

  • rgovostes 11 hours ago
    It takes more like 10 seconds. For a large range of height and weight inputs crossed with all option combinations, you could precompute ~10M measurements and return results basically instantly.
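That precomputation is cheap to sketch. Assuming a 1 cm height grid, a 1 kg weight grid, and six binary options (all made-up sizes; the model call below is a fake stand-in, not the post's actual model), the whole table fits in a dict:

```python
import itertools

def predict_measurements(height_cm, weight_kg, options):
    # Fake stand-in for the trained model's forward pass.
    return (height_cm * 0.53, weight_kg * 0.4 + sum(options))

heights = range(140, 211)                                # 71 values, 1 cm apart
weights = range(40, 151)                                 # 111 values, 1 kg apart
option_sets = list(itertools.product([0, 1], repeat=6))  # 64 combinations

table = {
    (h, w, opts): predict_measurements(h, w, opts)
    for h in heights for w in weights for opts in option_sets
}
print(len(table))  # 71 * 111 * 64 = 504384 precomputed entries
```

Every lookup after that is a single dict access; finer grids or more option questions push the table toward the ~10M entries mentioned above.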
  • faangguyindia 8 hours ago
    It has that kind of feel, as if it was made in Codex.
  • dalmo3 6 hours ago
    AI or not, I liked this bit:

    > Averages lie about the tails, and a person who gets a 15 cm bust error doesn’t care that the mean is 4 cm.

    A variation of that sentence should be mandatory in every scientific paper.

  • woohin 6 hours ago
    Interesting idea. Using a questionnaire as input for an MLP makes sense but the real challenge is designing questions that capture useful signal instead of noise. If that part is done well, the approach has a lot of potential.
  • tears-in-rain 4 hours ago
    From the title, I thought this would be an Akinator that produces some images of you via image-v2.
  • ggm 9 hours ago
    How big are the pockets and is it sex determined?
  • 0x1da49 5 hours ago
    [dead]
  • vijgaurav 4 hours ago
    [dead]
  • moralestapia 8 hours ago
    This is the best UI/UX article I've read this year. If the authors are around, I extend them my dearest congratulations ^^.
    • moralestapia 4 hours ago
      Like ... who/why would downvote this?

      This is definitely manipulated.

      • luma 3 hours ago
        My guess: the article itself is clearly AI-authored, and there are a fair number of us who don't particularly like the writing style. Further, it implies something about the original human's own valuation of this work: if they decided to let the machine handle it, why should I spend my own time reading what they didn't bother to write?
  • zimpenfish 10 hours ago
    I'm guessing the writing is AI-assisted (there's no fluidity, and it has some weirdly placed phrases), but I see they're in Poland and likely aren't native English speakers?