The ancestry finding is the most honest part of this post — training on a uniform blendshape mix but inferring with the same fixed mix was essentially a 3 kg noise floor they built themselves. Elegant fix: just add ancestry to the questionnaire so train/inference distributions match.
The physics-aware loss is interesting too. Including the Anny forward pass so mass gradients flow back through all volume-related params together — rather than solving each of the 58 outputs independently like Ridge — is exactly the right call. Ridge can’t couple params; the MLP hidden layers can.
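The coupling can be shown with a toy gradient calculation. This is a minimal sketch, not the authors' implementation: the real Anny forward pass is a nonlinear mesh-volume computation, so I stand in a hypothetical linear "mass proxy" (`vol_weights`) just to show how a single mass term spreads gradient across every volume-related output at once.

```python
import numpy as np

# Hypothetical stand-in for the differentiable mass computed by the Anny
# forward pass: a weighted sum of volume-related shape parameters.
# (The real mapping is nonlinear; a linear proxy suffices to show coupling.)
vol_weights = np.array([0.5, 0.3, 0.2, 0.0])  # last param doesn't affect mass

def physics_aware_grad(y_hat, y_true, target_mass, lam=1.0):
    """Gradient of ||y_hat - y_true||^2 + lam * (mass(y_hat) - target_mass)^2
    with respect to y_hat. The mass term couples all volume-related outputs."""
    mass = vol_weights @ y_hat
    return 2 * (y_hat - y_true) + 2 * lam * (mass - target_mass) * vol_weights

y_hat = np.array([1.0, 1.0, 1.0, 1.0])
y_true = np.array([1.0, 1.0, 1.0, 1.0])  # per-output error is already zero
g = physics_aware_grad(y_hat, y_true, target_mass=0.9)
# g is nonzero on every volume param despite zero per-output error: the
# residual mass error pushes on them jointly, which a per-output Ridge
# fit structurally cannot do.
```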
I’ve been thinking about a similar problem from the opposite direction — instead of reconstructing bodies, I’m working on running small models on very constrained hardware (NanoMind — 2GB RAM Android phones). The “boring model, interesting data pipeline” lesson resonates strongly. Upstream data quality always matters more than architectural complexity.
I don't understand why the height and weight errors aren't 0 when they're known inputs. If I say how tall I am, why is the model estimating something else?
That's a common phenomenon in model fitting, depending on the type of model. In both old-school regression and neural networks, the fitted model does not distinguish between specific training examples and other inputs, so specific input-output pairs from the training data don't get special privilege. In fact it's often a good thing that models don't just memorize input-output pairs from training, because that lets them smooth over uncaptured sources of variation, such as people all being slightly different, as well as measurement error.
In this case they had to customize the model fitting to try to get the error closer to zero specifically on those attributes.
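The post doesn't give the authors' exact recipe, but one common way to "customize the fitting" is to weight the squared error per output, so the targets that are also known inputs (height, weight) dominate the loss. A minimal sketch, with made-up numbers and an assumed 50x weight:

```python
import numpy as np

def weighted_mse(pred, target, weights):
    """Per-output weighted MSE; large weights push those errors toward zero."""
    return np.mean(weights * (pred - target) ** 2)

pred = np.array([170.2, 70.4, 92.0])    # height cm, weight kg, bust cm
target = np.array([170.0, 70.0, 95.0])
w = np.array([50.0, 50.0, 1.0])         # upweight height/weight errors
loss = weighted_mse(pred, target, w)
# The optimizer now pays 50x more for a cm of height error than a cm of
# bust error, driving the known-input errors toward (but not exactly to) zero.
```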
It takes more like 10 seconds. For a large range of height and weight inputs crossed with all option combinations, you could precompute ~10M measurements and return results basically instantly.
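A sketch of the precompute idea, under assumed granularity (1 cm / 1 kg steps, a hypothetical set of discrete options) and with a dummy `fake_model` standing in for the real MLP + measurement pass:

```python
import itertools

# Assumed grid: height 120-210 cm, weight 35-180 kg, in unit steps,
# crossed with a stand-in set of discrete questionnaire options.
HEIGHTS = range(120, 211)   # 91 values
WEIGHTS = range(35, 181)    # 146 values
OPTIONS = list(itertools.product(range(3), range(3), range(4)))  # 36 combos

def fake_model(h, w, opts):
    """Placeholder for the real MLP + measurement computation."""
    return (h * 0.53 + opts[0], w * 1.3 + opts[1])  # dummy bust/waist values

# Precompute every (height, weight, options) combination once...
table = {(h, w, o): fake_model(h, w, o)
         for h in HEIGHTS for w in WEIGHTS for o in OPTIONS}
# ...then serving a request is a dict lookup, no model call at all.
result = table[(175, 70, (0, 0, 0))]
```

This toy grid is ~478k entries; finer steps or more options push it into the millions, which is still trivial to store and serve.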
Interesting idea. Using a questionnaire as input for an MLP makes sense but the real challenge is designing questions that capture useful signal instead of noise. If that part is done well, the approach has a lot of potential.
My guess: the article itself is clearly AI-authored, and a fair number of us don't particularly like the writing style. Further, it implies something about the original human's own valuation of this work - if they decided to let the machine handle it, why should I spend my own time reading what they didn't bother to write?
I'm guessing the writing is AI-assisted (there's no fluidity, and some phrases are weirdly placed), but I see they're in Poland and likely aren't native English speakers?
An MLP trained on 8 questions achieves ~0.3 cm height error, ~0.3 kg weight error, and ~3-4 cm error on bust/waist/hips measurements.
https://www.mdpi.com/1424-8220/22/5/1885 + some hacking => "we want to productize this"
Haven't seen that one yet. I like it.
That's not how it reads because there is a semicolon in there. It means "This is AI, so I didn't read it".
Also, I'm getting nitpicky here, but LLMs don't "read".
so it's ok to say "SSD read/write speed", but now that we have something closer to the original meaning of the word, someone always has to point out that "LLMs don't have a soul" (or whatever you think is required for it to count as akchyually reading)
do storage devices have souls?
> Averages lie about the tails, and a person who gets a 15 cm bust error doesn’t care that the mean is 4 cm.
A variation of that sentence should be mandatory in every scientific paper.
This is definitely manipulated.