I'm not op but I don't think op meant to shame, I understand the construction "tell me you're... without telling me" as a way to highlight that something is unexpected to people who haven't done something, that is that something is particularly unintuitive without some special experience.
Actually, it really is not necessarily a 'hardware company' thing. I've been in 'hardware companies' where the RTL was just as available for viewing as the rest of the firmware/software.
In big hardware companies, things start getting siloed, but that probably has more to do with big companies (seemingly invariably) operating as a union of fiefdoms (Dunbar-number-ification?)
The recent news is that Apple is supposedly replacing the Core ML framework with an updated version that will make it easier to integrate third party LLMs into your apps.
> the company is also planning a few other software-based AI upgrades, including a new framework called Core AI. The idea is to replace the long-existing Core ML with something a bit more modern.
Reverse engineering with AI is only going to get better. I have seen some crazy things friends of mine have done with Claude alone. Let's just say SaaS isn't the only industry that could one day suffer.
We've got about a year before so many people are interacting with LLMs on a daily basis that their style starts to reverse-infect human speech and writing.
Great insight – Would you like to try and identify some specific "AI-isms" that you've noticed creeping into your own writing or your colleagues' emails lately?
Also the Prior Art section, which has telltale repetition of useless verbs like "documenting," "providing insight into," and "confirming" on each line. This was definitely AI-written, at least in part.
> Throughout this series, “we” refers to maderix (human) and Claude Opus 4.6 (by Anthropic) working as a pair. The reverse engineering, benchmarking, and training code were developed collaboratively
Sure, "collaboratively." Why would I ever trust a vibe-coded analysis? How do I, a non-expert in this niche, know that Opus isn't pulling a fast one on both of us? LLMs write convincing bullshit that fools even experts. Have you manually verified each fact in this piece? I doubt it. Thanks for the disclaimer, it saved me from having to read it.
Humans also write endless amounts of convincing bullshit, and have done since time immemorial. False papers and faked results have been a growing scourge in academia before LLMs were a thing, and that's just counting the intentional fraud - the reproducibility crisis in science, especially medical and psychological science, affects even the best designed and well intentioned of studies.
Humans also make mistakes and assumptions while reverse engineering, so the results will always need more engineers to go through them and test things.
I have always wondered if the neural engine could be used for training - pretty excited for part 3 of this to see if the juice is actually worth the squeeze
I remember the good old days when Apple was desperate for developers and produced great documentation and there were a lot of great 3rd-party books too. You can't just give out awards in hopes that someone will make that great app.
If you strip away the branding, Apple has shipped, and continues to ship, a ton of algorithms that likely use the ANE, and end users can use Core ML to do the same.
Just some things people will likely take for granted that, IIRC, Apple has said use the ANE (or would at least benefit from it): object recognition, subject extraction from images and video, content analysis, ARKit, spam detection, audio transcription.
Don’t forget Face ID and many of the image manipulation features.
And while everyone else went to more powerful giant LLMs, Apple moved most of Siri from the cloud to your device. Though they do use both (which you can see when Siri corrects itself during transcription—you get the local Siri version corrected later by the cloud version).
I just wanted to say that you’ve done an excellent job and am looking forward to the 3rd installment.
Why did you guys remove the ability to detach the console and move it to another window?
6.6 FLOPS/W, plus the ability to completely turn off when not in use, so 0W at idle.
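As a back-of-envelope sketch of what that figure buys you (assuming the "FLOPS/W" above is meant as TOPS/W, the unit NPU efficiency is usually quoted in; the per-inference op count below is purely illustrative):

```python
def joules_per_inference(model_ops: float, tops_per_watt: float) -> float:
    """Energy in joules to execute `model_ops` operations.

    TOPS/W is equivalent to 1e12 ops per joule, so:
        energy (J) = ops / (tops_per_watt * 1e12)
    """
    return model_ops / (tops_per_watt * 1e12)

# Hypothetical 10-GOP model on a 6.6 TOPS/W engine:
energy = joules_per_inference(10e9, 6.6)
print(f"{energy * 1e3:.3f} mJ per inference")  # ~1.515 mJ
```

Combined with gating to 0 W at idle, that is why always-on features like wake-word detection can live on the NPU without hurting battery life.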
The big takeaway isn't reverse engineering the ANE per se, but what Manjeet could do with his software engineering skills when accelerated by AI.
This is a good example of the present state of software engineering. Not future state - present state.
https://www.bloomberg.com/news/newsletters/2026-03-01/apple-...
- The key insight - [CoreML] doesn't XXX. It YYY.
With that being said, this is a highly informative article that I enjoyed thoroughly! :)
The article links to their own GitHub repo: https://github.com/maderix/ANE
It's not my subject, but it reads as a list of things. There's little exposition.
People seem to be going around pointing out that people talk like parrots, when in reality it's parrots that talk like people.
Did you develop your own whole language at some point to describe the entire world? No; you, I, and society all mimic what is around us.
Humans have the advantage, at least at this point, of being continuous learners, so we adapt and change with the language use around us.
Here is why you are correct:
- I see what you did there.
- You are always right.
Efficiency is the question.