Makes me miss Ruby. Been in Node/TypeScript recently. Everything is a callback returning a promise in some weird resolution chain, mapped and conditional types, having to define schemas for everything and getting yelled at by the LSP all day... Oh, and then you've got to write React components and worry about rerenders and undefined behavior caused by impure state, npm, arcane .json configs
Versus ActiveRecord, MVC, YAML configs, Bundler, beautiful syntax, a robust and trivially extendable stdlib, amazing native debugging and CLI docs out of the box; everything out of the box if you're using Rails
I do not understand why it's become increasingly irrelevant, especially in web development. I kinda get scripting--bash and Python tend to run everywhere
Lack of static types is one of the main reasons. Trying to decipher a complex Ruby on Rails codebase is unnecessarily difficult compared to TypeScript. The tooling is also shit unless you use RubyMine.
An absolute shame given how much good functionality is baked into RoR.
Wow! I'd love to hear more about how that's achieved
Nothing groundbreaking; we simply deploy Buildkite agents on EC2 nodes.
As mentioned in the post, the only thing really limiting CI parallelism is the ratio of "setup time" vs "test time". If your setup time is too long, you hit diminishing returns fast.
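A back-of-the-envelope model of that ratio (all numbers invented for illustration): each parallel worker pays the full setup cost, so setup time puts a hard floor on wall-clock time no matter how many workers you add.

```ruby
# Toy model: every one of N workers runs setup, then 1/N of the tests.
# setup and test_time are in seconds; the values below are made up.
def wall_clock(setup, test_time, workers)
  setup + test_time.to_f / workers
end

wall_clock(60, 600, 1)   # => 660.0
wall_clock(60, 600, 10)  # => 120.0
wall_clock(60, 600, 100) # => 66.0  (barely better than 10 workers)
```

Past a certain worker count, almost all of your compute is spent on setup, which is the diminishing-returns cliff the comment describes.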
What happened to Ruby? It was very successful at some point.
Maybe kids started using JS exclusively. But what happened to older developers? Did they move over?
Rails seemed to enable very fast prototyping and iteration. Isn't that still the case?
I see PHP usage going down, but PHP doesn't seem to have any advantages over JS, .NET, Python, or Go, while Ruby coupled with Rails promised easy and rapid development.
Of course, Ruby might not be best suited for large codebases or microservices, but probably 90% of the Internet is small-to-medium websites.
byroot sets a great example by sharing his code-optimization expertise. His blog has many great improvements like this. A 7x improvement in Dir.join and similar calls?! Thank you, byroot!
> More importantly, on CI systems it’s relatively common to check out code using git, and git doesn’t care about mtime
git doesn't care about mtime, but git maintains trees whose hash changes if any constituent part of the tree changes. It'd seem tempting to check for a .git directory and, if present, use the git tree hash to determine whether to invalidate the cache.
Aside from the oddness of making this cache git-aware, with the new implementation I suspect querying git to revalidate the cache would take longer than just rebuilding it.
Looking up the hash of a tree in git is few enough operations that I would be very surprised if that is true for all but the smallest caches. If you were to shell out to the git binary, maybe.
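A sketch of the content-based invalidation idea being discussed (method names here are invented, not from any real cache implementation): instead of trusting mtime, fingerprint the file contents and treat the cache as stale when the fingerprint no longer matches the stored one. A git-aware variant would compare a tree hash instead, e.g. the output of `git rev-parse HEAD:lib`.

```ruby
require "digest"

# Hash every file's path and content into a single fingerprint.
# Sorting makes the result independent of traversal order.
def content_fingerprint(paths)
  digest = Digest::SHA256.new
  paths.sort.each do |path|
    digest << path << "\0" << File.binread(path)
  end
  digest.hexdigest
end

# The cache is stale if the current fingerprint differs from the one
# that was recorded when the cache was built.
def cache_stale?(paths, stored_fingerprint)
  content_fingerprint(paths) != stored_fingerprint
end
```

Whether this beats simply rebuilding the cache depends on exactly the setup-versus-work ratio argued about above.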
Absolutely yes, all over the place! Startups are building greenfield software with Rails as we speak. Loads of established businesses have Ruby applications that are quietly chugging along doing their jobs well. & Shopify, a company with $1.6 billion in annual revenue, uses Ruby _very_ heavily & also invests in the wider Ruby ecosystem.
Ruby is not without its drawbacks & drama, but it’s elegant in a way that few languages are to this day (how many JS programmers _actually_ grok prototype-based object-orientation?) & compared to NPM, RubyGems is (lately) unexciting in the best way.
For pretty much everything. My terminal is in Ruby, with a Ruby font renderer, running a Ruby shell; my editor is in Ruby, as are my window manager and file manager.
(Yes, I'm taking it a bit far; my prototype Ruby compiler is self-hosting finally, so I guess sometime in the next 20 years I'll end up booting into a Ruby kernel for no good reason...)
I really like Ruby. It had a formative impact on my young programmer self, particularly the culture. So much joyful whimsy.
But like... something like a font renderer in Ruby? The thing that is incredibly cache-sensitive and gets run millions of times per day on a single machine? By far the slowest step of rendering any non-monospaced UI?
It doesn't typically get run millions of times per day, because in most regular uses it's trivial to cache the glyphs. I use it for my terminal, and it's not in the hot path at all for rendering, as it's only run the first time any glyph is rendered at a new size. If you want to add hinting and ligatures etc., it complicates the caching, but I have no interest in that for my use, and then it turns out rendering TrueType fonts is really easy:
https://github.com/vidarh/skrift
(Note that this is a port of the C-based renderer libschrift; the Ruby version is smaller, but much less so than "usual" when converting C code - libschrift itself is very compact)
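The caching described above can be as simple as a hash keyed on codepoint and size. A minimal sketch (GlyphCache and the renderer block are invented names, not taken from skrift):

```ruby
# Memoizing glyph cache: the expensive rasterization (the block) runs only
# the first time a (codepoint, size) pair is requested; afterwards the
# result comes straight out of the hash.
class GlyphCache
  def initialize(&renderer)
    @renderer = renderer
    @bitmaps = {}
  end

  def [](codepoint, size)
    @bitmaps[[codepoint, size]] ||= @renderer.call(codepoint, size)
  end
end

calls = 0
cache = GlyphCache.new { |cp, size| calls += 1; "bitmap(#{cp}@#{size})" }
cache["a", 16] # rasterizes
cache["a", 16] # cached; the renderer is not called again
```

With a fixed set of glyph sizes in a terminal, the renderer drops out of the hot path entirely after warm-up, which is the parent comment's point.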
People should. I seriously miss using it at my day job. It's not for code where type systems make things a lot more stable, but it's great for scripting and quick things. Also, ORMs in Ruby are truly nice, and I haven't found anything as good anywhere else.
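Part of what makes Ruby ORMs pleasant is metaprogramming like dynamic finders. A toy illustration of the mechanism (TinyModel and its data are invented; this is not ActiveRecord):

```ruby
# Synthesize find_by_<attribute> query methods on the fly via
# method_missing, the same trick classic ActiveRecord finders used.
class TinyModel
  ROWS = [{ id: 1, name: "alice" }, { id: 2, name: "bob" }]

  def self.method_missing(name, *args)
    if name.to_s.start_with?("find_by_")
      attr = name.to_s.delete_prefix("find_by_").to_sym
      ROWS.find { |row| row[attr] == args.first }
    else
      super
    end
  end

  def self.respond_to_missing?(name, include_private = false)
    name.to_s.start_with?("find_by_") || super
  end
end

TinyModel.find_by_name("alice") # => { id: 1, name: "alice" }
```

In a real ORM the row lookup becomes generated SQL, but the ergonomics come from exactly this kind of dynamic dispatch.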
I actually think types are an anti-pattern. I've seen more code with type escape hatches than bugs in Ruby. The truth is, if you follow TDD and good coding patterns, bugs in a dynamic environment are unlikely to show up.
Generally speaking Ruby has the best APIs.
Frameworks and packages, sure. I’m not sure I would agree with APIs.
ActiveAdmin is best in class, Rails is fantastic; but there’s a lot of insanity in the API for a language that “gets out of the way” and “just works”
Slice is my favorite example. (It’s been a bit since I’ve used it)
[0].slice(0, 100) == [0]
[].slice(0, 100) == …
exception? Or nil? Why does it equal []?
For a “give me an array back that starts from a given, arbitrary index, and auto-handle truncation” not having that behavior continues to confuse me from an intuitive perspective. Yes, I understand the source of it, but why?
Because [] is an array with nothing in it, and [0] is an array with something in it.
So saying “give me the array containing the first 100 elements of this array with one element” would obviously give you the array with one element back.
Saying “give me the array containing the first 100 elements of this array with zero elements” would follow that it just gives the empty array back.
On top of that, because Ruby is historically duck-typed, having something always return an array or an error makes sense; why return nil when there's a logical explanation for defined behavior? Ditto for throwing an error.
Seems thoughtfully intuitive to me.
[0, nil, nil, nil, …x100, nil] is the same as [0] in terms of access. In both cases, trying to access the 100th element (e.g. [0][100]) will give nil.
[].slice(0, 100).each do |x| puts x end
That shouldn't be an error, and it seems to be the principle of least surprise imo.
Yeah, returning an empty array is pretty much exactly what I would expect given the first example. It would be a lot weirder to me if you were allowed to give an end index past the last element only if the array happened to be non-empty.
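For concreteness, this is Array#slice's actual behavior in CRuby; note it's an out-of-range start index, not an oversized length, that produces nil:

```ruby
[0].slice(0, 100)     # => [0]  length is clamped to what's available
[].slice(0, 100)      # => []   start == array length is still in range
[].slice(1, 100)      # => nil  start is beyond the array's length
[1, 2, 3].slice(3, 5) # => []   the same boundary rule on a non-empty array
```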
I use Rails for many of my side projects. Because of the emphasis on convention over configuration, Rails codebases tend to be succinct with minimal boilerplate, which keeps context windows small. That in turn makes it great for agent-assisted work.
For web stuff, with server-side rendering and partials it means minimal requirement to touch the hot mess that is JavaScript, and you can build PWAs that feel native pretty easily with Hotwire.
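The server-side-rendering flavor is easy to sketch with nothing but stdlib ERB (standing in here for Rails view partials; the template and variable names are invented):

```ruby
require "erb"

# Render an HTML fragment server-side. In Rails this would be a partial,
# with Hotwire/Turbo swapping the resulting fragment into the page.
template = ERB.new("<ul><% items.each do |item| %><li><%= item %></li><% end %></ul>")
html = template.result_with_hash(items: %w[one two])
# html == "<ul><li>one</li><li>two</li></ul>"
```

Because the markup arrives fully rendered, the client-side JavaScript surface stays tiny, which is the point being made above.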
Ruby is slow as fuck though, so there's a tradeoff there.
YJIT is amazing but for me, JRuby and TruffleRuby were the real game changers.
For anything "slow" I can put it in Sidekiq and just run the worker code with TruffleRuby.
I have high hopes for ZJIT, but I think TruffleRuby is the project that proves Ruby the language doesn't have to be slow, and it's still getting better.
If ZJIT, JRuby, or TruffleRuby can get within 5-10% of the speed of Go without having to rewrite code, I would be very happy. I don't think TruffleRuby is far off that now.
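Claims like these are easy to spot-check on your own workload; stdlib Benchmark.realtime is enough for a rough number across implementations (fib here is just a stand-in for real worker code):

```ruby
require "benchmark"

# Deliberately naive recursion, purely as a CPU-bound stand-in.
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

seconds = Benchmark.realtime { fib(25) }
puts format("fib(25) took %.3fs on %s", seconds, RUBY_ENGINE)
```

Run the same script under CRuby (with and without --yjit), JRuby, and TruffleRuby and compare; for JITs, remember to let the process warm up before trusting the numbers.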
When I touch JS and Python... I prefer ONLY an AI-agentic style of working.