SBCL uses a single zero bit to tag integers. This trick means the representation of n is just 2n, so you can add the values directly without any decoding.
It obviously also means that all the other tag values have to use 1 as the last bit.
That also implies that NIL cannot be represented as 0, which is a pity since testing the Z flag would be quick. I'd think someone ran the numbers and found the chosen encoding superior, but that would have been long ago (in the CMUCL code base).
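The 2n trick is easy to demonstrate. Below is a minimal model of this fixnum scheme on a 64-bit word (a sketch, not SBCL's actual code): fixnums end in a 0 bit, so the tagged value of n is just 2n, and tagged addition needs no decoding because 2a + 2b = 2(a + b).

```python
# Minimal model of a one-bit fixnum tag on a 64-bit word.
# Fixnums end in 0; the tagged representation of n is 2n.
WORD_MASK = (1 << 64) - 1

def tag_fixnum(n):
    """Tagged representation: shift left by one (i.e. 2n), mod 2^64."""
    return (n << 1) & WORD_MASK

def untag_fixnum(w):
    """Arithmetic right shift by one recovers n, sign included."""
    if w >= 1 << 63:          # reinterpret the word as signed
        w -= 1 << 64
    return w >> 1

# Addition works directly on tagged values: 2a + 2b == 2(a + b).
a, b = tag_fixnum(21), tag_fixnum(-4)
assert untag_fixnum((a + b) & WORD_MASK) == 17
```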
I suspect with modern CPU pipelining and branch prediction that most of the (very interesting) debates about exactly how to tag values and pointers have become inconsequential above the noise level.
Would love to see recent work demonstrating this isn’t true!
Traditionally tagging has been done in the low bits because the last three bits of an object pointer are always zero thanks to alignment, so you can just put a tag there and mask it off (or fold the untagging into a lea with an offset, which is especially handy for data structures where you'd use an offset anyway, like pairs or vectors). On 64-bit architectures there are two unused bytes at the top (one byte with five-level paging), but those must be masked before dereferencing, since a canonical pointer requires them to be all zeros or all ones. On 32-bit architectures the high bits were in use and unsuitable for tags. All in all, I think the low bits are still the most useful place for tags, even though 32-bit is no longer an important consideration.
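The low-bit scheme can be sketched in a few lines. This is illustrative only: the tag values below are hypothetical, not SBCL's actual assignments, and the addresses stand in for real heap pointers.

```python
# Sketch of low-bit pointer tagging: 8-byte alignment leaves the
# bottom 3 bits of every object address zero, so a 3-bit tag can
# live there. Tag values here are made up for illustration.
TAG_BITS = 3
TAG_MASK = (1 << TAG_BITS) - 1

TAG_CONS  = 0b011   # hypothetical tag for a cons-cell pointer
TAG_OTHER = 0b111   # hypothetical tag for other heap objects

def tag_pointer(addr, tag):
    # Alignment guarantees the low bits are free for the tag.
    assert addr & TAG_MASK == 0, "object must be 8-byte aligned"
    return addr | tag

def untag_pointer(word):
    # Mask off the tag to recover the address; keep the tag for dispatch.
    return word & ~TAG_MASK, word & TAG_MASK

word = tag_pointer(0x10008, TAG_CONS)
addr, tag = untag_pointer(word)
assert addr == 0x10008 and tag == TAG_CONS
```

In compiled code the mask often disappears entirely: a load through a tagged pointer can use an addressing-mode displacement of (field offset minus tag), so the tag is subtracted for free.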
The sibling comment explains why we prefer to use the lower bits as a tag (these are guaranteed to be zero if the value is a pointer on a 64-bit system).
Another reason we wouldn't want to use the top bit is that, as the parent comment suggested, the tagged representation of a fixnum isn't a pointer at all: it's just twice the number it represents. Generally speaking, integers are stored in two's-complement form, which uses that top bit to indicate whether the value is positive or negative.
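The sign-bit conflict is easy to see concretely. In this sketch (assuming a 64-bit word and the 2n fixnum encoding from above), a negative fixnum already has the top bit set, so that bit can't double as a tag:

```python
# Why a top-bit tag clashes with fixnums: in two's complement the top
# bit is the sign bit, so negative fixnums already have it set.
BITS = 64

def to_word(n):
    """Two's-complement encoding of n as an unsigned 64-bit word."""
    return n & ((1 << BITS) - 1)

neg = to_word(-5 << 1)            # tagged (2n) representation of -5
assert neg >> (BITS - 1) == 1     # top bit set: it's the sign, not a tag

pos = to_word(5 << 1)             # tagged representation of +5
assert pos >> (BITS - 1) == 0     # top bit clear for positive values
```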
Symbols are just lists of numbers. A variable is just a nameless place in memory, though it's often associated with a symbol.
The numbers in a symbol are printed out as ASCII characters when that seems appropriate, like after SETQ.
Or we could decide that a number list that ends with 0 and contains only values in range(0x21, 0x7F) is printed as a symbol. It doesn't matter; it's just syntactic sugar.
And we don't need strings for much of anything. We could of course decide that a number list starting with ord('"') is printed as a string. The reader could follow the same protocol.
I had all this figured out at one time, and I don't remember any major issues. B-)
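The printing rule described above is simple enough to sketch. This is a toy rendering of that hypothetical protocol, not any real Lisp printer: a list of numbers is shown as a symbol if it ends with 0 and every other element is in the printable range.

```python
# Toy version of the hypothetical printing protocol: a number list
# ending in 0 whose other elements are all in range(0x21, 0x7F) is
# rendered as a symbol; anything else prints as a plain list.
def render(nums):
    if nums and nums[-1] == 0 and all(0x21 <= n < 0x7F for n in nums[:-1]):
        return ''.join(chr(n) for n in nums[:-1])   # symbol-style output
    return str(nums)                                # fall back to raw numbers

assert render([0x53, 0x45, 0x54, 0x51, 0]) == 'SETQ'
assert render([1, 2, 3]) == '[1, 2, 3]'
```

A reader following the same protocol would just invert this: turn the characters of a symbol back into the number list.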
FYI thecloudlet, the last quote from a Reddit user at the end seems to have duplicated content (copy-paste error?)
I read both articles and am looking forward to your next! I'd be interested in understanding more about the relationship of Emacs to GCC, since you noted the authors were the same and the internals were written with some compiler awareness.
Thank you for the kind reminder! I have removed the duplicate.
You made a great point. Since the original authors are the same, the fundamentals of the Emacs C core are indeed highly compiler-optimized. I hope I can come up with a way to fully understand and write about that history and relationship. (But to be honest, diving into that level of compiler history is a really hard topic to tackle!)
Thanks for the great inspiration and for taking the time to read!
Not AI, but I studied it extensively for about 6 months. I was trying to port Emacs to JS, line by line, about eight years ago.
I love Emacs' design. I think the cruft is minimal, and pretty much every line of code I studied had a good reason for being there.
And I also think there's a lot to learn from studying how Emacs is implemented. Few people will probably do this, but it was a nice experience for me. I learned a lot about gaps, text properties, how buffers are implemented, how the eval function works (it's surprisingly complicated between buffer-local variables and thread-local variables, but it's hard to think of a simpler alternative), and how intervals are implemented.