Repo for Mixture of Kolmogorov-Arnold BitNets (MixKABRN), a proposal for the building blocks of the next version of the pure transformer (targeted at large compute budgets, as MixKABRNs are more suited to enhancement-oriented performance).
By opting into the BitNet architecture you get very small, ternary ({-1, 0, 1}) weights, and the Kolmogorov-Arnold part appears to take care of the activation functions, so you end up with a pure architecture that scales up. (Perhaps after pretraining, the finetuning stage could introduce a "consciousness" that goes from ternary ({-1, 0, 1}) to a higher precision by adding its own layer on top of the ternary operations, or vice versa within the MixKABRN project; this is still just speculation.)
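A minimal PyTorch sketch of what one such block might look like, assuming the combination described above: a BitNet-style linear layer with ternary ({-1, 0, 1}) weight quantization (absmean rounding with a straight-through estimator, as in BitNet b1.58), followed by a KAN-flavoured learnable activation. The names `TernaryLinear`, `LearnableActivation`, and `MixKABRNBlock` are illustrative only, not the project's actual API, and the activation here is a cheap mixture-of-basis stand-in rather than a full spline-based KAN layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TernaryLinear(nn.Module):
    """Linear layer whose weights are quantized to {-1, 0, 1} in the forward
    pass (absmean quantization), with a straight-through estimator so the
    full-precision shadow weights still receive gradients."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scale = self.weight.abs().mean().clamp(min=1e-5)
        w_ternary = torch.round(self.weight / scale).clamp(-1, 1) * scale
        # Straight-through estimator: forward uses ternary weights,
        # backward flows through the full-precision weights.
        w = self.weight + (w_ternary - self.weight).detach()
        return F.linear(x, w)


class LearnableActivation(nn.Module):
    """KAN-flavoured activation: a per-feature mixture of fixed basis
    functions with learnable coefficients (a stand-in for learnable splines)."""

    def __init__(self, features: int):
        super().__init__()
        self.coeff = nn.Parameter(torch.zeros(features, 3))
        with torch.no_grad():
            self.coeff[:, 0] = 1.0  # initialize close to plain SiLU

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        basis = torch.stack([F.silu(x), torch.tanh(x), x], dim=-1)
        return (basis * self.coeff).sum(dim=-1)


class MixKABRNBlock(nn.Module):
    """Ternary up-projection, learnable activation, ternary down-projection,
    wrapped in a residual connection."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.up = TernaryLinear(dim, hidden)
        self.act = LearnableActivation(hidden)
        self.down = TernaryLinear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.down(self.act(self.up(x)))


if __name__ == "__main__":
    block = MixKABRNBlock(dim=64, hidden=256)
    out = block(torch.randn(2, 16, 64))
    print(out.shape)  # torch.Size([2, 16, 64])
```

In this reading, the "adding its own layer on top of the ternary operations" idea would amount to stacking a higher-precision module over `TernaryLinear` during finetuning while the ternary backbone stays frozen; that part is speculative and not implemented here.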