How do you count to infinity in a computer while keeping all the precision?
Solve this and you win a Nobel Prize.
It is possible, just not immediately obvious. It's impossible in a fixed-length format, for obvious reasons; however, you could have a system for integers where the high bit of each byte signals that another byte is coming after it.
Unfortunately, no one's ever come up with this and called it something like "Variable-Length Quantity" or "VarInt".

The same thing would be possible for floating-point numbers, it's just even less obvious than for integers. There's also the problem that binary floating-point representations only terminate for numbers whose denominators are a power of two, which means that if you tried to store one-third in a VarFloat, you'd run into problems with little things, like all your memory being allocated to that one VarFloat.
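For the record, the high-bit continuation scheme is tiny to implement. Here's a sketch in Python (this is the same little-endian, 7-bits-per-byte layout that formats like LEB128 and Protocol Buffers use; the function names are mine):

```python
def encode_varint(n):
    """Encode a non-negative integer: 7 payload bits per byte,
    high bit set means "another byte follows"."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes coming
        else:
            out.append(byte)         # high bit clear: we're done
            return bytes(out)

def decode_varint(data):
    """Decode a varint from the front of a byte sequence."""
    n = 0
    shift = 0
    for byte in data:
        n |= (byte & 0x7F) << shift
        shift += 7
        if not (byte & 0x80):  # continuation bit clear: last byte
            break
    return n
```

Small numbers fit in one byte, and there's no upper limit: `encode_varint(300)` gives `b'\xac\x02'`, and a number with a thousand digits just keeps emitting bytes.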
What you could do instead is have two VarInts representing a fraction, where one stores the numerator and the other stores the denominator. However, this runs into the same problem with irrational numbers, which can't be written as a ratio of integers at all. Ultimately, the problem with storing numbers is that we don't want to store the entire number, we just want to store however much of the number we need to do our calculations.
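The numerator/denominator idea isn't hypothetical, either: Python's `fractions` module is exactly this scheme, backed by arbitrary-precision integers (which are VarInts in spirit, if not in on-disk layout). One-third stops being a problem, while floats keep embarrassing themselves:

```python
from fractions import Fraction

# One-third, stored exactly as a numerator and a denominator.
third = Fraction(1, 3)
assert third + third + third == 1      # exact: no rounding anywhere

# The same sum with binary floats does NOT come out exact.
assert 0.1 + 0.2 != 0.3

# And since a VarInt is self-terminating (the high bit tells you where
# it ends), you could serialize a fraction as two VarInts back to back.
```

Irrationals are still out of reach, of course: there's no pair of integers whose ratio is the square root of two, no matter how many bytes you throw at them.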