A bit is short for “binary digit.” It is the smallest unit a computer uses to store and reference data: a bit can hold a value of 1 or 0, and nothing else. Binary code is simply a stream of these 1s and 0s, such as the sequence 100100100111. Bits are also how your processor does arithmetic. Using 32 bits, a processor can represent unsigned integers from 0 to 4,294,967,295, while a 64-bit machine can represent 0 to 18,446,744,073,709,551,615. In practice, this means a 64-bit processor can work with much larger numbers in a single operation.
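To make this concrete, here is a quick Python sketch (the variable names are just illustrative) that decodes the example bit sequence and checks the unsigned ranges mentioned above:

```python
# A string of bits encodes a number in base 2.
bits = "100100100111"        # the example sequence from the text
print(int(bits, 2))          # -> 2343

# An n-bit register can represent unsigned integers from 0 to 2**n - 1.
print(2**32 - 1)             # -> 4294967295
print(2**64 - 1)             # -> 18446744073709551615
```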