Programming for Idiots (C#) - Take 2

05/29/2011 07:52 zTek#106
Your test question from lesson 3;

Would this be a correct way to do it:

int Total = 49;
int Class = 24;
int Eat = 1;
int First = Total - Class;
int Second = First - Eat;
int Main = Total / Second;
Console.Write(Main);
Console.ReadLine();
//Answer = 2
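(For anyone checking the answer: `/` on two ints is integer division, so the fractional part of 49 / 24 is discarded. A small standalone sketch of the same steps, with renamed variables since `Class` reads like a keyword:)

```csharp
using System;

class LessonCheck
{
    static void Main()
    {
        int total = 49;
        int classSize = 24;
        int eat = 1;

        int first = total - classSize;   // 25
        int second = first - eat;        // 24
        int result = total / second;     // 49 / 24 = 2 (integer division
                                         // truncates the fraction)
        Console.WriteLine(result);       // 2
    }
}
```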
05/29/2011 14:26 _DreadNought_#107
That looks like a way to do it, but remember an int is far, far too big for what you're trying to do. The max value of an int is 2,147,483,647, whereas a byte only has a max value of 255. You won't be going over 255, and you could even use an sbyte, which has a max value of 127; looking at your calculations, you won't be going over 127 either. sbyte would be the fastest and most effective. I don't remember if byte and/or sbyte can be applied to / and - without doing -= and /=.
05/29/2011 16:20 unknownone#108
"Fastest" is nonsense when talking about the CLR, since there's only one instruction for each arithmetic operation (add, sub, mul, div, etc.), and it works on a word-sized integer. That means when a byte is pushed onto the CLR stack, it's put into a 32/64-bit register/address which the arithmetic instruction operates on. This might actually be slower, since there could be additional instructions to pad the data (e.g., MOVZX used instead of MOV on x86). The difference will be negligible anyway, but you can say for certain there is no performance benefit to using bytes.

Using byte rather than int is only particularly important if you're storing arrays of them, to reduce the amount of memory you consume. It's pretty pointless to use them as local method variables, or for storage of single bytes (since the CLR will usually pad a structure to a word-size boundary anyway).
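(Editor's sketch of the array case above: element size, not declared type width, is what dominates once you have many values. `Buffer.ByteLength` reports the raw element storage of a primitive array.)

```csharp
using System;

class ArraySizes
{
    static void Main()
    {
        // One million elements: a byte[] stores each element in 1 byte,
        // an int[] stores each in 4 bytes, so the byte[] uses roughly a
        // quarter of the memory (plus a small fixed per-array header).
        byte[] smalls = new byte[1000000];
        int[] bigs = new int[1000000];

        Console.WriteLine(Buffer.ByteLength(smalls)); // 1000000
        Console.WriteLine(Buffer.ByteLength(bigs));   // 4000000
    }
}
```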

The same is mostly true for floats and doubles on a 64bit CPU, as every float is emulated as a double anyway.
05/29/2011 19:57 _DreadNought_#109
But when you declare an int:
Code:
int d = byte.MaxValue;
Doesn't the compiler put the int into memory and give it the space an int requires? If you're assigning values under 127, wouldn't it be best to declare an sbyte instead, and give it the smaller amount of memory an sbyte needs rather than what an int needs? Again, yes, nothing noticeable, but say you used ulong for every single thing in a program; changing them all to the required type would save about a second upon loading, no?
05/29/2011 21:28 unknownone#110
The point I was trying to make clear is that, even if you use bytes, you are not saving any space - the compiler will treat them as ints for most things (with the exception of arrays of bytes, or collections of bytes in structs). When you have a single byte, it will occupy 4 bytes because of the way the compiler aligns data to memory words.

And note that I'm talking about storage (such as fields in a class). Local variables are almost entirely treated as ints for their short lifetime. Size is really not an issue there, since the memory is only occupied for the duration of the method.

I'll give an example to make it more clear.

Code:
byte x = 255;
byte y = 1;
byte z = (byte)(x + y);
The CIL for this would be something like

Code:
ldc.i4 255
stloc.0
ldc.i4 1
stloc.1

ldloc.0
ldloc.1
add
conv.u1
stloc.2
The CLR stack is a fixed width of ints though, so all those ldloc and stloc instructions are moving ints around in memory - not bytes. We can see this by disassembling it and snipping out the relevant parts.

Code:
mov dword:[ebp-4],0
mov dword:[ebp-8],0
mov dword:[ebp-12],0

mov dword:[ebp-4], 255
mov dword:[ebp-8], 1

mov eax, dword:[ebp-4]
add eax, dword:[ebp-8]
and eax, 0xff
mov dword:[ebp-12], eax
All instructions are using dwords (ints). The ASM isn't optimised to use smaller register sizes like AL, as you might have expected. Instead, the conv.u1 instruction is simply converted to (& 0xFF), masking off everything above the low byte.

If you convert the original C# to use int instead of byte, the only difference is that the conv.u1 instruction is gone, and the matching and eax, 0xff CPU instruction with it. There's no other difference: they are treated equally as ints, and using bytes is doing nothing but adding extra instructions.

Of course, there are meaningful times when it's desirable to use bytes, if you are specifying the intention that something should be a byte. But if you are performing arithmetic on a value, and you want to use bytes purely for the sake of "saving space", you're wasting instructions.
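(The widening described above is visible in the language itself, not just the CIL - an editor's sketch: C# won't let you assign the result of byte + byte back to a byte without a cast, because the + operator is defined on ints.)

```csharp
using System;

class ByteWidening
{
    static void Main()
    {
        byte x = 255;
        byte y = 1;

        // byte z = x + y;      // compile error: '+' widens both operands
        //                      // to int, so the result is an int and
        //                      // needs an explicit cast back down.
        byte z = (byte)(x + y); // the cast truncates back to 8 bits
        int w = x + y;          // no cast needed: the result is already int

        Console.WriteLine(z);   // 0   (256 truncated to a byte)
        Console.WriteLine(w);   // 256
    }
}
```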
05/30/2011 20:19 _tao4229_#111
^

In any language using "int" or the common size of the registers on your CPU is the fastest (and most compilers will optimize it like that anyways).


Edit: I read sometimes
05/31/2011 12:40 _DreadNought_#112
Aha,

Thanks for clearing that up for me.
05/31/2011 13:08 BaussHacker#113
Quote:
Originally Posted by _tao4229_ View Post
^

In any language using "int" or the common size of the registers on your CPU is the fastest (and most compilers will optimize it like that anyways).


Edit: I read sometimes
Isn't an integer default in C#?
05/31/2011 17:28 unknownone#114
Quote:
Originally Posted by _tao4229_ View Post
In any language using "int" or the common size of the registers on your CPU is the fastest (and most compilers will optimize it like that anyways).
That's an over-generalized assumption and not particularly true. Native code compilers won't always widen integers, because there are far more important optimisations like register allocation - where you want to fit as many values into your limited registers as possible (because reading from memory is slow). On x86, it's no slower to use ADD AL than it is to use ADD EAX, but it is slower to use MOVZX EAX than to use MOV EAX.

On other architectures like ARM, though, you can't necessarily access parts of registers (and it wouldn't make sense anyway, since you can switch their endianness), so widening to ints is often the simplest and quickest way to do things.

Those are implementation-specific optimisations made by compilers anyway. The point I was trying to convey is that the CLI standard explicitly defines that widening occurs, and that all implementations of it should do so. It's done for simplification and portability of the runtime, not necessarily optimisation (and quite clearly, it limits how much optimisation can be done).

Quote:
Originally Posted by BaussHacker
Isn't an integer default in C#?
Quote:
Originally Posted by ECMA-335 §12.1.2
Loading from 1- or 2-byte locations (arguments, locals, fields, statics, pointers) expands to 4-byte
values. For locations with a known type (e.g., local variables) the type being accessed determines
whether the load sign-extends (signed locations) or zero-extends (unsigned locations).
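(The sign-extend / zero-extend distinction in that quote shows up directly in C# - an editor's sketch: the same 0xFF bit pattern widens differently depending on whether the location is signed or unsigned.)

```csharp
using System;

class Widening
{
    static void Main()
    {
        sbyte signed = -1;    // bit pattern 0xFF in a signed location
        byte unsigned = 255;  // same bit pattern in an unsigned location

        // Loading each into an int applies the rule from the spec:
        // the signed location sign-extends, the unsigned one zero-extends.
        int a = signed;       // -1  (0xFFFFFFFF)
        int b = unsigned;     // 255 (0x000000FF)

        Console.WriteLine(a);
        Console.WriteLine(b);
    }
}
```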
05/31/2011 18:37 Lateralus#115
Quote:
Originally Posted by _tao4229_ View Post
^

In any language using "int" or the common size of the registers on your CPU is the fastest (and most compilers will optimize it like that anyways).


Edit: I read sometimes
Not true. In MIPS:

to load a 32-bit value in a register:
lui $s0, 0x1234
ori $s0, $s0, 0x5678

to load a 16-bit value in a register:
ori $s0, $0, 0x1234
06/24/2011 10:47 maximelegran#116
Can I make a pixel bot with that???
06/25/2011 00:10 [GM]#117
Quote:
Originally Posted by InfamousNoone View Post
Lesson Two - Classes and the Static-modifier: [Only registered and activated users can see links. Click Here To Register...]
the link is not working
06/28/2011 02:14 BioHazarxPaul#118
I liked the videos he did more..
06/28/2011 18:44 _DreadNought_#119
^^
07/29/2011 04:29 BrandonCalsyn#120
I find this a million times easier than your videos, thanks so much!!! I'll be on my way to a private server in no time :)