Casting?
Thursday, July 31, 2008 at 09:30PM

I'm writing some new teaching material at the moment. It is going roughly half as fast as I expected, which is about right in my experience.
Anyhoo, I'm doing casting, where you tell the compiler to convert from one type to another by putting the type in front:
int i = 1, j = 2;
float factor = (float) i / j;
I have to cast i to floating point in the sum, otherwise I get an integer division and no fractional part.
In other words, in the above code, if the value of i is 1 and the value of j is 2, I want the value of factor (1/2) to be 0.5 (floating point division) rather than 0 (integer division). I get this result by casting i to float in the sum; C# uses floating point division as soon as one of the operands is floating point, and everything comes out OK.
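If you want to see the two cases side by side, here is a little program you could try (a sketch of my own, with made-up names, rather than anything from the teaching material):

using System;

class CastingDemo
{
    static void Main()
    {
        int i = 1, j = 2;

        float noCast = i / j;           // integer division happens first, so this is 0
        float withCast = (float) i / j; // i is converted to float first, so this is 0.5

        Console.WriteLine(noCast);      // prints 0
        Console.WriteLine(withCast);    // prints 0.5
    }
}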
This is a standard computing thing; most languages provide support for casting. And I started to wonder: why is it called casting? Popular wisdom seems to be that it is related to casting things in a foundry, where you pour liquid metal into a mould of a particular shape. The shape of the mould determines the result of the cast, so by casting you can change one thing into another.
However, I've thought of another way to look at it. You can think of casting as making a movie. You take an actor (Christian Bale) and cast him as a character (Batman). For the duration of the film the actor will behave in terms of the role they have been cast into. This even works when we consider stupid casts. In C# you can't do things like cast a string into an integer. In films you can't do things like cast Christian Bale as City Hall. I quite like this way of looking at things, but one thing does worry me. Maybe this is the original meaning, and it has taken me years to figure it out...
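For the record, the "stupid cast" really is rejected by the C# compiler; you have to parse instead. A quick sketch of my own:

string text = "42";
// int broken = (int) text;    // will not compile: you cannot cast a string to an int
int parsed = int.Parse(text);  // parsing, not casting, is how you get a number out of a string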
Rob
Reader Comments (6)
In Python, if you write:

i = 1
j = 2
print float(i/j)

it will return 0.0. In order to achieve a decimal division, you must convert one (or both) of the operands to float:
print float(i)/j
or
print i/float(j)
or
print float(i)/float(j)
It looks like C# doesn't have this behavior. So when I write (float) 1/2, does it work out 1/2, save it as 0.5 and then cast it to a float? What if I do int k = i/j and then (float) k? Does that work?
I hope you understand this question, because casting should depend on the order you use. It shouldn't be possible to convert an int valued 0 to a float valued 0.5
print float(i/j)
- where i and j are integers - will perform an integer division, generate an integer result and then make that integer into a float. So 1/2 would end up as 0.0.
Hope this helps
However, the type of the operands around an operator (whether they are expressions or literals) still determines the context of the operation, and that is what drives the type of the result. If that makes sense.
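To put that in concrete terms, here is a short sketch of my own (not from the original exchange) showing both orderings in C#:

int i = 1, j = 2;

float a = (float) i / j;   // the cast binds to i before the division, so this is floating point division: 0.5
int k = i / j;             // integer division: k is 0 and the fractional part is already gone
float b = (float) k;       // casting afterwards just turns that 0 into 0.0

So yes, the order matters: (float) i / j gives 0.5, but casting k after the integer division can only give 0.0.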