Question: The program passes an int into a function, which then typecasts that number to various sizes and to unsigned types. Explain the output for the following values: 1000, -1000, 100, -100. Think about unsigned/signed encoding and overflow in your answer. Why do some numbers end up negative when the original value was positive? Why do some small negative numbers end up with a huge magnitude when typecast to unsigned? Put your explanation into the comments inside the file.
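Before looking at the file, here is a minimal, self-contained sketch (not part of the assignment file) of the two conversion rules at work, assuming a 32-bit int, an 8-bit plain char that is signed, and two's-complement representation: a negative int converted to unsigned wraps modulo 2^32 (the result is the original value plus UINT_MAX + 1), and a conversion to a narrower type keeps only the low-order bits.

#include <limits.h>
#include <stdio.h>

int main(void) {
    int neg = -1000;
    int pos = 1000;

    /* Rule 1: a negative int converted to unsigned wraps modulo 2^32,
       i.e. the result is the original value plus UINT_MAX + 1.
       -1000 -> 4294966296 on a 32-bit unsigned int. */
    printf("(unsigned)-1000 = %u\n", (unsigned) neg);    /* 4294966296 */
    printf("UINT_MAX - 999  = %u\n", UINT_MAX - 999u);   /* the same value */

    /* Rule 2: converting to a narrower type keeps only the low-order byte.
       1000 is 0x3E8; its low byte is 0xE8 = 232, which an 8-bit signed
       char reinterprets as 232 - 256 = -24. */
    printf("(unsigned char)1000 = %u\n", (unsigned) (unsigned char) pos);  /* 232 */
    printf("(char)1000          = %d\n", (char) pos);    /* -24 if plain char is signed */

    return 0;
}

This is why a positive input such as 1000 can print as a negative char, and why a small negative input prints as a value near 4.29 billion when read as an unsigned int.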
/* PUT YOUR NAME AND ANSWERS HERE
 *
 *
 */
#include <stdio.h>
void typecast(int a) {
    unsigned b = (unsigned) a;
    unsigned char c = (unsigned char) a;
    char d = (char) a;
    unsigned long e = (unsigned long) a;
    long f = (long) a;

    // this illustrates how the same value prints under each cast
    printf(" %d :\n", a);
    printf("  ->unsigned int: %u\n", b);
    printf("  ->unsigned char (as decimal): %u\n", c);
    printf("  ->char (as decimal): %d\n", d);
    printf("  ->unsigned long: %lu\n", e);
    printf("  ->long: %ld\n", f);
}
int main() {
    int num = 0;
    printf("Enter an integer: ");
    scanf("%d", &num);
    typecast(num);
    return 0;
}
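For reference, under the same assumptions plus 64-bit long and unsigned long (a typical LP64 system), the program above would be expected to print values along these lines; the exact numbers depend on those type widths and on whether plain char is signed:

 1000 -> unsigned int: 1000         unsigned char: 232   char: -24    unsigned long: 1000                  long: 1000
-1000 -> unsigned int: 4294966296   unsigned char: 24    char: 24     unsigned long: 18446744073709550616  long: -1000
  100 -> unsigned int: 100          unsigned char: 100   char: 100    unsigned long: 100                   long: 100
 -100 -> unsigned int: 4294967196   unsigned char: 156   char: -100   unsigned long: 18446744073709551516  long: -100

The pattern to notice is that the bit pattern of the input never changes; only the width and the signed/unsigned interpretation of those bits do.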
