TITLE: signedness of char (Newsgroups: comp.std.c++, 8 Jan 97)

MIKAEL: d96-mst@nada.kth.se (Mikael Ståldal)
>The signedness of plain char is not defined in the standard. What are the
>reasons for not mandating it to be unsigned?

CLAMAGE: Steve Clamage, stephen.clamage@eng.sun.com, 8 Jan 97

From the beginning in C, the signedness of 'char' was left up to the
implementation. The reason was that extending a char to an int ought to
be an efficient operation, and computers varied (and still vary) in
whether unsigned or signed extension is more efficient. On some machines
it makes no difference; on others, the "wrong" kind of extension takes
three instructions. When you consider that C extends chars to ints all
over the place, particularly for most standard library functions, that
is an important consideration.

Although some implementations of C allowed it earlier, Standard C allows
you to specify a character type as specifically signed or unsigned.

Digression: IMHO, the whole thing was ill-considered. A 'char' ought to
be a character in the character set, and not a "tiny integer". If you
want the language to have a "tiny integer" or a "byte" type, 'char'
should not be overloaded for those purposes. If that principle had been
adopted in C (as it was in Pascal, 8 years before K&R1), we would not
need to have these interminable discussions about the behavior and
implementation of type char. End of digression.

If C++ mandated a signedness or an implementation for type char, it
would break C compatibility. In addition, it would require an
inefficient implementation of char on some systems. IMHO, this is one of
many features of C++ that must remain suboptimal (or broken) because
compatibility is considered more important than abstract (or concrete)
notions of good language design.