
Microsoft WordPad and Unicode

Posted: Tue Jan 08, 2008 1:49 am
by dfhtextpipe
Many users do not fully appreciate that WordPad fully supports Unicode. Files saved from WordPad as Unicode text files are stored as UTF-16 Little Endian.

Full Unicode support applies to the WordPad shipped with Windows XP (and Vista); by contrast, the Windows 9x versions had only limited support.

Posted: Tue Jan 08, 2008 7:58 pm
by DataMystic Support
As does Notepad. TextPipe Pro also provides full Unicode support for file conversions:
  • 145 Unicode, ANSI, ASCII, single-byte and multi-byte conversion functions - https://www.datamystic.com/textpipe/unicode_conversions
  • 151 code page conversions - https://www.datamystic.com/textpipe/codepages

Notepad and Unicode

Posted: Wed Jan 09, 2008 7:01 pm
by dfhtextpipe
According to some of the experts on Unicode, Notepad does not provide full support for it. Using Notepad may sometimes give unexpected results.
No matter what encoding you saved a text file with, Notepad will always 'try to guess' when you open the file.

On a WinXP machine, try typing in Notepad the following line:

Code:

Bush hid the facts
then save the file as ANSI and reopen it in Notepad.
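The effect can be reproduced outside Notepad. Here is a short Python sketch of mine (not part of the original post): the ASCII bytes of the sentence, reinterpreted as UTF-16LE, pair up into CJK ideographs, which is roughly the garbage Notepad displays.

```python
# "Bush hid the facts" saved as ANSI is plain ASCII: 18 bytes,
# an even number, with no byte above 0x7F.
data = "Bush hid the facts".encode("ascii")

# Reinterpreting those bytes as UTF-16LE pairs them into 9 code
# units; by coincidence every pair lands in the CJK ideograph range.
garbled = data.decode("utf-16-le")

print(garbled)                  # nine Chinese-looking characters
print(len(data), len(garbled))  # 18 bytes -> 9 characters
```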

Notes:
  • The bug illustrated above may have been fixed in the Notepad shipped with Vista.
  • The versions of Notepad supplied with Windows 95, Windows 98 and Windows ME do not support Unicode.
Source: http://www.bridgetonova.com/2007/10/not ... tails.html

Posted: Wed Jan 09, 2008 7:06 pm
by DataMystic Support
Yes - you're right! It opens as garbage.

Windows API call IsTextUnicode()

Posted: Thu Jan 10, 2008 12:21 am
by dfhtextpipe
http://msdn2.microsoft.com/en-us/library/ms776445.aspx
explains this behaviour. Notepad makes this call.
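For illustration, here is a toy Python heuristic in the spirit of what IsTextUnicode() does. This is a simplification of my own, not the actual Windows algorithm, but it shows how a statistical check can misfire on short ASCII input:

```python
def looks_like_utf16le(data: bytes) -> bool:
    """Toy statistical check, loosely inspired by IsTextUnicode().

    NOT the real Windows algorithm; just enough to show how a
    heuristic can guess wrong on short ASCII input.
    """
    if len(data) == 0 or len(data) % 2 != 0:
        return False
    try:
        text = data.decode("utf-16-le")
    except UnicodeDecodeError:
        return False
    # If most code units look like letters or whitespace, guess UTF-16LE.
    lettery = sum(ch.isalpha() or ch.isspace() for ch in text)
    return lettery / len(text) > 0.5

# The famous false positive: every byte pair happens to form a CJK
# ideograph, which counts as a "letter", so the heuristic says UTF-16LE.
print(looks_like_utf16le(b"Bush hid the facts"))  # True
```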

Re: Notepad and Unicode

Posted: Sat Mar 13, 2010 1:31 am
by AnthonyFitz
Yes, this is because all text editors try to guess the text encoding of a file, unless it has a BOM (byte order mark).

They all use different algorithms to do this. Also, they don't read the whole file; they read only a small portion as a test sample, and the size of that sample also differs from application to application. I'm not exactly sure of the average size.

For example, take a very large text file (tens of thousands of lines) of all-English text with some French text containing accented characters at the very bottom, and save it as UTF-8 with no BOM (Notepad doesn't give you this BOM option; it's always on by default). The file is saved as UTF-8, with the French accented characters encoded as multi-byte sequences.
If you open this file in a text editor, it will think the file is plain ASCII and display it using the current OS code page settings, and the French text at the bottom will appear corrupted, unless you force the editor to reload the file as UTF-8.
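A small Python sketch of that failure mode (my example, not from the original post): the accented character encodes as two bytes in UTF-8, and a reader that guesses a single-byte code page such as Windows-1252 turns it into two junk characters.

```python
text = "café"                # one accented French character
utf8 = text.encode("utf-8")  # 'é' becomes the two bytes C3 A9
print(utf8)                  # b'caf\xc3\xa9'

# A reader that guesses the Windows-1252 (ANSI) code page instead
# interprets each of those two bytes as a separate character:
mojibake = utf8.decode("cp1252")
print(mojibake)              # 'cafÃ©' - the classic corruption
```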

To prevent this incorrect guesswork you need to use a BOM.
The BOM is optional for UTF-8 files; I'm not sure why, but probably because there is only one form of UTF-8, whereas UTF-16 comes in two forms, Little Endian (LE) and Big Endian (BE). Note also that Windows uses UTF-16LE as its standard when saving files as Unicode, whereas Macs traditionally used UTF-16BE, just to be different!
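The three BOMs mentioned above are easy to inspect from Python's standard library (a quick sketch of mine, not from the original post):

```python
import codecs

print(codecs.BOM_UTF8)      # b'\xef\xbb\xbf'
print(codecs.BOM_UTF16_LE)  # b'\xff\xfe'  (Windows "Unicode" default)
print(codecs.BOM_UTF16_BE)  # b'\xfe\xff'

# The explicit "utf-16-le" codec writes raw code units with no BOM;
# the generic "utf-16" codec prepends the machine's native-order BOM,
# much as Notepad does when saving in a Unicode format.
print("A".encode("utf-16-le"))  # b'A\x00'
print("A".encode("utf-16"))     # BOM followed by the encoded 'A'
```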
DFH wrote: No matter what encoding you saved a text file with, Notepad will always 'try to guess' when you open the file.
This is not exactly true. If you type in this text and save the file as Unicode (UTF-16LE) or UTF-8, you will not see garbage when you open the file again. The reason is that Notepad (and Microsoft apps generally) add a BOM to the file by default when saving in Unicode formats.

When saved as UTF-8, the actual encoding of the characters in this example does not change; because the text contains only English (ASCII) characters, the file is byte-for-byte identical except for the addition of the UTF-8 BOM header \xEF\xBB\xBF.
The BOM tells Notepad to read the file as UTF-8, so it knows how to display the characters correctly and does not try to guess UTF-16.
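That point is easy to verify (a sketch of mine, not in the original post): for pure-ASCII text the UTF-8 bytes are identical to the ANSI/ASCII bytes, so the BOM is the only difference on disk.

```python
import codecs

text = "Bush hid the facts"
# For pure-ASCII text, UTF-8 and ASCII produce identical bytes.
assert text.encode("utf-8") == text.encode("ascii")

# Saving "as UTF-8" in Notepad just prepends the three-byte BOM.
with_bom = codecs.BOM_UTF8 + text.encode("utf-8")
print(with_bom[:3])  # b'\xef\xbb\xbf' - the only bytes that changed
```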

In the Bush example above, Notepad tries to guess the encoding by scanning the bytes and concludes the file is UTF-16LE. It is pure fluke that this particular combination of English characters happens to line up with valid UTF-16LE code units. Arguably Notepad should not guess UTF-16 at all: a UTF-16 file is normally expected to carry a BOM (the BOM is what distinguishes LE from BE), so Notepad should only need to decide between UTF-8 and ANSI, where the BOM is optional. Other text editors I've used do not have this error.

Hope this sheds some light on this strange Notepad behaviour.

Anthony.