Glad to know you got it working, but let me clarify a little more about encodings.
First of all, there is no such thing as an "ANSI" encoding: "ANSI" means any Windows encoding. It could be Win1252, Win1251, whatever; they are all "ANSI". Normally what people refer to as ANSI (and what you get if you use TEncoding.ANSI) is the default encoding of your machine. But if your machine uses Win1252 and the machine that generated the file uses Win1251, then you might have problems. This is why it is important to know the locale of the machine that generated the file (and, if possible, to convince them to use UTF8...).
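You can see what "ANSI" resolves to on a given machine with a quick sketch (Python here just for illustration; TEncoding.ANSI in Delphi resolves the same way, from the machine's locale):

```python
import locale

# "ANSI" is just whatever the current machine's default
# (locale-dependent) encoding happens to be.
ansi = locale.getpreferredencoding(False)
print(ansi)  # e.g. "cp1252" on a western Windows machine
```

Two machines with different locales will print different names here, which is exactly why the same "ANSI" file reads differently on each.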
Second, you can't really know which encoding a file has if you don't know which encoding was used to create it, so you can't find it out just by opening it in Notepad++. Notepad++ can try to guess the encoding, but it can't really know it.
What we have is:
ASCII: This is a 128-character encoding (from char 0 to 127), and those first 128 characters are the same in UTF8 and in every ANSI encoding. So as long as your file doesn't have special characters like ñ, you can use UTF8, ASCII or any ANSI encoding and it will work. That's why you can import those other files without any problem: they don't have characters outside the ASCII range.
ANSI: This is a group of encodings whose first 128 characters are the same as in ASCII, but the characters from 128 to 255 are different depending on the locale/country.
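A quick Python illustration (Python just because it makes the byte values easy to see; cp1252/cp1251 are the Win1252/Win1251 codepages): pure-ASCII bytes decode identically under ASCII, UTF8 and the ANSI codepages:

```python
data = b'HELLO'  # only characters in the 0..127 ASCII range

# The same bytes decode to the same text under all of these:
assert data.decode('ascii') == 'HELLO'
assert data.decode('utf-8') == 'HELLO'
assert data.decode('cp1252') == 'HELLO'  # a western ANSI codepage
assert data.decode('cp1251') == 'HELLO'  # a cyrillic ANSI codepage
```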
So imagine that I export a file containing the text AÑO in my Win1252 locale. As you can see from here:
The A is encoded as 65, the Ñ as 209 (outside the ASCII range from 0 to 127) and the O as 79.
So I send you this file containing the bytes 65, 209 and 79.
You get my file, and your machine (say it is a cyrillic one, using Win1251) decodes it as:
209: С (note that this is not a regular Latin "C" but a different, Cyrillic character)
So you read AСO instead of AÑO.
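The whole round trip above can be reproduced in a few lines of Python (cp1252/cp1251 are how Python names the Win1252/Win1251 codepages):

```python
# I encode AÑO on my western (Win1252) machine:
data = 'AÑO'.encode('cp1252')
assert list(data) == [65, 209, 79]  # A=65, Ñ=209, O=79

# You decode those same three bytes on a cyrillic (Win1251) machine:
assert data.decode('cp1251') == 'AСO'  # С is Cyrillic Es, not a Latin C

# Only the codepage that created the file gives the text back:
assert data.decode('cp1252') == 'AÑO'
```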
As said, there is no way to know the encoding used to create a file if you only have the final file. It might be a file that says AСO or one that says AÑO, and you can't tell. If you open it in Notepad++ it will likely use your machine's default encoding, so on a cyrillic machine it will show AСO and on a western machine it will show AÑO (and in both cases report the encoding as ANSI, because both are ANSI encodings).
UTF8 on the other hand is a multibyte encoding where, again, the first 128 characters are the same as ASCII. But after that, a character can take multiple bytes. The nice thing about UTF8 is that it doesn't change with the locale: there is only one UTF8 and it can represent every unicode character. (While there are many ANSI encodings, and each one can only represent the characters of a group of languages.)
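For example, the Ñ that was a single byte (209) in Win1252 becomes two bytes in UTF8, and one UTF8 file can mix characters that no single ANSI codepage could hold together (again sketched in Python):

```python
# In Win1252, Ñ is the single byte 209; in UTF8 it becomes two bytes.
assert list('Ñ'.encode('cp1252')) == [209]
assert list('Ñ'.encode('utf-8')) == [195, 145]  # 0xC3 0x91

# Spanish, Russian and Japanese characters in one encoding:
text = 'Ñ и 日本'
assert text.encode('utf-8').decode('utf-8') == text
```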
So if possible, it is good to tell the people producing the CSV to use UTF8. If you can't, you really need to know which encoding they used (unless the file is all ASCII characters), or the characters from 128 to 255 are going to be wrong. If you can't find out which ANSI encoding was used, Win1252 is normally the best choice, since it is used in most western locales. But it is a guess.