I was wondering: why does the string 'foo', when converted to a Buffer with different encodings, produce different bytes?
Buffer.from('foo', 'utf-8') /* <Buffer 66 6f 6f> */
Buffer.from('foo', 'ascii') /* <Buffer 66 6f 6f> */
Buffer.from('foo', 'base64') /* <Buffer 7e 8a> */
Buffer.from('foo', 'utf16le') /* <Buffer 66 00 6f 00 6f 00> */
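What surprised me most is the base64 result. As far as I can tell, 'base64' treats the input string as base64-encoded text and decodes it instead of copying it, which I checked like this (the b64 variable is just my own name):

const b64 = Buffer.from('foo', 'utf-8').toString('base64') /* 'Zm9v' */
Buffer.from(b64, 'base64') /* <Buffer 66 6f 6f> */
Buffer.from(b64, 'base64').toString('utf-8') /* 'foo' */
Buffer.from('foo', 'base64') /* <Buffer 7e 8a>, 'foo' itself decoded as base64 text */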
I probably don't understand buffers enough. Here's what I know about buffers:
A buffer is an area of memory.
It represents a fixed-size chunk of memory (it can't be resized).
You can think of a buffer like an array of integers, each of which represents a byte of data (see the sketch below).
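A quick sketch of those three points in Node (buf is just my own example):

const buf = Buffer.from('foo', 'utf-8')
buf.length /* 3, fixed once allocated */
buf[0] /* 102, i.e. 0x66, the byte for 'f' */
buf[0] = 0x62 /* bytes are mutable, like array elements */
buf.toString('utf-8') /* 'boo' */
buf[3] = 0x21 /* writing past the end is silently ignored */
buf.length /* still 3 */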
The way I understand it (very simplistically): we can only store the string foo as binary, and a character encoding is the rule for converting data from whatever form it's in into binary.
My question now is: why does the character encoding change the bytes that end up in the buffer?
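To make the question concrete, reading the same bytes back with different encodings gives different strings, which is just Buffer.from in reverse:

const utf16 = Buffer.from('foo', 'utf16le') /* <Buffer 66 00 6f 00 6f 00> */
utf16.toString('utf16le') /* 'foo' */
utf16.toString('utf-8') /* 'f\u0000o\u0000o\u0000', six one-byte characters */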
Buffer uses the encoding in the process of creating the buffer. You can picture an encoding as a lookup table, something like:
var utf8 = { ..., 'f': 0x66, ..., 'o': 0x6f, ... }
Buffer then allocates some memory and writes the bytes one after another into that memory: 0x66 for the f and 0x6f twice for the two os. That gives you 0x666f6f, which is 11001100110111101101111 in binary. This is a very simple explanation and in reality it's more complex, but I'm sure there's no magic.
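Here is a toy sketch of that idea in Node. This is not Node's real implementation (the actual encoders live in native code), and asciiTable and toyEncode are made-up names covering only the characters in 'foo':

const asciiTable = { f: 0x66, o: 0x6f } /* tiny excerpt of such a table */

function toyEncode(str, table) {
  const buf = Buffer.alloc(str.length) /* one byte per character here */
  for (let i = 0; i < str.length; i++) {
    buf[i] = table[str[i]]
  }
  return buf
}

toyEncode('foo', asciiTable) /* <Buffer 66 6f 6f> */

A utf16le table would map each character to two bytes instead, which is why Buffer.from('foo', 'utf16le') comes out twice as long.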