The SearchSysCacheList1 macro was introduced in PostgreSQL 9.0.
Changed a couple of tests to cope with a formatting discrepancy
introduced by PostgreSQL 9.0 (namely, + characters before newlines in
multi-line output).
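
(For reference, psql in 9.0 marks each non-final line of a multi-line
value with a trailing +; roughly like this illustrative output, which is
not taken from the actual tests:)

     ?column?
    ----------
     line one+
     line two
    (1 row)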
|
enumLabelToOid merely looked up a single enum entry, while
getEnumLabelOids looks up all of them in bulk.
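
(A minimal sketch of what such a bulk lookup can look like using the
SearchSysCacheList1 machinery mentioned above; the function name and body
are illustrative assumptions, not the module's actual implementation, and
HeapTupleGetOid() is the PostgreSQL 9.x-era way to read a row's OID:)

    #include "postgres.h"
    #include "access/htup.h"
    #include "catalog/pg_enum.h"
    #include "utils/catcache.h"
    #include "utils/syscache.h"

    /* Resolve each requested label of an enum type to its pg_enum row OID,
     * fetching all of the type's labels with one syscache list search.
     * Labels that don't exist come back as InvalidOid. */
    static void
    get_enum_label_oids(Oid enumtypoid, const char *const *labels,
                        Oid *oids, int count)
    {
        CatCList   *list;
        int         i,
                    m;

        for (i = 0; i < count; i++)
            oids[i] = InvalidOid;

        list = SearchSysCacheList1(ENUMTYPOIDNAME,
                                   ObjectIdGetDatum(enumtypoid));
        for (m = 0; m < list->n_members; m++)
        {
            HeapTuple    tup = &list->members[m]->tuple;
            Form_pg_enum en = (Form_pg_enum) GETSTRUCT(tup);

            for (i = 0; i < count; i++)
            {
                if (oids[i] == InvalidOid &&
                    strcmp(labels[i], NameStr(en->enumlabel)) == 0)
                    oids[i] = HeapTupleGetOid(tup);
            }
        }
        ReleaseCatCacheList(list);
    }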
|
Also touched up documentation for FN_EXTRA a bit.
|
Requiring decoded escapes not to be 0xFFFE or 0xFFFF is overzealous,
I think. In any case, those two aren't even a comprehensive list of the
codepoints considered "invalid".
Also, removed the utf8_encode_char helper function, as it was extremely
trivial and used in only one place.
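
(The removed helper presumably amounted to little more than this wrapper
around pg_wchar.h's unicode_to_utf8(); its exact signature is a guess:)

    #include "postgres.h"
    #include "mb/pg_wchar.h"

    /* Encode one codepoint as UTF-8 into buf, returning the byte length.
     * pg_wchar.h's unicode_to_utf8() already does the real work, which is
     * what makes a wrapper like this redundant. */
    static int
    utf8_encode_char(pg_wchar c, unsigned char *buf)
    {
        unicode_to_utf8(c, buf);
        return pg_utf_mblen(buf);
    }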
|
Note that this is currently untested with server encodings other than UTF-8.
The encoding policy used is: JSON nodes and most of the JSON functions still
operate in UTF-8. Strings are converted between the server encoding and UTF-8
when they go in and out of varlena (text *) values, and a set of helper
functions is implemented to make these conversions simple to apply.
It is done this way because converting individual codepoints to/from an
arbitrary server encoding is nontrivial (it can even require loading a
conversion module), and the JSON code needs to encode/decode codepoints
whenever it deals with escapes.
Although a cleverer and more efficient solution might be to defer charset
conversions until they're actually needed (e.g. round up all the escapes
and convert them in one batch), that would not be simple, and it probably
wouldn't be much more efficient, either. Conversions between the server
encoding and UTF-8 are no-ops when the server encoding is UTF-8, anyway.
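
(A minimal sketch of one such helper, built on the backend's
pg_do_encoding_conversion(); the name text_to_utf8 and its exact shape
are illustrative assumptions, not the module's actual API:)

    #include "postgres.h"
    #include "mb/pg_wchar.h"

    /* Convert a text* in the server encoding to a NUL-terminated UTF-8
     * string.  When the server encoding is already UTF-8,
     * pg_do_encoding_conversion() returns its input unchanged, so we only
     * pay for a copy to get NUL termination. */
    static char *
    text_to_utf8(text *t)
    {
        char   *src = VARDATA_ANY(t);
        int     srclen = VARSIZE_ANY_EXHDR(t);
        char   *result;

        result = (char *) pg_do_encoding_conversion((unsigned char *) src,
                                                    srclen,
                                                    GetDatabaseEncoding(),
                                                    PG_UTF8);
        if (result == src)
            result = pnstrdup(src, srclen);
        return result;
    }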
|
PostgreSQL's pg_wchar.h routines.
* Touched up various functions' documentation.
json_nodes are currently encoded in UTF-8, and the JSON module is not
100% compatible with arbitrary server encodings yet. I plan to switch
from UTF-8 to the server encoding pretty soon, after which JSON should be
a well-behaved datatype as far as charsets go.
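
(For example, validating a buffer as well-formed UTF-8 reduces to one call
into those routines; the wrapper and its error handling here are just one
way to do it, not the module's actual code:)

    #include "postgres.h"
    #include "mb/pg_wchar.h"

    /* Reject ill-formed UTF-8 using the backend's own validator instead
     * of hand-rolled checks. */
    static void
    check_utf8(const char *buf, int len)
    {
        if (!pg_verify_mbstr(PG_UTF8, buf, len, true))
            ereport(ERROR,
                    (errcode(ERRCODE_CHARACTER_NOT_IN_REPERTOIRE),
                     errmsg("invalid UTF-8 byte sequence")));
    }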
|
* A few miscellaneous cleanups.