Struct encoding::codec::utf_16::UTF16Encoding

pub struct UTF16Encoding<E> {
    // some fields omitted
}

UTF-16 (UCS Transformation Format, 16-bit).

This is a Unicode encoding in which a codepoint uses either 2 bytes (U+0000 through U+FFFF) or 4 bytes (U+10000 through U+10FFFF), depending on its value. Non-BMP codepoints are encoded via a "surrogate" mechanism: each one is represented as a pair of surrogate code units, a leading (high) surrogate followed by a trailing (low) surrogate. As a result, surrogate codepoints (U+D800..U+DFFF) cannot appear alone and cannot be included in a valid Unicode string.
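
For example, a non-BMP codepoint such as U+1F600 is encoded as the surrogate pair D83D DE00. A minimal sketch of round-tripping such a codepoint, assuming the crate's all::UTF_16LE static instance and the Encoding trait methods listed below:

use encoding::{Encoding, EncoderTrap, DecoderTrap};
use encoding::all::UTF_16LE;

fn main() {
    // U+1F600 lies outside the BMP and is encoded as the surrogate pair
    // D83D DE00, serialized here in little-endian byte order.
    let bytes = UTF_16LE.encode("A\u{1F600}", EncoderTrap::Strict).unwrap();
    assert_eq!(bytes, vec![0x41, 0x00, 0x3D, 0xD8, 0x00, 0xDE]);

    // Decoding reassembles the surrogate pair into the original codepoint.
    let text = UTF_16LE.decode(&bytes, DecoderTrap::Strict).unwrap();
    assert_eq!(text, "A\u{1F600}");
}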

Specialization

This type is specialized with an endianness type E, which should be either Little (little endian) or Big (big endian).
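
A short sketch of how the endianness parameter affects the serialized byte order, assuming the crate exposes little- and big-endian instances of this type as all::UTF_16LE and all::UTF_16BE:

use encoding::{Encoding, EncoderTrap};
use encoding::all::{UTF_16LE, UTF_16BE};

fn main() {
    // The same codepoint (U+20AC, EURO SIGN) is written with opposite
    // byte order depending on the endianness type E.
    let le = UTF_16LE.encode("\u{20AC}", EncoderTrap::Strict).unwrap();
    let be = UTF_16BE.encode("\u{20AC}", EncoderTrap::Strict).unwrap();
    assert_eq!(le, vec![0xAC, 0x20]);
    assert_eq!(be, vec![0x20, 0xAC]);
}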

Trait Implementations

impl<E: Endian> Encoding for UTF16Encoding<E>

fn name(&self) -> &'static str

fn whatwg_name(&self) -> Option<&'static str>

fn raw_encoder(&self) -> Box<RawEncoder>

fn raw_decoder(&self) -> Box<RawDecoder>

fn encode(&self, input: &str, trap: EncoderTrap) -> Result<Vec<u8>, Cow<'static, str>>

fn encode_to(&self, input: &str, trap: EncoderTrap, ret: &mut ByteWriter) -> Result<(), Cow<'static, str>>

fn decode(&self, input: &[u8], trap: DecoderTrap) -> Result<String, Cow<'static, str>>

fn decode_to(&self, input: &[u8], trap: DecoderTrap, ret: &mut StringWriter) -> Result<(), Cow<'static, str>>
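
A minimal sketch of the buffer-reusing variants encode_to and decode_to, assuming Vec<u8> implements ByteWriter and String implements StringWriter as elsewhere in the crate:

use encoding::{Encoding, EncoderTrap, DecoderTrap};
use encoding::all::UTF_16BE;

fn main() {
    // encode_to and decode_to append to caller-provided buffers instead of
    // allocating new ones (assumes Vec<u8>: ByteWriter and String: StringWriter).
    let mut bytes: Vec<u8> = Vec::new();
    UTF_16BE.encode_to("hi", EncoderTrap::Strict, &mut bytes).unwrap();
    assert_eq!(bytes, vec![0x00, 0x68, 0x00, 0x69]);

    let mut text = String::new();
    UTF_16BE.decode_to(&bytes, DecoderTrap::Strict, &mut text).unwrap();
    assert_eq!(text, "hi");
}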

Derived Implementations

impl<E: Copy> Copy for UTF16Encoding<E>

impl<E: Clone> Clone for UTF16Encoding<E>

fn clone(&self) -> UTF16Encoding<E>

fn clone_from(&mut self, source: &Self)