Can boost::tokenizer tokenize 2byte character string?

18 Sep 2004, 4:39 a.m.
Hi. I am trying to use 'boost::tokenizer<boost::char_separator<char> >' to split two-byte character strings such as Korean, Japanese, or Chinese, but I found that it does not work correctly. Is there a solution?

Thanks for the help,
Lee Joo-Young
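For context, a char-based tokenizer operates on raw bytes, so in a multi-byte encoding (EUC-KR, Shift-JIS, UTF-8) a separator byte can match in the middle of a multi-byte sequence and split a character apart. A minimal sketch of one common workaround: convert the text to wide characters and instantiate the tokenizer over wchar_t, so each CJK character occupies a single unit. The sample string and separator set below are hypothetical, and whether this helps depends on your platform's wide-character encoding.

#include <boost/tokenizer.hpp>
#include <iostream>
#include <string>

int main() {
    // Hypothetical sample: a wide string with spaces/tabs as delimiters.
    // In practice this would hold the Korean/Japanese/Chinese text.
    std::wstring text = L"one two three";

    // char_separator instantiated for wchar_t instead of char, so the
    // tokenizer iterates over whole wide characters rather than raw bytes.
    boost::char_separator<wchar_t> sep(L" \t\n");

    typedef boost::tokenizer<boost::char_separator<wchar_t>,
                             std::wstring::const_iterator,
                             std::wstring> wtokenizer;

    wtokenizer tok(text, sep);
    for (wtokenizer::iterator it = tok.begin(); it != tok.end(); ++it)
        std::wcout << *it << L"\n";
    return 0;
}

The key point is that all three template parameters of boost::tokenizer are widened together: the separator, the iterator type, and the token type must agree on the character type.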