Weeks 6 & 7
There are obvious differences between communicating with humans and communicating with computers. I’d say the main one is understanding vs. decoding. Machine translation has yet to focus on language learning; it takes a decoding approach instead. I imagine this is akin to deciphering each word of a sentence in a foreign language without grasping the meaning of the whole. Anyone who has learned a foreign language knows what I’m talking about. The article I linked to above, by one of my favorite thinkers, Douglas Hofstadter, explains this topic in more detail and more eloquently than I ever could. Writing about Google Translate, a tool that everyone loves to hate yet uses in secret, Hofstadter explains that:
The bai-lingual engine isn’t reading anything — not in the normal human sense of the verb “to read.” It’s processing text. The symbols it’s processing are disconnected from experiences in the world. It has no memories on which to draw, no imagery, no understanding, no meaning residing behind the words it so rapidly flings around.
Essentially, the only way to get a translation engine that would do justice to Eugene Onegin, or to musicals like Hamilton, would be for it to pursue a strategy of understanding rather than decoding. And yes, that would mean being filled with emotions. But for now, machines are not filled with emotions or memories; they have their own hard-coded language that dictates how they behave. So when you wish “I love mochi” to be instantaneously rendered in Japanese, what Google Translate does is encode the sentence into its own computer language (which I am not going to attempt to reproduce), then (hopefully) decode it into the Japanese もちが好きです. It sounds like hiring a very intelligent linguist to translate a language foreign to them into another foreign language, without the possibility of dialing a friend for help.
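To make the “decoding without understanding” point concrete, here is a toy sketch (emphatically not how Google Translate actually works): a purely word-by-word translator backed by a tiny hand-made dictionary I invented for illustration. It swaps symbols for symbols with no notion of grammar or meaning, so it can never arrive at the natural もちが好きです.

```python
# Hypothetical toy dictionary for illustration only -- a real engine
# does not translate this way.
TOY_DICT = {
    "I": "わたし",    # watashi
    "love": "好き",   # suki
    "mochi": "もち",  # mochi
}

def word_by_word(sentence: str) -> str:
    # "Decode" each token independently; unknown words pass through unchanged.
    # There is no understanding here, just symbol substitution.
    return "".join(TOY_DICT.get(word, word) for word in sentence.split())

print(word_by_word("I love mochi"))  # → わたし好きもち
```

The output keeps English word order and drops the particles and politeness that Japanese requires, which is exactly the gap between deciphering the words and grasping the whole.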
I’m a far cry from knowing the ins and outs of machine learning, a beast we don’t touch in my boot camp, but learning programming languages and trying to communicate on their level — writing efficient code so that they work less, translating my ideas into their language — has piqued my interest in how they work with our languages. I’ve only gleaned surface-level information, but with my ties to linguistics, and now to programming languages, I feel some sort of obligation to learn more about Natural Language Processing. But then I look into introductory courses and am strongly reminded of syntax trees and other technical aspects of linguistics that never caught my fancy. For now, I’ll stick to developing aesthetically pleasing data visualizations, maybe with a future goal of revamping the dry diagrams that left me uninspired as a college student.