

Sign Language Translation (SLT) first uses a Sign Language Recognition (SLR) system to extract sign language glosses from videos. A translation system then generates spoken-language translations from the sign language glosses. Though SLT has gathered interest recently, little study has been devoted to the translation system. This paper focuses on the translation system and improves its performance using Transformer networks. We report a wide range of experimental results for various Transformer setups and introduce the use of Spatial-Temporal Multi-Cue (STMC) networks in an end-to-end SLT system with a Transformer. We perform experiments on RWTH-PHOENIX-Weather 2014T, a challenging SLT benchmark dataset of German sign language, and on ASLG-PC12, a dataset involving American Sign Language (ASL) recently used in gloss-to-text translation. On RWTH-PHOENIX-Weather 2014T, our methodology improves on the current state of the art in BLEU-4 score by over 5 points when translating ground-truth glosses and by over 7 points when translating glosses predicted by an STMC network.
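The two-stage pipeline described above can be sketched as follows. This is a toy illustration only: `recognize_glosses` and `translate_glosses` are hypothetical stand-ins for the neural SLR component (e.g. an STMC network) and the Transformer translation component, implemented here with simple lookups.

```python
def recognize_glosses(video_frames):
    """SLR stage: map video frames to a gloss sequence (toy lookup).

    A real system would run a neural recognizer such as an STMC
    network over the raw video.
    """
    # Hypothetical frame-label-to-gloss mapping for illustration.
    frame_to_gloss = {"f1": "MORGEN", "f2": "REGEN", "f3": "KOMMEN"}
    return [frame_to_gloss[f] for f in video_frames if f in frame_to_gloss]


def translate_glosses(glosses):
    """Translation stage: glosses -> spoken-language text (toy rules).

    The paper replaces this with a Transformer sequence-to-sequence
    model trained on gloss/translation pairs.
    """
    lexicon = {"MORGEN": "tomorrow", "REGEN": "rain", "KOMMEN": "is coming"}
    return " ".join(lexicon.get(g, g) for g in glosses)


def translate_video(video_frames):
    # End-to-end SLT: recognition followed by translation.
    return translate_glosses(recognize_glosses(video_frames))
```

The split into two stages is what lets the translation system be studied in isolation, either on ground-truth glosses or on glosses predicted by the recognizer.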

On the ASLG-PC12 corpus, we report an improvement of over 16 points in BLEU-4. Our findings also demonstrate that end-to-end translation on predicted glosses can outperform translation on ground-truth glosses.
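For readers unfamiliar with the metric, the following is a simplified sentence-level BLEU-4 computation: the geometric mean of 1- to 4-gram precisions, scaled by a brevity penalty. The paper's reported scores use standard corpus-level BLEU tooling; this unsmoothed sketch is only meant to show what the score measures.

```python
import math
from collections import Counter


def bleu4(candidate, reference):
    """Unsmoothed sentence-level BLEU-4 with uniform n-gram weights."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((c & r).values())       # clipped n-gram matches
        total = max(sum(c.values()), 1)       # n-grams in the candidate
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # without smoothing, any zero precision zeroes the score
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)
```

A perfect match scores 1.0 (often reported as 100), so a gain of "over 16 points" refers to this 0 to 100 scale.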
