1. Introduction
Tokenization is the first preprocessing step for nearly all Natural Language Processing (NLP) tasks, from sentiment analysis to machine translation. Modern models such as BERT, GPT-3, and XLNet rely on subword tokenization, particularly algorithms like WordPiece, to split text into meaningful units while still handling out-of-vocabulary words. This paper addresses an important but often overlooked problem: the efficiency of the WordPiece tokenization algorithm itself.
The authors observe that the standard greedy longest-match-first strategy (MaxMatch) used in WordPiece has superlinear time complexity: $O(n^2)$ or $O(nk)$, where $n$ is the input length and $k$ is the maximum token length. They propose LinMaxMatch, a novel algorithm that achieves strictly $O(n)$ linear time for tokenizing a single word, together with a combined End-to-End WordPiece algorithm (E2E WordPiece) for general text.
2. Background & Related Work
WordPiece tokenization, as used in BERT, involves two stages: (1) pre-tokenizing the text into words on whitespace and punctuation, then (2) running the MaxMatch algorithm on each word against a fixed vocabulary. MaxMatch, also used in Chinese word segmentation, repeatedly selects the longest prefix of the remaining string that matches a vocabulary token.
Previous efficient algorithms were limited to $O(n^2)$ or $O(nk)$. The factor $k$ is problematic because vocabularies can contain very long tokens (e.g., "supercalifragilisticexpialidocious"). The paper's contribution lies in eliminating this vocabulary-dependent factor entirely.
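For reference, the quadratic baseline can be sketched in a few lines of Python. This is a generic illustration of greedy longest-match-first, not the paper's code; the `##` continuation prefix and the `[UNK]` fallback follow BERT's conventions.

```python
def max_match(word, vocab, unk="[UNK]"):
    """Greedy longest-match-first (MaxMatch) WordPiece tokenization.

    For each position, scan down from the longest possible prefix until a
    vocabulary entry is found: O(n*k) per word, quadratic in the worst case.
    """
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        piece = None
        while end > start:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub  # BERT marks word-internal pieces with "##"
            if sub in vocab:
                piece = sub
                break
            end -= 1
        if piece is None:
            return [unk]  # no prefix matches: the whole word maps to [UNK]
        tokens.append(piece)
        start = end
    return tokens
```

For example, `max_match("unaffable", {"un", "##aff", "##able"})` yields `["un", "##aff", "##able"]`. The inner scan over candidate prefix lengths is exactly the backtracking that LinMaxMatch eliminates.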
3. The LinMaxMatch Algorithm
3.1 Core Idea & Inspiration
LinMaxMatch is inspired by the Aho-Corasick string matching algorithm. The key insight is to treat the vocabulary as a set of patterns and the input word as the text. By precomputing additional structure on the vocabulary trie, the algorithm avoids the backtracking during matching that is the source of the quadratic complexity of naive MaxMatch.
3.2 Data Structures: Trie, Failure Links, and Failure Pops
The algorithm relies on three main data structures, built from the vocabulary during an offline preprocessing phase:
- Trie: A standard prefix tree in which each node represents a character and paths from the root represent vocabulary tokens.
- Failure links (as in Aho-Corasick): For a node representing string $s$, its failure link points to the node representing the longest strict suffix of $s$ that is also a prefix of some vocabulary token. This lets the algorithm "fail over" efficiently.
- Failure pops: The novel addition. When moving from a node along its failure link, the algorithm may "pop" one or more matched tokens ending at that node. This sequence is precomputed and stored.
3.3 Algorithm Walkthrough
For an input word of length $n$, the algorithm processes characters left to right in a single pass, maintaining a current node in the trie. For each new character:
- Try to follow a matching edge from the current node.
- On success, move to the child node. If that node marks the end of a vocabulary token, record it as a candidate match.
- If no matching edge exists, follow the current node's failure link. Collect any tokens stored in the associated failure-pop list. Then retry matching the same input character from the new node.
This scheme ensures each character is handled an amortized constant number of times, yielding the $O(n)$ time guarantee.
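The walkthrough above can be sketched end to end. This is a simplified illustration, not the paper's implementation: it ignores BERT's `##` continuation prefix, and it precomputes failure links and pops naively per node (by re-running greedy matching on each node's string) rather than with the paper's linear-time construction. The failure data of a node holds the greedy tokens popped from its string until the remainder is again a trie prefix.

```python
class Node:
    """Trie node augmented with a failure link and precomputed failure pops."""
    def __init__(self):
        self.children = {}
        self.fail = None   # failure link; None means "no recovery possible"
        self.pops = []     # tokens committed when the failure link is taken

def build(vocab):
    """Build the trie and (naively) precompute failure links and pops."""
    root = Node()
    nodes = {"": root}
    for token in vocab:
        cur, prefix = root, ""
        for ch in token:
            prefix += ch
            cur = cur.children.setdefault(ch, Node())
            nodes[prefix] = cur
    for s, node in nodes.items():
        if not s:
            continue  # the root has no failure link
        pops, rest = [], s
        while rest:
            # Longest vocabulary prefix of the remaining suffix.
            k = max((j for j in range(len(rest), 0, -1) if rest[:j] in vocab),
                    default=0)
            if k == 0:
                pops = None  # dead end: matching from here must fail
                break
            pops.append(rest[:k])
            rest = rest[k:]
            if rest in nodes:
                break  # remainder is a trie prefix again: stop popping
        if pops and rest in nodes:
            node.pops, node.fail = pops, nodes[rest]
    return root

def lin_max_match(word, root, unk="[UNK]"):
    """Single left-to-right pass; failure links replace backtracking."""
    tokens, u = [], root
    for ch in word:
        while ch not in u.children:
            if u.fail is None:
                return [unk]  # unrecoverable: whole word becomes [UNK]
            tokens.extend(u.pops)  # commit the precomputed failure pops
            u = u.fail
        u = u.children[ch]
    while u is not root:  # flush the tokens still on the current path
        if u.fail is None:
            return [unk]
        tokens.extend(u.pops)
        u = u.fail
    return tokens
```

With `root = build({"un", "aff", "able"})`, `lin_max_match("unaffable", root)` yields `["un", "aff", "able"]`, matching greedy MaxMatch without ever rescanning a character.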
4. End-to-End (E2E) WordPiece Tokenization
To tokenize full sentences or documents, the authors propose fusing the initial pre-tokenization step (splitting on whitespace/punctuation) with LinMaxMatch into a single combined algorithm. This E2E WordPiece algorithm also runs in linear time with respect to the total text length, avoiding the overhead of separate passes. It cleverly interleaves boundary detection with subword matching within the same trie-traversal framework.
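For contrast, the two-stage pipeline that E2E WordPiece fuses can be sketched as follows. This is an illustration of the baseline, not the paper's fused algorithm; the regex pre-tokenizer and the `tokenize_word` callback are stand-ins for any per-word tokenizer such as LinMaxMatch.

```python
import re

def two_stage_tokenize(text, tokenize_word):
    """Baseline the fused E2E algorithm replaces: pre-tokenize on
    whitespace/punctuation, then tokenize each word independently."""
    words = re.findall(r"\w+|[^\w\s]", text)  # words and punctuation marks
    pieces = []
    for w in words:
        pieces.extend(tokenize_word(w))  # e.g. MaxMatch / LinMaxMatch
    return pieces
```

The fused E2E algorithm avoids materializing the intermediate `words` list and the second scan over every character.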
5. Experimental Results & Performance
The paper presents convincing benchmarks comparing the authors' implementation against two widely used libraries: HuggingFace Tokenizers and TensorFlow Text.
Performance Comparison
Average speedups:
- 8.2x faster than HuggingFace Tokenizers.
- 5.1x faster than TensorFlow Text.
Measured on general tokenization workloads.
The results show that the theoretical $O(n)$ advantage translates into substantial real-world performance gains, especially for long texts or in latency-sensitive environments such as mobile inference.
6. Key Insights & Analysis
Core Insight
The paper's main achievement is not just a faster tokenizer; it is the recognition of tokenization as a string matching problem solvable with classic automata theory. By reframing WordPiece's MaxMatch from a naive greedy search into an Aho-Corasick variant, the authors bridge a decades-wide gap between theoretical computer science and NLP practice. This is reminiscent of how the CycleGAN paper reframed image translation as a cycle-consistent adversarial learning problem, creating a new framework.
Logical Flow
The argument is carefully structured: 1) Identify the bottleneck ($O(n^2)$/$O(nk)$ complexity). 2) Draw inspiration from a proven efficient algorithm (Aho-Corasick). 3) Adapt it to MaxMatch's specific requirements (the need to emit all matched tokens, not merely test for existence). 4) Introduce a new data structure (failure pops) to handle those requirements. 5) Extend the solution to the full E2E pipeline. This flow from problem to inspiration to adaptation to generalization is applied research at its best.
Strengths & Weaknesses
Strengths: The $O(n)$ guarantee is theoretically elegant and empirically validated. The preprocessing cost is a one-time expense. The impact on mobile/edge NLP could be significant, as the authors note and as supported by the trend toward on-device AI research at institutions such as Google AI.
Weaknesses/Open issues: The paper touches on, but does not deeply analyze, the memory overhead of the augmented trie (failure links & pops). For very large vocabularies (e.g., multilingual models), this may be non-trivial. Furthermore, the benchmarks, while impressive, are tied to specific implementations. A comparison against other efficient MaxMatch variants (e.g., using suffix arrays) would strengthen the claims.
Actionable Takeaways
For NLP practitioners: Profile your tokenization latency now. If it exceeds 1-2% of your total inference time, especially in streaming or mobile applications, adopting an algorithm like LinMaxMatch is a high-ROI optimization. For researchers: This work opens a door. Could similar automata-based approaches speed up Byte-Pair Encoding (BPE) or SentencePiece? The core idea, replacing backtracking with precomputed state transitions, applies to other greedy segmentation algorithms in NLP.
7. Technical Details & Mathematical Formulation
The core of LinMaxMatch can be formalized. Let:
- $V$ = the vocabulary, a set of strings.
- $T$ = the trie built from $V$.
- For a trie node $u$, let $L(u)$ be the string formed by the characters on the path from the root to $u$.
- Failure function $f(u)$: the node $v$ such that $L(v)$ is the longest strict suffix of $L(u)$ for which $v$ is also a node of $T$.
- Failure pops $P(u)$: the vocabulary tokens $t \in V$ such that $t$ is a suffix of $L(u)$ and there is a node $w$ on the path from the root to $f(u)$ (or $u$ itself) with $L(w) = t$. This is precomputed for all nodes.
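The definitions of $f(u)$ and $P(u)$ can be made concrete with a toy computation. The sketch below follows the operational reading of Section 3.3, which is an assumption on my part: pop greedy longest vocabulary prefixes from $L(u)$ until the remainder is again a trie prefix; the popped tokens are $P(u)$ and the remainder identifies $f(u)$.

```python
def failure_data(s, vocab, prefixes):
    """Compute (P(u), L(f(u))) for the node u with L(u) = s.

    `prefixes` is the set of strings represented by trie nodes, i.e. every
    prefix of every vocabulary token, including the empty string (the root).
    """
    pops, rest = [], s
    while rest:
        # Longest vocabulary prefix of the remaining suffix.
        k = max((j for j in range(len(rest), 0, -1) if rest[:j] in vocab),
                default=0)
        if k == 0:
            return None, None  # dead end: triggers [UNK] at match time
        pops.append(rest[:k])
        rest = rest[k:]
        if rest in prefixes:  # remainder is a trie prefix again: stop
            break
    return pops, rest

vocab = {"a", "abc", "b"}
prefixes = {"", "a", "ab", "abc", "b"}  # every prefix of every token
```

For the node with $L(u)=$ "ab", the greedy pop removes "a" and leaves "b", itself a trie prefix, so `failure_data("ab", vocab, prefixes)` returns `(["a"], "b")`.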
Tokenizing an input string $S[1..n]$ proceeds by maintaining a current node $u$ (initially the root) and an index $i$. While $i \leq n$:
- If $u$ has a child $v$ along an edge labeled $S[i]$, set $u = v$ and $i = i+1$. If $u$ marks the end of a vocabulary token, record it.
- Otherwise, emit all tokens in $P(u)$ and set $u = f(u)$, without advancing $i$ (the same character is retried from the new node). If $u$ is the root and no child matches $S[i]$, set $i = i+1$ (handling unknown characters).
The $O(n)$ complexity of this loop follows because each step either advances $i$ or follows a failure link, and the failure transitions amortize to a constant number per character.
8. Research Framework: A Case Study
Scenario: Optimizing tokenization for a real-time voice assistant on a mobile phone. The current pipeline uses a standard WordPiece tokenizer from a popular library.
Applying the framework:
- Baseline measurement: Profiling shows tokenization consumes ~15ms per query, which is 12% of the total pipeline latency (125ms). The target is <100ms.
- Bottleneck analysis: Queries are often long (dictated sentences). The $O(nk)$ factor matters because the vocabulary contains long technical compound tokens.
- Solution design: Replace the tokenizer with LinMaxMatch. The one-time cost of building the augmented trie with failure links/pops is acceptable at model deployment time.
- Expected outcome: Based on the paper's 5-8x speedups, tokenization time should drop to ~2-3ms, reducing total latency to ~112ms, a solid step toward the target. The constant-time-per-character guarantee also improves latency predictability.
This case study illustrates how an algorithm's theoretical properties directly address concrete engineering constraints in modern AI systems.
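The expected-outcome figures follow from simple arithmetic. The numbers below restate the case study's assumptions (15ms of tokenization within a 125ms pipeline) and take the conservative 5x end of the reported speedup range; at the 8.2x end, tokenization drops below 2ms and the pipeline lands near 112ms.

```python
total_latency = 125.0  # ms, total pipeline latency from the profile
tok_before = 15.0      # ms spent in tokenization (12% of the total)
speedup = 5.0          # conservative end of the reported 5-8x range

tok_after = tok_before / speedup                    # 3.0 ms
new_total = total_latency - tok_before + tok_after  # 113.0 ms
print(f"tokenization: {tok_before:.0f}ms -> {tok_after:.0f}ms; "
      f"pipeline: {total_latency:.0f}ms -> {new_total:.0f}ms")
```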
9. Application Outlook & Future Directions
The impact of efficient linear-time tokenization extends beyond BERT:
- On-device & edge AI: As models like the GPT family are distilled for phones, every millisecond saved in preprocessing compounds. This work is foundational for the next generation of responsive, offline-capable NLP applications.
- Streaming tokenization: For real-time translation or live captioning, $O(n)$ complexity guarantees predictable performance. Future work could explore incremental tokenization for unbounded streams.
- Multilingual & large vocabularies: Scaling to vocabularies with 1M+ tokens across 100+ languages. Research is needed on reducing the memory footprint of the precomputed structures, perhaps via trie compression or approximation techniques.
- Integration with model architecture: Could tokenization be learned or fused with the first Transformer layer? Context-dependent "dynamic tokenization" remains an open challenge, but efficient base algorithms like this one are a prerequisite for such research.
10. References
- Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL-HLT.
- Aho, A. V., & Corasick, M. J. (1975). Efficient string matching: an aid to bibliographic search. Communications of the ACM.
- Kudo, T. (2018). Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates. ACL.
- Wolf, T., et al. (2020). HuggingFace's Transformers: State-of-the-art Natural Language Processing. arXiv preprint arXiv:1910.03771.
- Google. (2020). TensorFlow Text. https://www.tensorflow.org/text
- Brown, T. B., et al. (2020). Language Models are Few-Shot Learners. NeurIPS.
- Zhu, J.-Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. ICCV. (Cited as an example of framework-shifting reformulation.)