MediaWiki API result
This is the HTML representation of the JSON format. HTML is good for debugging, but is unsuitable for application use.
Specify the format parameter to change the output format. To see the non-HTML representation of the JSON format, set format=json.
See the complete documentation, or the API help for more information.
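As the note above says, this HTML view is only for debugging; an application would request the same `action=compare` result with `format=json` and parse it. A minimal sketch of that, assuming a placeholder `api.php` endpoint (substitute your wiki's actual script path) and using the revision IDs visible in the result below:

```python
import json
from urllib.parse import urlencode

# Build the request that returns this same comparison as machine-readable
# JSON (format=json) rather than the HTML debugging view. The endpoint URL
# is a placeholder assumption; substitute your wiki's real api.php path.
API_URL = "https://example.org/w/api.php"

params = {
    "action": "compare",
    "fromrev": 1,   # revision IDs taken from fromrevid/torevid in the result below
    "torev": 2,
    "format": "json",
}
request_url = API_URL + "?" + urlencode(params)

# A response parsed with json.loads() has the same shape as the result
# below: the rendered diff table rows live under the "*" key.
sample = json.loads(
    '{"compare": {"fromtitle": "Main Page", '
    '"totitle": "Large Language Model", "*": "<tr>...</tr>"}}'
)
compare = sample["compare"]
print(compare["fromtitle"], "->", compare["totitle"])
diff_html = compare["*"]  # raw <tr> rows of the diff table
```

Fetching `request_url` with any HTTP client and feeding the body to `json.loads()` yields the same structure shown below, minus the HTML wrapper.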
{
"compare": {
"fromid": 1,
"fromrevid": 1,
"fromns": 0,
"fromtitle": "Main Page",
"toid": 2,
"torevid": 2,
"tons": 0,
"totitle": "Large Language Model",
"*": "<tr><td colspan=\"2\" class=\"diff-lineno\" id=\"mw-diff-left-l1\">Line 1:</td>\n<td colspan=\"2\" class=\"diff-lineno\">Line 1:</td></tr>\n<tr><td class=\"diff-marker\" data-marker=\"\u2212\"></td><td class=\"diff-deletedline diff-side-deleted\"><div><del class=\"diffchange diffchange-inline\"><strong>MediaWiki has been installed</del>.<del class=\"diffchange diffchange-inline\"></strong></del></div></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">{{Short description|Type of artificial neural network for natural language processing}}</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">{{Infobox artificial intelligence</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">| name\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = Large language model</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">| image\u00a0 \u00a0 \u00a0 \u00a0 = [[File:Transformer model architecture</ins>.<ins class=\"diffchange diffchange-inline\">svg|250px]]</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">| caption\u00a0 \u00a0 \u00a0 = The transformer architecture, the foundation of most modern large language models</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" 
data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">| invented_by\u00a0 = [[Vaswani et al.]] (Google Brain, 2017)</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">| latest_release_version = Various (e.g., GPT-4o, Claude 3.5, Grok 3, Llama 4, Gemini 2)</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">| latest_release_date\u00a0 \u00a0 = 2024\u20132026</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">| genre\u00a0 \u00a0 \u00a0 \u00a0 = [[Natural language processing]], [[Generative artificial intelligence]]</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">| license\u00a0 \u00a0 \u00a0 = Varies (proprietary or open-source)</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">}}</ins></div></td></tr>\n<tr><td class=\"diff-marker\"></td><td class=\"diff-context diff-side-deleted\"><br></td><td class=\"diff-marker\"></td><td class=\"diff-context diff-side-added\"><br></td></tr>\n<tr><td class=\"diff-marker\" data-marker=\"\u2212\"></td><td class=\"diff-deletedline diff-side-deleted\"><div><del class=\"diffchange diffchange-inline\">Consult the </del>[<del 
class=\"diffchange diffchange-inline\">https://www</del>.<del class=\"diffchange diffchange-inline\">mediawiki.org/wiki/Special:MyLanguage/Help:Contents User's Guide</del>] <del class=\"diffchange diffchange-inline\">for information on using the wiki software</del>.</div></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">A '''large language model''' ('''LLM''') is a type of </ins>[<ins class=\"diffchange diffchange-inline\">[artificial neural network]] trained on vast amounts of text data to understand, generate, and manipulate human language</ins>. <ins class=\"diffchange diffchange-inline\">LLMs are a core technology behind modern [[generative artificial intelligence]] systems such as [[ChatGPT]], [[Claude (chatbot)|Claude]], [[Grok (chatbot)|Grok]], and [[Gemini (chatbot)|Gemini]</ins>].</div></td></tr>\n<tr><td class=\"diff-marker\"></td><td class=\"diff-context diff-side-deleted\"><br></td><td class=\"diff-marker\"></td><td class=\"diff-context diff-side-added\"><br></td></tr>\n<tr><td class=\"diff-marker\" data-marker=\"\u2212\"></td><td class=\"diff-deletedline diff-side-deleted\"><div>== <del class=\"diffchange diffchange-inline\">Getting started </del>==</div></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>== <ins class=\"diffchange diffchange-inline\">History </ins>==</div></td></tr>\n<tr><td class=\"diff-marker\" data-marker=\"\u2212\"></td><td class=\"diff-deletedline diff-side-deleted\"><div>* [https://<del class=\"diffchange diffchange-inline\">www.mediawiki</del>.org/<del class=\"diffchange diffchange-inline\">wiki</del>/<del class=\"diffchange diffchange-inline\">Special:MyLanguage</del>/<del class=\"diffchange diffchange-inline\">Manual</del>:<del class=\"diffchange diffchange-inline\">Configuration_settings Configuration settings list</del>]</div></td><td class=\"diff-marker\" 
data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td class=\"diff-marker\" data-marker=\"\u2212\"></td><td class=\"diff-deletedline diff-side-deleted\"><div>* [<del class=\"diffchange diffchange-inline\">https</del>:/<del class=\"diffchange diffchange-inline\">/www</del>.<del class=\"diffchange diffchange-inline\">mediawiki</del>.<del class=\"diffchange diffchange-inline\">org/wiki/Special</del>:<del class=\"diffchange diffchange-inline\">MyLanguage/Manual</del>:<del class=\"diffchange diffchange-inline\">FAQ MediaWiki FAQ]</del></div></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">The foundations of large language models trace back to early statistical language models and [[recurrent neural networks]] (RNNs). Key milestones include:</ins></div></td></tr>\n<tr><td class=\"diff-marker\" data-marker=\"\u2212\"></td><td class=\"diff-deletedline diff-side-deleted\"><div>* <del class=\"diffchange diffchange-inline\">[https</del>:<del class=\"diffchange diffchange-inline\">//lists</del>.<del class=\"diffchange diffchange-inline\">wikimedia</del>.<del class=\"diffchange diffchange-inline\">org/postorius/lists/mediawiki</del>-<del class=\"diffchange diffchange-inline\">announce</del>.<del class=\"diffchange diffchange-inline\">lists</del>.<del class=\"diffchange diffchange-inline\">wikimedia</del>.<del class=\"diffchange diffchange-inline\">org</del>/ <del class=\"diffchange diffchange-inline\">MediaWiki release mailing list</del>]</div></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td class=\"diff-marker\" data-marker=\"\u2212\"></td><td class=\"diff-deletedline diff-side-deleted\"><div>* [https://<del class=\"diffchange diffchange-inline\">www.mediawiki</del>.org/<del class=\"diffchange diffchange-inline\">wiki</del>/<del 
class=\"diffchange diffchange-inline\">Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language</del>]</div></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>* <ins class=\"diffchange diffchange-inline\">'''2017''': The seminal paper </ins>[<ins class=\"diffchange diffchange-inline\">\"Attention Is All You Need\"](</ins>https://<ins class=\"diffchange diffchange-inline\">arxiv</ins>.org/<ins class=\"diffchange diffchange-inline\">abs</ins>/<ins class=\"diffchange diffchange-inline\">1706.03762) by [[Ashish Vaswani]] and colleagues at Google introduced the '''[[transformer (machine learning model)|transformer]]''' architecture, which replaced recurrent layers with self-attention mechanisms, enabling much better parallelization and scaling.<ref name=\"transformer\">{{cite journal |last1=Vaswani |first1=Ashish |title=Attention Is All You Need |journal=Advances in Neural Information Processing Systems |date=2017}}<</ins>/<ins class=\"diffchange diffchange-inline\">ref></ins></div></td></tr>\n<tr><td class=\"diff-marker\" data-marker=\"\u2212\"></td><td class=\"diff-deletedline diff-side-deleted\"><div>* [https://<del class=\"diffchange diffchange-inline\">www</del>.<del class=\"diffchange diffchange-inline\">mediawiki.org</del>/<del class=\"diffchange diffchange-inline\">wiki</del>/<del class=\"diffchange diffchange-inline\">Special</del>:<del class=\"diffchange diffchange-inline\">MyLanguage/Manual</del>:<del class=\"diffchange diffchange-inline\">Combating_spam Learn how to combat spam on your wiki</del>]</div></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''2018'''</ins>: <ins class=\"diffchange 
diffchange-inline\">[[OpenAI]</ins>] <ins class=\"diffchange diffchange-inline\">released [[GPT (language model)|GPT-1]], followed by GPT-2 in 2019, demonstrating the power of scaling up transformer-based models.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>* <ins class=\"diffchange diffchange-inline\">'''2020''': [</ins>[<ins class=\"diffchange diffchange-inline\">GPT-3]] with 175 billion parameters showed emergent abilities such as few-shot learning, sparking widespread public interest.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''2022\u20132023''': The release of [[ChatGPT]] (based on GPT-3.5 and later GPT-4) brought LLMs into mainstream use. 
Open-source models like [[Meta]]'s [[Llama]] series and [[Mistral AI]] models democratized access.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''2024\u20132026''': Continued scaling with multimodal models (text + image + audio), longer context windows (millions of tokens), and reasoning-focused architectures.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">== Architecture ==</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">Most modern LLMs are based on the '''decoder-only transformer''' architecture:</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''Self-attention''' mechanism 
that allows the model to weigh the importance of different words in a sequence.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''Feed-forward neural networks''' applied at each position.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''Layer normalization''' and '''residual connections''' for stable training.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''Positional encoding''' (or rotary embeddings like RoPE) to handle sequence order.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">Key variants include:</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Encoder-decoder (e.g., original T5, BART)</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Decoder-only (most popular for generative tasks: GPT, Llama, Grok, Mistral)</ins></div></td></tr>\n<tr><td 
colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Mixture-of-Experts (MoE) architectures (e.g., Mixtral, Grok-1) that activate only a subset of parameters per token for efficiency.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">== Training ==</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">LLMs undergo two main training phases:</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">=== Pre-training ===</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''Objective''': Next-token prediction (causal language modeling) or masked language modeling.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td 
class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''Data''': Trillions of tokens from web crawls (Common Crawl), books, Wikipedia, code repositories, scientific papers, and more.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''Compute'''</ins>: <ins class=\"diffchange diffchange-inline\">Trained on thousands of GPUs</ins>/<ins class=\"diffchange diffchange-inline\">TPUs for weeks or months using massive distributed training frameworks.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">=== Post-training (alignment) ===</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''Supervised fine-tuning''' (SFT) on high-quality instruction datasets</ins>.</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''Reinforcement Learning from Human Feedback''' (RLHF) or alternatives like Direct Preference Optimization (DPO) to make outputs more helpful, honest, and harmless</ins>.</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline 
diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">== Capabilities ==</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">Large language models can perform a wide range of tasks</ins>:</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Text generation, summarization, translation, and rewriting</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Question answering and knowledge retrieval</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Code generation and debugging</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Mathematical reasoning (improved in recent models)</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange 
diffchange-inline\">* Creative writing, role-playing, and conversation</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Multimodal understanding (in models like GPT-4o, Gemini, Claude 3)</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">'''Emergent abilities''' appear as models scale</ins>: <ins class=\"diffchange diffchange-inline\">abilities not explicitly trained for but that arise at certain parameter thresholds.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">== Limitations and Challenges ==</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>* <ins class=\"diffchange diffchange-inline\">'''Hallucinations'''</ins>: <ins class=\"diffchange diffchange-inline\">Generating plausible but factually incorrect information.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td 
class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''Context window''' limits (though rapidly expanding to 1M+ tokens).</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''Bias and toxicity''' inherited from training data.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''High computational cost''' for training and inference</ins>.</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''Lack of true understanding''' \u2014 models predict patterns rather than comprehend meaning</ins>.</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* '''Reasoning limitations''': Struggle with complex multi-step problems without techniques like chain-of</ins>-<ins class=\"diffchange diffchange-inline\">thought prompting</ins>.</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">== Notable Models ==</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td 
class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">{| class=\"wikitable sortable\"</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">! Model !! Developer !! Parameters !! Release !! Notes</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">|-</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">| [[GPT-4]] || OpenAI || Undisclosed (~1</ins>.<ins class=\"diffchange diffchange-inline\">7T rumored) || 2023 || Multimodal, strong reasoning</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">|-</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">| [[Claude 3</ins>.<ins class=\"diffchange diffchange-inline\">5 Sonnet]] || Anthropic || Undisclosed || 2024\u20132025 || Known for safety and coding</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins 
class=\"diffchange diffchange-inline\">|-</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">| [[Llama 3]] / [[Llama 4]] || Meta || 8B\u2013405B+ || 2024\u20132025 || Open weights</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">|-</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">| [[Grok (chatbot)|Grok]] series || xAI || Various || 2023\u20132026 || Built for maximum truth-seeking and humor</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">|-</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">| [[Gemini (chatbot)|Gemini]] || Google || Various || 2023\u20132025 || Deep integration with Google ecosystem</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">|-</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">| [[Mistral Large]] </ins>/ <ins class=\"diffchange diffchange-inline\">Mixtral || 
Mistral AI || Various || 2023\u20132025 || Efficient open models</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">|}</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">== Societal Impact ==</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">LLMs have transformed industries including:</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Software development (GitHub Copilot, Cursor)</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Education and research assistance</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Content creation and customer service</ins></div></td></tr>\n<tr><td colspan=\"2\" 
class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Scientific discovery (e.g., AlphaFold integration, materials science)</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">Concerns include:</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Job displacement in writing, coding, and analysis roles</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Misinformation and deepfakes</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Intellectual property and copyright issues</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Existential risk debates regarding artificial general intelligence</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" 
class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">== Ethical and Safety Considerations ==</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">Major labs implement various safety measures:</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Constitutional AI (Anthropic)</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* System prompts and guardrails</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Red teaming for adversarial testing</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Watermarking and detection tools for AI-generated content</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" 
data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">== See also ==</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* [[Transformer (machine learning model)]]</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* [[Generative pre-trained transformer]]</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* [[Artificial general intelligence]]</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* [[Prompt engineering]]</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* [[AI alignment]</ins>]</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">== References ==</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline 
diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">{{Reflist}}</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">== External links ==</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>* [https://<ins class=\"diffchange diffchange-inline\">arxiv</ins>.org/<ins class=\"diffchange diffchange-inline\">abs</ins>/<ins class=\"diffchange diffchange-inline\">1706.03762 \"Attention Is All You Need\"</ins>] <ins class=\"diffchange diffchange-inline\">\u2014 foundational transformer paper</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>* [https://<ins class=\"diffchange diffchange-inline\">openai</ins>.<ins class=\"diffchange diffchange-inline\">com</ins>/<ins class=\"diffchange diffchange-inline\">research</ins>/<ins class=\"diffchange diffchange-inline\">gpt-4 GPT-4 Technical Report]</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">* Various model cards on [[Hugging Face]]</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div>\u00a0</div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td 
class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">[[Category:Artificial intelligence]]</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">[[Category</ins>:<ins class=\"diffchange diffchange-inline\">Natural language processing]]</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange diffchange-inline\">[[Category</ins>:<ins class=\"diffchange diffchange-inline\">Machine learning]</ins>]</div></td></tr>\n"
}
}
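
As the header notes, this HTML view is for debugging only; an application should request the same comparison with `format=json`. A minimal sketch using only the Python standard library — the `example.org` host is a placeholder for the target wiki's `api.php`, and the revision IDs mirror the `fromrevid`/`torevid` values in the result above:

```python
from urllib.parse import urlencode

# Build the action=compare request corresponding to this result
# (fromrevid=1, torevid=2 in the "compare" object above).
params = {
    "action": "compare",
    "fromrev": 1,
    "torev": 2,
    "format": "json",  # machine-readable JSON instead of this HTML debug view
}
url = "https://example.org/w/api.php?" + urlencode(params)
print(url)

# In the JSON response, the diff table HTML is carried in compare["*"],
# alongside the fromtitle/totitle metadata shown above, e.g.:
#   data = json.loads(response_body)
#   diff_html = data["compare"]["*"]
```

Note that `compare["*"]` contains the same `<tr>`-based diff rows seen in this page, just as a JSON string rather than HTML-escaped debug output.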