AI Glossary

AI (Artificial Intelligence): AI is a subfield of computer science that deals with the simulation of intelligent behavior in computers. It studies algorithms for tasks that require human intelligence, such as problem solving, decision making, and natural language processing. AI technologies are used in search engines, robotics, finance, healthcare, and digital marketing.
AI upscaler: An AI upscaler uses machine learning to increase the resolution of images. It analyzes the original image, identifies patterns and textures, and reconstructs them at a higher resolution, synthesizing details that are not present in the original to produce a sharper, more detailed picture. This technique is often used to enhance older or low-resolution images.
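As a rough illustration of the architecture behind learned upscalers, here is a minimal PyTorch sketch. `TinyUpscaler` is a hypothetical toy model, untrained and far smaller than real upscaling networks:

```python
# Minimal sketch of a learned 2x upscaler (toy model, untrained).
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),   # extract features from the low-res image
            nn.ReLU(),
            nn.Conv2d(64, 3 * scale**2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),                       # rearrange channels into a higher-resolution image
        )

    def forward(self, x):
        return self.net(x)

model = TinyUpscaler()
low_res = torch.rand(1, 3, 64, 64)   # dummy 64x64 RGB image
high_res = model(low_res)            # shape: (1, 3, 128, 128)
```

In practice such a network would be trained on pairs of low- and high-resolution images so that the added details match real textures.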
Backpropagation Phase: In the backpropagation phase of a neural network, the error between the produced output and the expected value is calculated. This error is propagated backwards through the network to adjust the weights, with the aim of minimizing the error and improving the model's accuracy.
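As a concrete illustration, here is one backpropagation step for a single sigmoid neuron with squared error, written in plain numpy (all values are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2])   # input
w = np.array([0.3, 0.8])    # weights to be adjusted
target = 1.0                # expected value
lr = 0.1                    # learning rate

# Feed forward: compute the neuron's output
y = sigmoid(w @ x)

# Backpropagation: gradient of E = 0.5 * (y - target)**2 w.r.t. the weights,
# obtained via the chain rule through the sigmoid
grad_w = (y - target) * y * (1 - y) * x

# Adjust the weights to reduce the error
w -= lr * grad_w
```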
BERT (Bidirectional Encoder Representations from Transformers): A language model developed by Google, introduced in 2018 and used in Natural Language Processing (NLP) technology. BERT differs from previous models through its bidirectional capture of word context in texts. There are two main versions: BERT-Base with 110 million and BERT-Large with 340 million parameters, both pretrained on extensive text collections, including the Toronto BookCorpus and the English Wikipedia.
BERT uses an encoder-only architecture and the WordPiece tokenization system. It is pretrained on two main tasks: the prediction of randomly masked words and the prediction of the next sentence. This methodology allows BERT to capture deeper contextual information. Despite its impressive performance on various NLP tasks, BERT has limitations, such as its inability to generate text. The technology was quickly integrated into applications like Google Search and has influenced the development of subsequent, more advanced language models.
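As a brief illustration of masked word prediction, assuming the Hugging Face transformers library is installed (the `bert-base-uncased` checkpoint is downloaded on first use):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT predicts the token behind [MASK] from both left and right context.
for candidate in fill_mask("Paris is the [MASK] of France."):
    print(candidate["token_str"], round(candidate["score"], 3))
```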
Diffusion Model: Diffusion models are generative AI models for image generation based on forward and reverse diffusion. First, noise is gradually added to a clear image; then this noise is removed to produce a detailed image. They rely on machine learning and advanced algorithms and are known for creating high-quality, creative images. Their strength lies in the ability to generate unique images from random noise.
DL (Deep Learning): DL, a subfield of machine learning, uses algorithms inspired by the structure and function of the brain, known as artificial neural networks. Deep Learning recognizes patterns and makes predictions from large datasets and is used in NLP, computer vision, and speech recognition.
Feed Forward Neural Networks: Feed Forward Neural Networks are a type of artificial neural network where information flows in one direction: from the input layer to the output layer, without any feedback loops. They consist of layers of neurons, with each layer fully connected to the next.
Feed Forward Phase: The feed forward phase is the part of a neural network's operation in which information flows from the input layer through the hidden layers to the output layer. Each neuron in a layer processes its inputs, performs calculations, and passes the result to the next layer.
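A minimal numpy sketch of one feed forward pass through such a network; the weights here are random placeholders rather than learned values:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)            # input layer: 4 features

W1 = rng.random((8, 4))      # input -> hidden layer (8 neurons)
W2 = rng.random((2, 8))      # hidden -> output layer (2 neurons)

hidden = np.maximum(0, W1 @ x)   # each neuron: weighted sum, then ReLU
output = W2 @ hidden             # information flows forward only
```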
Forward Diffusion: Forward Diffusion is a step in diffusion models where a clear image is gradually overlaid with noise. In this process, the image becomes less clear and structured, increasingly resembling random noise. The original pixel values are progressively swamped by random ones, resulting in an ever noisier image. This step prepares for the reversal of the process, known as Reverse Diffusion.
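A rough numpy sketch of this step as used in DDPM-style diffusion models, where the noise level at step t is controlled by a schedule (the value of `alpha_bar` below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.random((64, 64))            # stand-in for a clear grayscale image
alpha_bar = 0.1                      # small value = late step = mostly noise
noise = rng.standard_normal(x0.shape)

# Noisy image at step t: a mix of the original image and Gaussian noise
xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * noise
```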
Foundation Model: A foundation model is a large machine learning model trained on vast amounts of data and adaptable to a wide range of tasks, significantly transforming AI system development. Examples include Google's BERT and OpenAI's GPT series. These models are typically trained using self-supervised or semi-supervised learning and are applied in areas like language processing, image recognition, and multimodal tasks.
GPT (Generative Pretrained Transformer): GPT is a natural language processing model trained on large amounts of text to generate new text. It is used to generate text across topics, languages, and genres, and for applications such as question answering, summarization, and dialogue systems.
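A short illustration of GPT-style text generation, assuming the Hugging Face transformers library and the openly available GPT-2 checkpoint:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence is", max_new_tokens=30)
print(result[0]["generated_text"])
```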
Inference: In AI, inference refers to the process in which a trained model processes new data and makes predictions or decisions based on it. This phase is crucial for applying the model in real-world scenarios.
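A minimal PyTorch sketch of this phase; the untrained `nn.Linear` layer stands in for a model that has already been trained:

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 1)   # stand-in for a trained model
model.eval()              # switch to inference mode

new_data = torch.rand(1, 3)
with torch.no_grad():     # no gradients needed when only predicting
    prediction = model(new_data)
print(prediction)
```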
LLM (Large Language Model): An LLM is a machine learning model, typically based on neural networks, trained on large amounts of text data to understand and generate human-like text. These models perform NLP tasks such as translation, summarization, and sentiment analysis, and capture complex language patterns.
ML (Machine Learning): ML, a subfield of AI, deals with the development of algorithms that can learn from data and make predictions. It is used to recognize patterns in large datasets and make predictions, with applications in NLP, computer vision, and speech recognition.
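A tiny scikit-learn sketch of the core idea, learning a pattern (here y = 2x) from data instead of being explicitly programmed:

```python
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4]]
y = [2, 4, 6, 8]

model = LinearRegression().fit(X, y)   # learn the pattern from the data
print(model.predict([[5]]))            # close to [10.]
```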
MMLU (Massive Multitask Language Understanding): The MMLU benchmark comprises 15,908 multiple-choice questions drawn from freely accessible online sources and manually collected by students. The questions cover 57 subject areas, spanning STEM fields as well as the humanities and social sciences, and vary in difficulty from basic general knowledge to advanced expert knowledge. The benchmark is divided into three parts: a few-shot development set, a validation set, and a test set, with the test set alone containing 14,079 questions. MMLU was developed to enable a comprehensive assessment of the understanding and problem-solving abilities of language models, posing a challenge that goes well beyond simple basic knowledge.
NLP (Natural Language Processing): NLP, a subfield of AI, focuses on understanding and interacting with human language. Its techniques are used in automatic text summarization, sentiment analysis, machine translation, dialogue systems, and question-answering systems, relying on machine learning, deep learning, and natural language understanding.
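A brief illustration of one such task, sentiment analysis, assuming the Hugging Face transformers library (the pipeline downloads a default model on first use):

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("This glossary is really helpful!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```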
Reverse Diffusion: Reverse Diffusion is the process in diffusion models where noise is removed from a noisy image to recover a clear picture. This step employs algorithms and machine learning to progressively denoise and clarify the image. Reverse Diffusion is crucial for creating detailed images from chaotic noise, showcasing the creative power of diffusion models.
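In DDPM-style models, the reverse step is commonly written as a learned Gaussian denoising distribution (standard notation; theta denotes the learned network parameters):

```latex
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\big)
```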
Toronto BookCorpus: The Toronto BookCorpus is a collection of digital books used for training Natural Language Processing (NLP) models. Compiled by researchers at the University of Toronto, it covers a wide range of genres and topics. This diversity makes it a useful resource for training language models like BERT, enabling them to develop versatile language understanding.