What is Generative AI

Generative AI is a type of artificial intelligence that involves creating machines or computer programs that can generate new content, such as images, text, or music. Unlike traditional AI systems that rely on predefined rules or pre-existing data to make decisions, generative AI models use algorithms and neural networks to learn patterns and relationships in data and generate new outputs based on that learning.


Features

Personalization

Generative AI can be used to create personalized learning experiences for each student. By analyzing data about each student’s learning style, interests, and abilities, the AI can generate content that is tailored to their individual needs. 

Accessibility

Generative AI can help make education more accessible to students with disabilities. For example, it can generate text-to-speech or sign language translations of written content.

Learning from Data

Generative AI models are trained on large amounts of data, which enables them to learn patterns, structures, and statistical relationships present in the data.

Creativity

Generative AI can produce content that is not based on existing data but rather uses learned patterns to create something new and original.

Adaptability

Generative AI can learn and adapt to new data and generate content that is consistent with the input.


Considerations

Controlling the output of generative AI models is an active area of research. While some techniques exist for steering the outputs, it remains a challenge to achieve precise control over the generated content.

Training generative AI models can require significant computational resources, especially for large-scale models like ChatGPT, which can result in system downtime and degraded performance. This may be a hindrance to business continuity.

Generative AI models need access to large amounts of data to generate new content. However, this data could include personal information, such as photos or text messages, which could be used to identify individuals. There is a risk that this data could be compromised or used for nefarious purposes.

When generative AI creates new content, it can be difficult to determine who owns the resulting work. This raises questions about intellectual property rights and who has the right to use or distribute the generated content.

Generative AI can be used to create fake images or videos, or to impersonate individuals. This raises concerns about the potential for misuse, such as creating false evidence or spreading disinformation.

Generative AI models can be biased towards certain groups or types of content, which can perpetuate existing inequalities and discrimination. For example, if a generative AI model is trained on a dataset that is biased towards a particular race or gender, it may generate content that is also biased.

Generative AI models can be used to re-identify individuals in images or videos, even if they have been anonymized. This can lead to privacy invasion, particularly if the generated content is used in a way that the individual did not consent to.

The rules, guidelines, and terms of service that generative AI platforms have in place are subject to frequent updates and modifications. These policies can include a wide range of terms and conditions that govern the use of their platform’s products or services, such as pricing, privacy, and intellectual property rights.

While generative AI can personalize learning, it cannot replace the value of human interaction in education. Over-reliance on AI could lead to a lack of meaningful interactions between students and teachers, which can impact social and emotional learning.

Generative AI can generate a large amount of content quickly, but the quality and accuracy of that content may be questionable. There is a risk that AI-generated content could contain errors or perpetuate biases, which could mislead students.

Using generative AI requires access to technology and a reliable internet connection. This could create a digital divide, where students who lack access to technology and the internet are left behind.


Terminology

Key terminology frequently used when discussing generative AI can be found below.

AI involves creating algorithms and systems that can perform tasks which typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. For example, a virtual assistant like Siri or Alexa uses AI to understand and respond to voice commands.

 Keywords:    Automation, Cognitive Computing, Intelligent Systems, Machine Learning

These are pairs of neural networks that contest with each other. In a Generative Adversarial Network (GAN), one network generates new data while the other evaluates its authenticity. For instance, a GAN can create realistic images of human faces that don’t exist in reality.

 Keywords:   Deep Learning, Discriminator, Generative Adversarial Networks (GANs), Generator

In AI, agents are autonomous entities that observe their environment through sensors and act upon it with actuators to achieve specific goals. A Roomba vacuum cleaner is an example of an AI agent that navigates a room and cleans it autonomously.

 Keywords:   AI Systems, Autonomous, Autonomous Systems, Decision-making, Intelligent Agents, Robotics

AGI represents a future where AI systems can understand, learn, and apply intelligence across a wide range of tasks at a level comparable to human beings. An example would be a robot that can learn to cook, clean, converse, and play games without being explicitly programmed for each task.

 Keywords:   Cognitive AI, Human-like Intelligence, Strong AI,

This refers to AI solutions provided over the cloud, allowing users to utilize AI functionalities without managing the underlying infrastructure. An example is Google Cloud AI, which offers various AI tools that developers can integrate into their applications.

 Keywords:   AI Platforms, Cloud AI, Cloud-based AI, PaaS, Platform Services, SaaS

AI bias occurs when an algorithm produces systematically prejudiced results due to biased data or flawed assumptions. For example, a facial recognition system might fail to accurately identify individuals from certain ethnic groups if it wasn’t trained on a diverse dataset.

 Keywords:   Algorithmic Bias, Discrimination, Ethics, Unfairness

AI governance encompasses the policies, regulations, and ethical considerations that guide the responsible development and deployment of AI technologies. For instance, the European Union’s General Data Protection Regulation (GDPR) includes provisions for AI and data privacy.

 Keywords:   AI Ethics, Ethics, Oversight, Policy, Regulation

Annotation is the process of adding informative labels to data, which is essential for training machine learning models. For example, annotating images with the locations of objects within them to train an object detection model.

 Keywords:   Data Tagging, Labeling, Supervised Learning, Training Data

An API is a set of protocols for building and interacting with software applications. For example, the Twitter API allows developers to programmatically access Twitter’s functionalities, such as posting tweets or retrieving user data.

Keywords:   Access Point, Application Programming Interface, Integration, Interface, Software, Software Interface

These are computing systems vaguely inspired by the biological neural networks that constitute animal brains. An example is a convolutional neural network (CNN) used for image recognition tasks.

 Keywords:   Activation Functions, Backpropagation, Deep Learning, Neural Networks, Neurons

ASI refers to a level of AI that not only mimics but surpasses human capabilities across all domains. An example would be an AI that can invent new scientific theories or create complex art beyond human abilities.

 Keywords:   Future AI, Singularity, Superhuman AI, Transcendence

Attention mechanisms allow neural networks to focus on specific parts of input data. They are commonly used in sequence-to-sequence models, natural language processing (NLP), and image captioning. For example, in machine translation, attention helps the model align relevant words in the source and target languages; when translating English to French, it ensures that the French word for “apple” corresponds to the correct English word.

Keywords: Focus Weighting, Context Vectors, Transformer Models
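
The core of an attention mechanism can be written in a few lines. The sketch below, using NumPy, computes scaled dot-product attention for a toy set of query, key, and value matrices; the matrices are random and purely illustrative:

```python
# A minimal sketch of scaled dot-product attention using NumPy.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each row of Q attends over the rows of K and mixes the rows of V."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # similarity between queries and keys
    weights = softmax(scores, axis=-1)        # attention weights sum to 1 per query
    return weights @ V                        # weighted combination of values

# Toy example: 2 queries, 3 source tokens, embedding size 4
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
print(attention(Q, K, V).shape)  # (2, 4)
```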

AutoML automates the process of selecting and tuning machine learning models. For example, Google’s AutoML Vision helps developers create custom image recognition models without extensive machine learning expertise.

 Keywords:   Automated Pipelines, Feature Engineering, Model Optimization, Model Selection,

Backpropagation is an optimization algorithm used to train neural networks. It adjusts the model’s weights based on the gradient of the loss function, enabling the network to learn from labeled data. For example, when training an image classifier, backpropagation updates the weights to minimize the difference between predicted and actual labels for a given image dataset.

 Keywords:  Gradient Descent, Chain Rule, Weight Update
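
A toy illustration of the idea, assuming a single linear neuron with a squared-error loss; the numbers are invented for demonstration:

```python
# Backpropagation-style weight update for one linear neuron, using NumPy.
import numpy as np

x = np.array([1.0, 2.0])      # one training input
y_true = 3.0                  # its label
w = np.array([0.5, -0.5])     # initial weights
lr = 0.1                      # learning rate

for step in range(20):
    y_pred = w @ x                        # forward pass
    loss = (y_pred - y_true) ** 2         # squared-error loss
    grad = 2 * (y_pred - y_true) * x      # gradient of the loss w.r.t. w (chain rule)
    w -= lr * grad                        # gradient-descent weight update

print(w, loss)  # weights that map x close to y_true, and a near-zero loss
```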

These are graphical models that represent probabilistic relationships among variables. An example is using a Bayesian network to predict the likelihood of a disease based on various symptoms and risk factors.

 Keywords:   Belief Networks, Inference, Probabilistic Graphical Models, Probabilistic Models

Benchmarking in AI involves comparing the performance of algorithms on standardized datasets and tasks. For example, the ImageNet challenge benchmarks the accuracy of image recognition algorithms.

 Keywords:   Comparison, Metrics, Performance Evaluation, Standards

In generative AI, bias can lead to the generation of data that reflects stereotypes or inaccuracies. For example, a text generator trained on biased historical texts might produce discriminatory content.

 Keywords:   Data Bias, Prejudice, Representation Bias, Skew

A bot is a software application that performs automated tasks. For example, a web crawler that indexes pages for a search engine is a type of bot.

 Keywords:   Automated Programs, Automated Software, Chatbot, Conversational AI, Virtual Assistant

Capsule networks handle hierarchical relationships between features. They aim to improve image recognition and object detection by capturing spatial hierarchies. For example, in object recognition, capsule networks can recognize complex objects by considering the arrangement of their parts (e.g., wheels, body, and roof in a car).

 Keywords:  Dynamic Routing, Hierarchical Representation, Spatial Relationships

A chatbot is a software application designed to simulate conversation with human users. For example, a customer service chatbot on a company website that answers common questions.

 Keywords:   Customer Service, Dialogue System, Messaging AI, NLP

ChatGPT is a variant of the GPT model optimized for conversational responses. For example, ChatGPT can be used to power an interactive chatbot that provides human-like responses to user inquiries.

 Keywords:   Dialogue, Language Model, OpenAI, Text Generation,

In machine learning, a checkpoint is a saved state of a model during training. For example, saving a model’s weights after each epoch so training can be resumed without starting from scratch.

 Keywords:   Iteration, Model Savepoint, Model State, Training Progress
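
A minimal checkpointing sketch, assuming PyTorch is available and that `model` and `optimizer` are defined elsewhere in a training script:

```python
# Saving and restoring a training checkpoint with PyTorch.
import torch

def save_checkpoint(model, optimizer, epoch, path="checkpoint.pt"):
    torch.save({
        "epoch": epoch,
        "model_state": model.state_dict(),
        "optimizer_state": optimizer.state_dict(),
    }, path)

def load_checkpoint(model, optimizer, path="checkpoint.pt"):
    state = torch.load(path)
    model.load_state_dict(state["model_state"])
    optimizer.load_state_dict(state["optimizer_state"])
    return state["epoch"]  # resume training from the saved epoch
```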

Cognitive architectures model human cognition and behavior. They are used to create intelligent agents capable of reasoning, learning, and problem-solving. For example, a cognitive architecture could simulate how a chess player thinks, making decisions based on patterns and strategies learned from previous games.

 Keywords:  Human-like Reasoning, Knowledge Representation, Decision Making

Coherence refers to the logical and consistent flow of information in AI-generated content. For example, a coherent AI-generated story maintains a consistent plot and character development throughout.

 Keywords:   Consistency, Logical Flow, NLP, Text Quality,

Commercial software is proprietary and sold for profit, while open-source software’s source code is freely available. For example, Microsoft Office is commercial, whereas LibreOffice is open source.

 Keywords:   Accessibility, Free Software, Licensing, Proprietary, Proprietary Software

In AI, completions refer to the output produced by a model in response to an input. For example, when you type a sentence in an email, Gmail’s Smart Compose feature provides completions to finish your sentences.

 Keywords:   Autocomplete, Language Model, Output Generation, Text Prediction,

This is when an AI model generates data based on certain conditions. For example, a model might generate different styles of music based on the genre specified by the user.

 Keywords:   Context-based Output, Controlled Output, Customization, Personalization

This AI engages in dialogue with users in a natural way. For example, Google’s Duplex can make phone calls to book appointments or reservations in a conversational manner.

 Keywords:   Chatbots, Dialogue Systems, Human-like Interaction, NLP,

Data augmentation involves creating new training data from existing data through various transformations. For example, rotating or cropping images to increase the size of a dataset for image recognition tasks.

 Keywords:   Data Variety, Dataset Enrichment, Overfitting Prevention, Synthetic Data
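
A minimal sketch of simple augmentations using NumPy array operations; a production pipeline would typically use a dedicated library:

```python
# Creating extra training examples from one image with basic transformations.
import numpy as np

def augment(image):
    """Return simple transformed copies of a (H, W) or (H, W, C) image array."""
    return [
        np.fliplr(image),   # horizontal flip
        np.flipud(image),   # vertical flip
        np.rot90(image),    # 90-degree rotation
    ]

image = np.arange(16).reshape(4, 4)   # stand-in for a small image
augmented = augment(image)
print(len(augmented))  # 3 extra training examples derived from one image
```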

Data privacy concerns the proper handling and protection of personal data. For example, anonymizing user data before using it to train an AI model to protect individual privacy.

 Keywords:   Confidentiality, Data Protection, GDPR, Security

DBNs are generative models with multiple layers of hidden variables. For example, they can be used for feature extraction in image recognition tasks.

 Keywords:   Generative Models, Pre-training, Stacked RBMs, Unsupervised Learning,

Deep learning is a subset of machine learning involving neural networks with many layers. For example, deep learning is used in voice recognition software like Apple’s Siri.

 Keywords:   AI, Feature Learning, Hierarchical Learning, Neural Networks

Deepfakes are synthetic media where a person’s likeness is replaced with someone else’s. For example, creating a video where a celebrity appears to say things they never actually said.

 Keywords:   Fake Videos, GANs, Misinformation, Synthetic Media,

A diffusion model generates data by starting with noise and gradually shaping it into a coherent structure. For example, generating a realistic image of a cat from random pixel patterns.

 Keywords:   Denoising, Generative Process, Probabilistic Model, Stochastic,

This refers to AI systems that process data locally on a device, such as a smartphone or an IoT device, rather than relying on cloud-based services. This can lead to faster response times and improved privacy. For example, a smart camera using Edge AI can analyze video footage in real-time to detect intruders without sending data to the cloud.

 Keywords:   IoT, Low-latency, On-device AI, Real-time Processing,

In machine learning, an embedding is a low-dimensional space into which high-dimensional vectors can be translated. This is often used in natural language processing to represent words or phrases. For instance, word embeddings can group similar words close together in the vector space, facilitating tasks like semantic analysis.

 Keywords:   Dimensionality Reduction, Feature Space, Vector Representation, Word Embeddings,
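
A toy sketch of comparing word embeddings with cosine similarity; the three-dimensional vectors below are invented for illustration, not taken from a trained model:

```python
# Measuring how "close" word embeddings are in vector space.
import numpy as np

embeddings = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.75, 0.70, 0.12]),
    "apple": np.array([0.10, 0.20, 0.90]),
}

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # lower: unrelated words
```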

This architecture is commonly used in sequence-to-sequence tasks, where the encoder processes the input sequence and the decoder generates the output sequence. A practical example is a machine translation system where the encoder reads a sentence in English, and the decoder outputs the translation in French.

 Keywords:   Attention Mechanism, RNNs, Sequence-to-Sequence, Translation

These optimization techniques mimic natural evolution. They evolve a population of candidate solutions to find optimal answers for complex problems. For example, in designing antennas, evolutionary algorithms explore different shapes and configurations to maximize signal reception.

 Keywords:  Genetic Programming, Fitness Function, Population Dynamics

These terms refer to the ability to understand how an AI model makes decisions. Explainable AI is crucial in sensitive areas like healthcare or finance, where understanding the model’s reasoning can be as important as the decision itself. For example, a medical diagnosis AI should provide explanations for its choices to be trusted by doctors and patients.

 Keywords:   Model Transparency, Trustworthy AI, Understandable AI, XAI

This is the ability of a model to learn from a very limited amount of data. An example is a language model that can understand new commands after being shown just a few examples, which is particularly useful when data is scarce or expensive to collect.

 Keywords:   Meta-learning, Minimal Data Training, Quick Adaptation, Transfer Learning,

Fine-tuning involves adjusting a pre-trained model to perform a specific task. For example, a model trained on general English text could be fine-tuned on legal documents to perform better legal document analysis.

 Keywords:   Model Adjustment, Personalization, Retraining, Specialization,

These are large, pre-trained models that serve as a starting point for various AI tasks. For example, BERT is a foundation model that can be used for a range of natural language processing tasks, from sentiment analysis to question answering.

 Keywords:   Baseline Models, Large-scale Models, Pre-trained Models, Transfer Learning

This term describes the most advanced AI research and applications, pushing the limits of what’s possible. An example could be AI systems that can create art or music that is indistinguishable from that created by humans.

 Keywords:   Advanced AI, Cutting-edge AI, Innovation, Next-gen Technologies,

Fuzzy logic is used in systems that require a form of decision-making that resembles human reasoning, dealing with reasoning that is approximate rather than fixed and exact. For instance, a fuzzy logic-based thermostat might decide on the heating level based on “slightly cold” or “very hot” instead of specific temperatures.

 Keywords:   Approximate Reasoning, Control Systems, Fuzzy Sets, Uncertainty,

Generalization is the ability of an AI model to perform well on new, unseen data. For example, a facial recognition system that can accurately identify faces it has never seen before has good generalization.

 Keywords:   Learning Transfer, Model Robustness, Overfitting Avoidance, Universal Application,

This type of AI creates new content. For example, DeepArt.io uses generative AI to turn photos into artworks in the style of famous painters.

 Keywords:   Content Creation, Creative AI, GANs, Synthetic Data

These models generate new data similar to the training data. For instance, the This Person Does Not Exist website uses a GAN to create images of people who don’t exist.

 Keywords:   GANs, Probabilistic Models, Unsupervised Learning, VAEs,

GPT models are designed to generate text. For example, OpenAI’s GPT-3 can write creative fiction, answer questions, and even generate code based on prompts it receives.

 Keywords:   Autoregressive, Language Model, OpenAI, Text Generation,

In GANs, the generator creates new data. For example, a generator could create new images of cats that look real but do not correspond to any real cat.

 Keywords:   Creativity, Data Production, GAN Component, Synthetic Output,

GitHub is a platform for hosting and collaborating on code. An example use case is a team of developers working together on an open-source project, using GitHub for version control and issue tracking.

 Keywords:   Code Repository, Collaboration, Open Source, Version Control,

In AI, hallucination refers to generating incorrect or nonsensical information. For example, a language model might “hallucinate” facts in a news article generation task, creating plausible-sounding but false information.

 Keywords:   AI Error, False Information, Inaccuracy, Misleading Output,

These models process data at multiple levels of abstraction. For example, in natural language processing, a hierarchical model might first analyze words, then sentences, and finally entire paragraphs to understand the meaning of a text.

 Keywords:   Complexity, Deep Learning, Layered Learning, Structured Data,

This is the process of tuning hyperparameters (e.g., learning rates, batch sizes) to improve model performance. For example, adjusting the learning rate in a neural network to find the balance between fast convergence and avoiding overshooting the optimal solution.

 Keywords:  Grid Search, Random Search, Bayesian Optimization
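
A minimal sketch of a grid search using scikit-learn’s GridSearchCV: several values of one hyperparameter are tried and the best cross-validated score is kept. The dataset and hyperparameter grid here are only illustrative:

```python
# Grid search over the regularization strength C of a logistic-regression model.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # candidate hyperparameter values
    cv=5,                                      # 5-fold cross-validation
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)     # best value of C and its score
```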

Inference is the process of using a trained model to make predictions. For example, after training a model to recognize spam emails, inference would be the model scanning incoming emails and flagging them as spam or not.

 Keywords:   AI Reasoning, Decision-making, Model Deployment, Prediction,

Jailbreaking is the removal or bypassing of restrictions imposed by a manufacturer or developer. For example, jailbreaking an iPhone allows users to install apps not available on the Apple App Store; in the context of AI, jailbreaking refers to crafting prompts that get a model to bypass its built-in safety restrictions.

 Keywords:   Customization, Model Modification, Restrictions Bypass, Unofficial Access,

This is a measure that gives the probability of two or more events happening at the same time. For example, in a card game, the joint probability distribution could be used to calculate the likelihood of drawing two specific cards in a row.

 Keywords:   Multivariate Analysis, Probabilistic Models, Statistical Relationships,
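
A worked example of the card-drawing case, assuming a standard 52-card deck and drawing without replacement:

```python
# Joint probability of drawing two aces in a row from a 52-card deck.
p_first_ace = 4 / 52                 # 4 aces among 52 cards
p_second_ace_given_first = 3 / 51    # 3 aces left among 51 remaining cards
p_both_aces = p_first_ace * p_second_ace_given_first
print(p_both_aces)                   # about 0.0045 (1 in 221)
```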

This is the point in time until which an AI model has been trained. For example, a model trained on news articles up to December 31, 2021, would not have knowledge of events occurring in 2022.

 Keywords:   Data Update, Information Limit, Model Training, Recency,

This is a measure of how one probability distribution differs from a second, reference probability distribution. For example, it can be used in machine learning to measure how well a model’s predicted probabilities match the actual distribution of the data.

 Keywords:   Distribution Similarity, Information Theory, Model Fitting, Relative Entropy,
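
A minimal sketch of computing the divergence for two small discrete distributions; the probabilities are invented for illustration:

```python
# Kullback-Leibler divergence KL(P || Q) for discrete distributions, with NumPy.
import numpy as np

def kl_divergence(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    # Assumes p and q are strictly positive and each sums to 1.
    return float(np.sum(p * np.log(p / q)))

p = [0.5, 0.3, 0.2]   # "true" distribution of the data
q = [0.4, 0.4, 0.2]   # model's predicted distribution
print(kl_divergence(p, q))  # small positive number; 0 only when p and q match exactly
```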

These are neural networks trained to predict the next word in a sequence of text. Examples include autocomplete suggestions in search engines and predicting the next word in a sentence while typing.

 Keywords:  Natural Language Understanding, Text Generation, Probabilistic Prediction

LVMs are designed to understand and process visual data at a large scale. For example, Google’s Vision AI uses LVMs to analyze images and videos to detect objects, faces, and even emotions.

 Keywords:   Computer Vision, Image Processing, Large-scale Models, Visual Understanding,

Latent space refers to the representation of compressed data in a lower-dimensional space. For example, in a variational autoencoder (VAE), the latent space captures the essence of the input data, which can then be used to generate new data that has similar properties.

 Keywords:   Compressed Knowledge, Feature Representation, Hidden Variables, Model Internals,

Large Language Models are trained on vast amounts of text data and can perform various language tasks. For example, GPT-3 is an LLM that can write essays, translate languages, and even generate computer code.

 Keywords:   Advanced NLP, AI Writing, Large Language Models, Text Understanding,

ML is a subset of AI that enables machines to improve at tasks through experience. For example, Netflix uses ML to recommend movies and TV shows based on your viewing history.

 Keywords:   AI, Algorithms, Data Analysis, Predictive Modeling,

MCMC is a method for sampling from probability distributions. For example, it can be used in Bayesian statistics to estimate the posterior distribution of model parameters.

 Keywords:   Bayesian Analysis, Sampling, Statistical Inference, Stochastic Processes,

Meta-learning trains models to learn how to learn quickly, which is useful for adapting to new tasks with limited data. For example, a meta-learner can quickly adapt to recognize new types of animals from minimal labeled examples.

 Keywords:  Learning to Learn, Task Generalization, Few-shot Learning

This is an ensemble technique where multiple models, each an “expert” in a different area, are combined. For example, in a recommendation system, one expert might specialize in new users while another focuses on long-term preferences.

 Keywords:   Decision Routing, Ensemble Learning, Model Complexity, Specialized Sub-models

In machine learning, a model is the result of the training process and consists of the algorithm and learned parameters. For example, a spam filter is a model that has learned to classify emails as spam or not spam.

 Keywords:   AI System, Algorithm, Machine Learning, Neural Network,

This refers to deploying machine learning models in production to serve predictions or recommendations to end users. For example, an e-commerce website might use a recommendation model to suggest personalized products to users.

 Keywords:  Deployment, Inference, API Integration

These models can process and relate information from different types of data. For example, Facebook’s AI can analyze both text and images in a post to understand its content better.

 Keywords:   Cross-modal Learning, Integrated Learning, Rich Data Interpretation, Sensory Fusion

NLG is the process of generating coherent and contextually relevant text. For example, a weather bot that generates a weather report based on data it receives.

 Keywords:   Data Narration, Language Model, Storytelling, Text Creation

NLP is the field of AI that focuses on the interaction between computers and humans through language. For example, Google Translate uses NLP to translate text from one language to another.

 Keywords:   AI Communication, Linguistics, Speech Recognition, Text Analysis

These are algorithms modeled after the human brain, designed to recognize patterns. For example, neural networks are used in self-driving cars to interpret sensor data and make driving decisions.

Combining neural networks and evolutionary algorithms to evolve architectures or optimize weights. For example, evolving neural network architectures for game-playing agents, such as the structure of a chess-playing neural network.

 Keywords:  Genetic Algorithms, Neural Topology, Adaptive Networks

Parameters are the parts of the model that are learned from the data. For example, in a neural network, the weights and biases are parameters that are adjusted during training.

 Keywords:   Configuration, Hyperparameters, Model Weights, Tuning,

A PDF describes the likelihood of a random variable taking on a given value. For example, the bell curve is a PDF that describes the distribution of scores on a test.

 Keywords:   Analysis, Likelihood, Random Variables, Statistical Distribution,
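
A minimal sketch of the normal (bell-curve) density, evaluated for a hypothetical test score:

```python
# Probability density function of a normal distribution.
import math

def normal_pdf(x, mean=0.0, std=1.0):
    coeff = 1.0 / (std * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((x - mean) ** 2) / (2 * std ** 2))

# Density at a score of 75 when scores have mean 70 and standard deviation 10
print(normal_pdf(75, mean=70, std=10))  # about 0.035
```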

A prompt is an input given to an AI model to generate a specific output. For example, when you ask a virtual assistant to “play the latest news,” the spoken request is the prompt.

 Keywords:   Command, Input Query, Instruction, Language Model Activation

This is the practice of designing prompts to effectively communicate with AI models. For example, carefully crafting a prompt can improve the relevance and quality of the responses from a chatbot.

 Keywords:   AI Interaction, Command Design, Effective Queries, Input Optimization,
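
A small illustration of the idea: a hypothetical template that wraps a student’s raw question with role, length, and format instructions before it is sent to a model:

```python
# A simple prompt template; the wording is only an example of prompt design.
def build_prompt(question: str) -> str:
    return (
        "You are a patient tutor for first-year students.\n"
        "Answer the question below in no more than three sentences, "
        "and end with one follow-up question to check understanding.\n\n"
        f"Question: {question}"
    )

print(build_prompt("What is a neural network?"))
```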

A public model is an AI model that is available for anyone to use. For example, OpenAI offers public access to GPT-3 through an API, allowing developers to build applications that leverage its language generation capabilities.

 Keywords:   Collaborative Development, Community Model, Open Access, Shared AI,

Quantization involves reducing the precision of the inputs, weights, and activations of models to accelerate computation and reduce memory usage. For example, quantizing a neural network might involve reducing the precision of its weights from 32-bit floating-point numbers to 8-bit integers.

 Keywords:   Bit Reduction, Efficient Computing, Model Compression, Performance Optimization,
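
A toy sketch of the round trip from 32-bit floats to 8-bit integers and back, using a single scale factor; real quantization schemes are more elaborate:

```python
# Post-training quantization of a few weights to int8 and back, with NumPy.
import numpy as np

weights = np.array([0.12, -0.5, 0.33, 0.9], dtype=np.float32)

scale = np.abs(weights).max() / 127                       # map largest weight to +/-127
quantized = np.round(weights / scale).astype(np.int8)     # compact 8-bit representation
dequantized = quantized.astype(np.float32) * scale        # approximate reconstruction

print(quantized)     # small integers, e.g. [ 17 -71  47 127]
print(dequantized)   # close to, but not exactly, the original weights
```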

These are AI models that use principles of quantum computing to generate data. For example, a quantum generative model might simulate the properties of new materials at the atomic level.

 Keywords:   Advanced Algorithms, Entanglement, Quantum Computing, Superposition,

Reasoning is the process by which AI systems draw inferences appropriate to the situation. For example, an AI playing chess uses reasoning to decide its moves.

 Keywords:   AI Cognition, Decision-making, Logical Thinking, Problem Solving,

RNNs are designed to handle sequential data, such as time series or language. They have the unique feature of using their internal state (memory) to process sequences of inputs. This makes them ideal for tasks like speech recognition or language modeling. For example, an RNN could be used to predict the next word in a sentence.

 Keywords:   Feedback Loops, Memory, Sequential Data, Time Series,

RL involves training models to make a sequence of decisions by rewarding them for good decisions and penalizing them for bad ones. It’s used in various applications, from playing video games to robotic hands learning to manipulate objects. For instance, AlphaGo, the program that defeated the world champion in Go, used RL.

 Keywords:   Agent Training, AI Games, Decision-making, Reward-based Learning,

RAG combines the power of language models with external knowledge sources to generate more informed and accurate outputs. For example, when asked a question, RAG can retrieve relevant information from a database before generating an answer, leading to responses that are both coherent and contextually rich.

 Keywords:   Enhanced Generation, Information Retrieval, Knowledge Integration, NLP,

Reinforcement Learning from Human Feedback is a training strategy where an AI model is fine-tuned based on human feedback, improving its performance on tasks that are difficult to quantify. For example, OpenAI used RLHF to train language models to follow instructions better and produce more helpful responses.

Keywords: Behavioral Shaping, Interactive Learning, Reinforcement Learning from Human Feedback, User-guided AI,

This refers to an AI model’s ability to cope with errors, changes, or uncertainties in its input data. A robust facial recognition system, for example, can still identify individuals accurately even when the image is partially obscured or taken in different lighting conditions.

 Keywords:   Adversarial Defense, Error Tolerance, Model Stability, Reliability,

Sampling is a statistical method used to select a representative subset from a larger population for analysis. In AI, sampling methods are used in various algorithms to estimate the properties of datasets. For example, Monte Carlo methods use sampling to approximate the value of complex integrals.

 Keywords:   Data Selection, Distribution, Randomness, Statistical Inference,
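
A classic sampling illustration: estimating the value of pi by drawing random points in a unit square and counting how many land inside the quarter circle:

```python
# Monte Carlo estimate of pi from random samples.
import random

def estimate_pi(num_samples=100_000):
    inside = 0
    for _ in range(num_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:      # point falls inside the quarter circle
            inside += 1
    return 4 * inside / num_samples   # ratio of areas, scaled by 4

print(estimate_pi())  # approximately 3.14
```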

This mechanism allows models, particularly in the transformer architecture, to focus on different parts of the input sequence when producing each element of the output sequence. It’s like reading a sentence and focusing on relevant words to understand the context. For example, in machine translation, self-attention helps the model focus on the relevant words in the source sentence when translating to the target language.

 Keywords:   Context Awareness, Focus Mechanism, Sequence Analysis, Transformer Models,

A semantic network is a graphical representation of knowledge that interconnects concepts through relationships. For example, a semantic network could represent the relationship between “doctor,” “hospital,” and “medicine,” helping AI systems understand the context and associations between these concepts.

 Keywords:   Cognitive Maps, Conceptual Graphs, Knowledge Representation, Meaning Connections,

This is the process of computationally determining whether a piece of writing is positive, negative, or neutral. For example, businesses use sentiment analysis on customer reviews to understand public opinion about their products or services.

 Keywords:   Emotion Detection, NLP, Opinion Mining, Text Evaluation

The technological singularity is a hypothetical point in the future when technological growth becomes uncontrollable and irreversible, potentially leading to unfathomable changes in human civilization. For example, the creation of an ASI (Artificial Super Intelligence) could trigger the singularity.

 Keywords:   AI Milestone, Future Prediction, Superintelligence, Technological Transformation,

This concept involves designing AI systems whose goals are aligned with human values, ensuring they act in ways that are beneficial to humanity. For example, ensuring an AI tasked with environmental protection prioritizes ecological balance and human well-being.

Keywords:   AI Ethics, Responsible Development, Safe AI, Value Alignment,

In supervised learning, models learn from labeled training data, allowing them to predict outcomes or categorize data. For example, a spam filter is trained with labeled emails to learn what constitutes spam versus non-spam.

 Keywords:   Classification, Labeled Data, Regression, Training
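
A minimal sketch of the spam-filter example using scikit-learn; the messages and labels below are invented for illustration:

```python
# Supervised learning: train a Naive Bayes spam classifier on labeled messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = ["win a free prize now", "meeting moved to 3pm",
            "claim your free reward", "lunch tomorrow?"]
labels = ["spam", "not spam", "spam", "not spam"]    # the labels make this supervised

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)               # turn text into word-count features

model = MultinomialNB()
model.fit(X, labels)                                 # learn from the labeled examples

new_message = vectorizer.transform(["free prize waiting"])
print(model.predict(new_message))                    # likely ['spam']
```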

In AI, this refers to the connections between artificial neurons in neural networks that simulate the connections between biological neurons in the human brain. These connections are crucial for learning and memory in AI systems.

 Keywords:   Brain-inspired AI, Learning Pathways, Network Dynamics, Neural Connections,

In generative models, temperature is a hyperparameter that controls the randomness of predictions. A low temperature makes the model more confident but less diverse, while a high temperature makes it more diverse but less reliable. For example, when generating text, a higher temperature might produce more creative and varied sentences.

 Keywords:   Creativity Tuning, Output Variation, Probability Scaling, Sampling Control,
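
A minimal sketch of temperature scaling: the same model scores (logits) converted to probabilities at a low and a high temperature:

```python
# How temperature reshapes a model's output distribution.
import numpy as np

def softmax_with_temperature(logits, temperature):
    scaled = np.asarray(logits, dtype=float) / temperature
    e = np.exp(scaled - scaled.max())
    return e / e.sum()

logits = [2.0, 1.0, 0.1]                      # scores for three candidate tokens
print(softmax_with_temperature(logits, 0.5))  # peaked: almost always the top token
print(softmax_with_temperature(logits, 1.5))  # flatter: more varied, less predictable
```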

TTS technologies convert written text into spoken words, while STT does the reverse. For example, TTS is used in voice assistants to read text aloud, and STT is used to transcribe spoken words into text.

 Keywords:   Accessibility, Audio Processing, Speech Recognition, Voice Generation,

In the context of language models, a token is the basic unit of data that the model processes, which could be a word, part of a word, or a character. For example, the sentence “Hello, world!” might be split into tokens like [“Hello”, “,”, “world”, “!”].

 Keywords:   Data Unit, Encoding, Language Model, NLP, Text Segment,
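
A toy tokenizer that splits text into words and punctuation; real language models use learned subword tokenizers (such as byte-pair encoding), so this is only a simplification:

```python
# Splitting a sentence into simple word and punctuation tokens.
import re

def simple_tokenize(text):
    return re.findall(r"\w+|[^\w\s]", text)   # words, or single non-space symbols

print(simple_tokenize("Hello, world!"))  # ['Hello', ',', 'world', '!']
```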

Topic modeling is a type of statistical model used to discover abstract topics within a collection of documents. For example, it can be used to identify common themes in a set of news articles.

 Keywords:   Document Clustering, Latent Topics, Subject Extraction, Text Mining

In AI, TOPS refers to Tera Operations Per Second, a measure of the computational power of processors, indicating how many trillion operations they can perform per second. This is relevant for evaluating the performance of hardware running AI algorithms.

 Keywords:   AI Performance, Computing Power, Hardware Metrics, Tera Operations Per Second,

Training is the process of teaching an AI model to make predictions or decisions. It involves showing the model examples of input-output pairs and adjusting the model’s parameters to minimize error. For example, training a neural network to recognize images of cats involves showing it many pictures of cats and not-cats.

 Keywords:   AI Education, Data Fitting, Learning Process, Model Development,

The transformer is a neural network architecture that uses self-attention to weigh the influence of different parts of the input data. It has been highly successful in NLP tasks. For example, the GPT-3 model by OpenAI is a transformer that can generate human-like text.

 Keywords:   Attention Mechanism, Model Architecture, NLP, Parallel Processing,

Tuning involves adjusting the hyperparameters of an AI model to improve its performance. For example, tuning the learning rate of a neural network might involve finding the rate that allows the network to learn quickly without overshooting the minimum error.

 Keywords:   Customization, Model Optimization, Parameter Adjustment, Performance Enhancement,

In unsupervised learning, models learn from data without labels. They try to find structure in the data, like clustering similar items together. For example, grouping customers by purchasing behavior without prior knowledge of the groups.

 Keywords:   Association, Clustering, Pattern Discovery, Self-organized Learning,
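
A minimal sketch of the customer-grouping example using k-means clustering from scikit-learn; the spending figures are invented for illustration:

```python
# Unsupervised learning: cluster customers by spending behaviour, with no labels.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [purchases per month, average order value]
customers = np.array([[2, 15], [3, 20], [2, 18],
                      [20, 150], [25, 160], [22, 140]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1] -- two spending-behaviour groups
```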

VAEs are generative models that use variational inference to create new data points similar to the input data. For example, a VAE could be used to generate new images of faces that resemble the faces in its training dataset.

 Keywords:   Data Reconstruction, Generative Networks, Latent Representation, Unsupervised Learning,

WGANs are a type of GAN that uses the Wasserstein distance to improve the stability and quality of the generated samples. For example, WGANs can generate more realistic images compared to traditional GANs.

 Keywords:   Improved GANs, Loss Function, Quality Generation, Stable Training,

Zero-shot learning refers to the ability of a model to correctly perform tasks it has not explicitly been trained to do. For example, a model trained to recognize various animals might also recognize an animal it has never seen before, like a platypus, based on its learned features of animals.

 Keywords:   Learning without Examples, Model Generalization, Novel Classes, Transfer Learning,

KPU Guidelines for Use of Generative AI