Technically exists to help you get better at your job by becoming more technically literate.
The guide to AI that you've been looking for. Click on any term for an in-depth explanation and FAQ (plus diagrams).
A context window is how much data an AI model can hold in memory at once.
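A toy sketch of the idea: before sending a prompt, you check whether it fits. Real models count tokens with their own tokenizer; here we approximate tokens as whitespace-separated words, purely for illustration.

```python
# Illustrative sketch: does a prompt fit in a model's context window?
# Assumption: we approximate "tokens" as whitespace-separated words;
# real tokenizers split text differently.

def fits_in_context(prompt: str, context_window: int) -> bool:
    """Return True if the (approximate) token count fits the window."""
    approx_tokens = len(prompt.split())
    return approx_tokens <= context_window

print(fits_in_context("Summarize this article for me", 8))   # True
print(fits_in_context("word " * 10_000, 8_192))              # False
```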
Embeddings are how AI models turn words, images, or other data into mathematical coordinates that computers can actually work with.
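Once words are coordinates, "similar meaning" becomes "pointing in a similar direction," which you can measure. A minimal sketch, using made-up 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions, learned during training):

```python
import math

# Toy "embeddings" -- the numbers here are invented for illustration.
embeddings = {
    "cat":    [0.9, 0.1, 0.0],
    "kitten": [0.85, 0.15, 0.05],
    "car":    [0.1, 0.9, 0.3],
}

def cosine_similarity(a, b):
    """Score near 1 when two vectors point in similar directions."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

print(cosine_similarity(embeddings["cat"], embeddings["kitten"]))  # near 1
print(cosine_similarity(embeddings["cat"], embeddings["car"]))     # much lower
```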
Instructional fine-tuning is how you turn a pre-trained AI model – knowledgeable, but useless – into a helpful assistant that actually answers your questions.
A token is the basic unit of a Large Language Model's vocabulary.
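The core idea of tokenization can be sketched in a few lines. Real LLMs use learned subword vocabularies (like byte-pair encoding); this toy version with a hypothetical four-entry vocabulary just shows the essential move: mapping pieces of text to integer IDs.

```python
# Hypothetical tiny vocabulary, for illustration only.
vocab = {"un": 0, "believ": 1, "able": 2, "!": 3}

def tokenize(text, vocab):
    """Greedily match the longest known piece at each position."""
    tokens = []
    i = 0
    while i < len(text):
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece, i):
                tokens.append(vocab[piece])
                i += len(piece)
                break
        else:
            i += 1  # skip characters not in the vocabulary
    return tokens

print(tokenize("unbelievable!", vocab))  # [0, 1, 2, 3]
```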
Fine-tuning is the process of taking a pre-trained AI model and specializing it for your specific use case.
A loss function is something you design that tells the model when its answers are right and when they're wrong.
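A minimal sketch of one common loss function, cross-entropy: the model assigns probabilities to possible answers, and the loss is small when the correct answer got high probability and large when it didn't.

```python
import math

def cross_entropy(predicted_probs, correct_index):
    """-log(probability the model gave the right answer)."""
    return -math.log(predicted_probs[correct_index])

# The model is confident in option 0, which happens to be correct:
confident_and_right = cross_entropy([0.9, 0.05, 0.05], correct_index=0)
# The model is confident in option 2, but option 0 was correct:
confident_and_wrong = cross_entropy([0.05, 0.05, 0.9], correct_index=0)

print(confident_and_right)  # small loss (~0.1): good prediction
print(confident_and_wrong)  # large loss (~3.0): bad prediction
```

Training nudges the model's parameters in whatever direction shrinks this number.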
Post-training turns a model from a knowledgeable blob that produces rambling answers into a helpful assistant.
Pre-training is the "undergrad degree" phase where the model builds its foundational knowledge and world model.
RLHF is the final training step that turns a knowledgeable but rambling AI into the helpful assistant you know and love.
Training is the process of creating an AI model and teaching it how to actually do something useful.
Training datasets are the examples you show an AI model so it can learn to recognize patterns and make predictions.
AI hallucination is when AI models confidently generate information that's completely made up or wrong.
Inference is a fancy term that just means using an ML model that has already been trained.
Prompt engineering is the art of talking to AI models in a way that gets you the results you actually want.
Retrieval Augmented Generation (RAG) is a way to make LLMs like GPT-4 more accurate and personalized to your specific data.
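The pattern itself is simple: find the most relevant piece of your data, then stuff it into the prompt before calling the LLM. A toy sketch, scoring documents by word overlap (real systems use embeddings and a vector database; the documents and question here are invented):

```python
# Hypothetical mini knowledge base, for illustration.
documents = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
]

def _words(text):
    return set(text.lower().replace("?", "").replace(".", "").split())

def retrieve(question, docs):
    """Return the document sharing the most words with the question."""
    return max(docs, key=lambda d: len(_words(question) & _words(d)))

def build_prompt(question, docs):
    """Prepend the retrieved context so the LLM can answer from it."""
    context = retrieve(question, docs)
    return f"Context: {context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("What is your refund policy?", documents))
```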
AI reasoning is how artificial intelligence systems solve problems, think through complex situations, and draw conclusions from available information.
Neural networks are the mathematical brains behind modern AI—think of them as simplified versions of how your actual brain processes information.
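The basic building block, a single neuron, fits in a few lines: multiply inputs by learned weights, add a bias, squash through an activation function. Stacking many of these in layers is what makes a neural network. The weights below are made up; training is the process that finds good ones.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs, passed through a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid squashes output to (0, 1)

# Invented weights and inputs, purely for illustration.
output = neuron(inputs=[0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
print(output)  # a value between 0 and 1
```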
Parameters are the learned "knowledge" stored inside AI models—the numerical values that determine how the model responds to inputs.
Transformers are the neural network architecture that powers modern AI like ChatGPT, revolutionizing how models process language and other sequential data.
GPUs are specialized chips that do thousands of simple calculations simultaneously, making them perfect for AI training.
70K+ product managers, marketers, bankers, and other -ers read Technically to understand software and work better with developers.