
Introduction to Foundation Models

  • Textbook
  • © 2025

Overview

  • Standard book on foundation models and generative AI, with a specific emphasis on AI risks in trust and safety
  • Gain insights into basic and advanced topics in foundation models, including LLMs and diffusion models
  • Uses mathematical expressions and algorithms, covering techniques such as optimization, transformers, and prompt engineering
  • 7676 Accesses

  • 1 Citation


Access this book

eBook GBP 47.99
Price includes VAT (United Kingdom)
  • Available as EPUB and PDF
  • Read on any device
  • Instant download
  • Own it forever
Hardcover Book GBP 59.99
Price includes VAT (United Kingdom)
  • Durable hardcover edition
  • Dispatched in 3 to 5 business days
  • Free shipping worldwide


About this book

This book offers an extensive exploration of foundation models, guiding readers through the essential concepts and advanced topics that define this rapidly evolving research area. Designed for those seeking to deepen their understanding and contribute to the development of safer and more trustworthy AI technologies, the book is divided into three parts covering the fundamentals, advanced topics in foundation models, and safety and trust in foundation models:

  • Part I introduces the core principles of foundation models and generative AI, presents the technical background of neural networks, delves into the learning and generalization of transformers, and concludes with the intricacies of transformers and in-context learning.

  • Part II introduces automated visual prompting techniques, privacy-preserving prompting of LLMs, and memory-efficient fine-tuning methods, and shows how LLMs can be reprogrammed for time-series machine learning tasks. It also explores how LLMs can be reused for speech tasks and how synthetic datasets can be used to benchmark foundation models, and elucidates machine unlearning for foundation models.

  • Part III provides a comprehensive evaluation of the trustworthiness of LLMs, introduces jailbreak attacks and defenses for LLMs, presents safety risks in fine-tuning LLMs, introduces watermarking techniques for LLMs, presents robust detection of AI-generated text, elucidates backdoor risks in diffusion models, and presents red-teaming methods for diffusion models.

Mathematical notations are clearly defined and explained throughout, making this book an invaluable resource for both newcomers and seasoned researchers in the field.


Table of contents (18 chapters)

  1. Fundamentals of Foundation Models

  2. Advanced Topics in Foundation Models

  3. Trust and Safety in Foundation Models

Authors and Affiliations

  • IBM Research, Yorktown Heights, USA

    Pin-Yu Chen

  • College of Engineering, Michigan State University, East Lansing, USA

    Sijia Liu

About the authors

Dr. Pin-Yu Chen is a principal research scientist at IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. in electrical engineering and computer science from the University of Michigan, Ann Arbor, USA, in 2016. Dr. Chen’s recent research focuses on adversarial machine learning of neural networks for robustness and safety. His long-term research vision is to build trustworthy machine learning systems. He received the IJCAI Computers and Thought Award in 2023. He also received the IEEE GLOBECOM 2010 GOLD Best Paper Award and UAI 2022 Best Paper Runner-Up Award. At IBM Research, he received several research accomplishment awards, including IBM Master Inventor, IBM Corporate Technical Award, and IBM Pat Goldberg Memorial Best Paper. He is a co-author of the book “Adversarial Robustness for Machine Learning”. He is currently on the editorial board of Transactions on Machine Learning Research and IEEE Transactions on Signal Processing. He is also an Area Chair of several AI and machine learning conferences, and a Distinguished Lecturer of ACM.

Dr. Sijia Liu is currently an Assistant Professor in the CSE department at Michigan State University and an Affiliated Professor at IBM Research. His primary research interests include trustworthy and scalable machine learning (ML), with a recent focus on machine unlearning. He has been recognized with several prestigious awards, including the NSF CAREER Award in 2024, the Best Paper Runner-Up Award at the Conference on Uncertainty in Artificial Intelligence (UAI) in 2022, and the Best Student Paper Award at the 42nd IEEE ICASSP in 2017. He has published over 70 papers in top ML/AI conferences according to his CSRankings record, and has co-organized several tutorials and workshops on trustworthy and scalable ML.

Accessibility Information

PDF accessibility summary

This PDF does not fully comply with PDF/UA standards, but does feature limited screen reader support, described non-text content (images, graphs), bookmarks for easy navigation and searchable, selectable text. Users of assistive technologies may experience difficulty navigating or interpreting content in this document. We recognize the importance of accessibility, and we welcome queries about accessibility for any of our products. If you have a question or an access need, please get in touch with us at [email protected].

EPUB accessibility summary

This ebook is designed with accessibility in mind, aiming to meet the ePub Accessibility 1.0 AA and WCAG 2.0 Level AA standards. Its features include described images and other non-text content, screen-reader-friendly navigation, and accessible math. Math is represented as MathML, LaTeX, or images; if math is represented as an image, alt text might not be present. We recognize the importance of accessibility, and we welcome queries about accessibility for any of our products. If you have a question or an access need, please get in touch with us at [email protected].

Bibliographic Information

  • Book Title: Introduction to Foundation Models

  • Authors: Pin-Yu Chen, Sijia Liu

  • DOI: https://doi.org/10.1007/978-3-031-76770-8

  • Publisher: Springer Cham

  • eBook Packages: Artificial Intelligence (R0)

  • Copyright Information: The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2025

  • Hardcover ISBN: 978-3-031-76769-2 (Published: 13 June 2025)

  • Softcover ISBN: 978-3-031-76772-2 (Due: 27 June 2026)

  • eBook ISBN: 978-3-031-76770-8 (Published: 12 June 2025)

  • Edition Number: 1

  • Number of Pages: XIII, 310

  • Number of Illustrations: 55 b/w illustrations

  • Topics: Artificial Intelligence, Machine Learning, Natural Language Processing (NLP)
