A small, in-house sensitization project about agentic (autonomous) AI: what it is, how it differs from "classic" LLM chatbots, and why it matters—especially from a security and organizational risk perspective in research environments.
Disclaimer (WIP)
This document/project is work in progress (WIP) and the information provided is not complete and may be incorrect or outdated.
It does not constitute legal advice.
It is not a guidance and not a security best practice.
It is intended as an impulse for discussion and internal awareness building.
- Understanding the Layers of the LLM Ecosystem — A comprehensive guide that explains the four-layer model of the LLM ecosystem (Foundation Models, APIs, Agents, Applications) with concrete examples and decision frameworks. Provides foundational knowledge of the LLM landscape that underpins agentic AI security concepts.
- A short lecture-style write-up introducing autonomous agentic AI concepts
- An investigation-oriented narrative focused on security concerns (e.g., prompt injection, data exfiltration, plugin/integration risks)
- Supporting notes/materials used to communicate these ideas in a compact format
Agentic AI systems can plan and act via tools and integrations (messengers, APIs, files), which expands the attack surface compared to answer-only chatbots. This project aims to help readers recognize common risks and build practical awareness for safer adoption.
- Researchers, staff in organizations (e.g., universities) evaluating or encountering agentic AI tools
- Anyone needing a quick, non-alarmist introduction to agentic AI security implications
Read the lecture notes in this folder from top to bottom. The material is designed to be digestible in a short session and to spark internal discussion about safe usage and mitigations.
Autonomous Agentic AI in Core Facilities: Concepts and Security (Lecture Notes) © 2026 by Sven Fillinger is licensed under Creative Commons Attribution 4.0 International.