Lecture content
Advanced Threat Hunting: Staying One Step Ahead of the Adversary
As cybersecurity defenders, our job is not just to react but to stay ahead of attackers. Yet, adversaries continue to evolve, refining their techniques to bypass defenses and infiltrate critical systems. To effectively hunt threats, we must understand how these attackers think and operate.
This session will explore real-world techniques used by malicious actors to breach security controls. We will examine how stolen data, such as compromised session tokens and credentials, is weaponized to gain unauthorized access to systems and supply chains. We will uncover how attackers bypass restricted registration requirements by exploiting gaps in verification and automation processes, and we will analyze how logic flaws in authentication mechanisms allow threat actors to circumvent security controls, gaining entry where they shouldn't. And much more.
By breaking down these attack strategies, you will learn how to identify, track, and neutralize emerging threats before they cause damage. This session will equip you with practical threat-hunting insights, showing you how to turn an attacker’s own methods against them before they strike.
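To illustrate the kind of hunting this points toward, here is a minimal Python sketch (not taken from the session) that flags session tokens observed from more than one source IP or user agent, a common symptom of a stolen token being replayed; the event schema is a hypothetical simplification.

    from collections import defaultdict

    def find_suspicious_tokens(auth_events):
        # auth_events: iterable of {"token": ..., "ip": ..., "ua": ...} (hypothetical schema)
        seen = defaultdict(lambda: {"ips": set(), "agents": set()})
        for e in auth_events:
            seen[e["token"]]["ips"].add(e["ip"])
            seen[e["token"]]["agents"].add(e["ua"])
        return [t for t, s in seen.items() if len(s["ips"]) > 1 or len(s["agents"]) > 1]

    events = [
        {"token": "abc", "ip": "198.51.100.7", "ua": "Firefox"},
        {"token": "abc", "ip": "203.0.113.9",  "ua": "curl/8.5"},  # same token, new IP and UA
    ]
    print(find_suspicious_tokens(events))  # ['abc']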
Adversary Emulation: Simulating APTs, Ransomware, and Emerging Threats
While threat reports document advanced persistent threat (APT) activity, most red team simulations fail to capture the conditions, tool chains, and environmental assumptions adversaries relied upon—creating defensive gaps. This presentation demonstrates how to extract operational intent from cyber threat intelligence and translate it into authentic, repeatable simulations using frameworks like Atomic Red Team and CALDERA.
Using APT29 as a case study, we’ll walk through building actor-specific profiles and implementing tactics that reflect actual adversary constraints. Attendees will receive a threat actor profile template and framework configurations ready to customize for their specific threat landscapes.
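As a rough idea of what an actor-specific profile can look like in code (the field names and the CALDERA profile name below are illustrative assumptions, not the template distributed in the session; the ATT&CK technique IDs are examples from public APT29 reporting):

    APT29_PROFILE = {
        "actor": "APT29",
        "objectives": ["credential access", "long-term collection"],
        "initial_access": ["T1566.002"],   # spearphishing link
        "execution": ["T1059.001"],        # PowerShell
        "constraints": {
            "operates_low_and_slow": True,
            "avoids_noisy_scanning": True,
        },
        "emulation_backends": {
            "atomic_red_team": ["T1059.001"],
            "caldera_adversary_profile": "apt29-custom",  # hypothetical profile name
        },
    }

    def techniques_to_emulate(profile):
        # flatten the ATT&CK technique IDs an emulation run should cover
        return sorted(set(profile["initial_access"] + profile["execution"]))

    print(techniques_to_emulate(APT29_PROFILE))  # ['T1059.001', 'T1566.002']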
AI BYPASS: How to gain physical access in 15 seconds
Despite the widespread adoption of AI-based security solutions, physical attacks on network infrastructure remain fast, effective, and dangerously underestimated.
The speaker will deliver a live demonstration showing how network security can be bypassed in as little as 15 seconds using a simple hardware tool. The presentation focuses on Layer 2 and Layer 3 attacks, revealing how physical access combined with low-level network exploitation can lead to immediate unauthorized entry.
The session will highlight why AI-driven security systems often fail to detect L2/L3 attacks, and will discuss practical ways to reduce the risk of physical breaches through improved monitoring, segmentation, and defensive controls.
By combining real-time exploitation with defensive insights, this talk demonstrates why physical access and low-level network attacks still play a critical role in modern cybersecurity, even in the age of AI.
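In the defensive spirit of the talk, a minimal monitoring sketch (assuming the scapy library is installed and the script runs with packet-capture privileges) that alerts when an IP address suddenly claims a new MAC address, a common symptom of Layer 2 man-in-the-middle activity:

    from scapy.all import sniff, ARP

    ip_to_mac = {}

    def check_arp(pkt):
        if pkt.haslayer(ARP) and pkt[ARP].op == 2:  # op 2 = "is-at" (ARP reply)
            ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
            if ip in ip_to_mac and ip_to_mac[ip] != mac:
                print(f"ALERT: {ip} changed from {ip_to_mac[ip]} to {mac}")
            ip_to_mac[ip] = mac

    sniff(filter="arp", prn=check_arp, store=False)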
Artificial Intelligence for Hacking
* Automated Vulnerability Scanning and Exploitation: AI detects vulnerabilities and autonomously selects or creates the appropriate exploit to validate them.
* Self-Updating Exploit Arsenal: AI retrieves, adapts, and standardizes public exploits from online sources without human input, maintaining an up-to-date library.
* Fuzzing and Injection Testing: AI performs intelligent fuzzing and injection (e.g., SQLi, XSS) to uncover and verify application vulnerabilities.
* Exploit Reprogramming: AI modifies and sanitizes exploit scripts to ensure safe execution and compatibility with the platform.
* Multi-Agent Orchestration: Multiple AI agents collaborate to coordinate scanning, exploitation, and refinement loops for more effective penetration tests.
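A minimal sketch of the orchestration loop described in the last bullet, with the agents reduced to plain callables (an assumption made purely for illustration):

    def orchestrate(target, scanner, exploiter, refiner, max_rounds=3):
        # scanner proposes findings, exploiter tries to validate them,
        # refiner feeds unresolved items back for another pass
        confirmed, open_items = [], scanner(target)
        for _ in range(max_rounds):
            results = [(item, exploiter(target, item)) for item in open_items]
            confirmed += [item for item, ok in results if ok]
            open_items = refiner([item for item, ok in results if not ok])
            if not open_items:
                break
        return confirmed

    # toy stand-ins so the sketch runs end to end
    findings = orchestrate(
        target="10.0.0.5",
        scanner=lambda t: ["weak-ssh-config", "outdated-cms"],
        exploiter=lambda t, item: item == "outdated-cms",
        refiner=lambda items: [],  # drop anything not validated on the first pass
    )
    print(findings)  # ['outdated-cms']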
Attack of the Clones: 80+ AI Agents Walk Into a SOC
What happens when you stop waiting for the next "AI SOC revolution" and just build your own clone army instead?
This talk tells the story of how one small SecOps team turned years of internal playbooks, tribal knowledge, and automation scripts into an Agentic Threat Management Framework — a swarm of 80+ AI agents that think, correlate, and report like seasoned analysts (with the added benefit of no coffee breaks).
We'll dive into the why behind building an in-house AI SOC — the frustration with black-box "AI security" hype, the need for transparency, and the joy of making something that actually works in a real, human-led environment, with all its inherent flaws and inconsistencies. We will share our own hard-won lessons:
- how to agentify your own security knowledge,
- orchestrate your agents on the battlefield,
- keep your AI explainable and traceable,
- and, most importantly, transform the human SOC analyst into an AI developer/prompt engineer.
By the end, you'll see how building your own AI SOC is about AI empowering humans and not the other way around.
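To ground the "explainable and traceable" point above, one way to structure agent output is a finding record that always carries its evidence and prompt/model metadata; the fields and names below are illustrative assumptions, not the framework from the talk.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AgentFinding:
        agent_name: str
        summary: str
        evidence_refs: list      # e.g. SIEM event IDs or query hashes backing the claim
        model: str               # which model/version produced the reasoning
        prompt_id: str           # versioned prompt template that was used
        confidence: float
        created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    finding = AgentFinding(
        agent_name="correlation-agent-12",               # hypothetical agent name
        summary="Failed Kerberos pre-auths followed by a successful logon",
        evidence_refs=["siem:evt:4771:1042", "siem:evt:4624:1043"],  # hypothetical IDs
        model="example-llm-v1",
        prompt_id="triage/v3",
        confidence=0.72,
    )
    print(finding.created_at, finding.summary)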
CSRF attacks in modern Web applications
Cross-Site Request Forgery (CSRF) has long been a high-severity threat to web applications, enabling attackers to execute unauthorized actions on behalf of authenticated users. While traditional CSRF mitigation techniques, such as anti-CSRF tokens and SameSite cookies, have improved web security, different application architectures and new research from the community have introduced new challenges that can lead to overlooked vulnerabilities.
This talk explores the evolution of CSRF attacks in the context of modern web technologies, such as Single Page Applications. Additionally, the talk will assess how browser security mechanisms protect their users against CSRF attacks and how to potentially bypass them.
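For context, a minimal framework-agnostic sketch of the synchronizer-token pattern plus the defensive cookie attributes that the mitigations mentioned above rely on (illustrative only, not a drop-in implementation):

    import hmac, hashlib, secrets

    SECRET_KEY = secrets.token_bytes(32)  # per-deployment secret (illustration only)

    def csrf_token_for(session_id: str) -> str:
        # synchronizer-style token derived from the session; a cross-site page
        # cannot forge it because it never sees the session identifier
        return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

    def is_valid_csrf(session_id: str, submitted_token: str) -> bool:
        return hmac.compare_digest(csrf_token_for(session_id), submitted_token)

    # cookie attributes a modern backend would also set on the session cookie:
    SET_COOKIE = "session=<id>; Secure; HttpOnly; SameSite=Lax; Path=/"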
Exploiting Digital Energy at Level 0
The convergence of the digital and physical worlds has opened a physics-based attack surface that traditional cybersecurity does not address, particularly at the foundational Purdue Level 0. We define this new vulnerability through digital energy: the physical manifestation of computation. Our core argument is that manipulating this energy—through electromagnetic interference or mechanical force—allows attackers to side-step software defenses and compromise operational technology. Because advanced threats may exploit the physical environment to disrupt vital sensors and actuators, security must undergo a fundamental shift. The way forward is the urgent integration of physical layer security monitoring to protect critical infrastructure at its deepest level.
From Ghosts in the Code to Phantoms in the Machine: GenAI Inside Our Cars, Factories, and Cities
We increasingly live inside "soft" infrastructure. Modern vehicles, factories, energy systems, and cities are orchestrated by layers of software: cars are software platforms, buses and trains run on digital control systems, factories are networks of programmable machines, and entire cities depend on interconnected networks of programmable machines that are now influenced or even generated by AI. In such environments, a single malfunctioning component, a hidden dependency, an unexpected interaction, or the risk of a remote shutdown or unintended behavior ("ghosts in the code", if you like) can act as a kill switch, halting production lines, immobilizing fleets, or shutting down essential services. Previously, these kill-switch scenarios came from bugs, misconfigurations, or deliberate sabotage. Generative AI now adds a new layer of complexity: it can write code, design configurations, synthesize sensor data, or autonomously make operational decisions. As such, GenAI can be the guardian that detects anomalies faster than humans, or it can unintentionally embed vulnerabilities that only surface once deployed into the physical world. As physical infrastructure becomes more autonomous, the line between accident, malfunction, and attack becomes dangerously thin. Understanding how GenAI reshapes the kill-switch risk is essential for safety, security, and trust in modern digital infrastructure.
New developments in legal regulation and the challenges of privacy protection and artificial intelligence
With the adoption of the Act Implementing the Regulation (EU) laying down harmonised rules on artificial intelligence (ZIUDHPUI), the Information Commissioner will, as a market surveillance authority, be responsible for supervising prohibited AI systems and certain high-risk AI systems, while at the EU level the so-called digital omnibus is bringing changes to both personal data protection and artificial intelligence. So what awaits us in the near future: more regulation or less, and what will it look like?
Smart security: How adaptive authentication is changing the game
With the growing complexity of digital ecosystems and the rise of cyber threats, classical authentication is becoming less and less effective. Passwords, multi-factor authentication, and static security mechanisms are often not enough against advanced attacks such as identity theft, credential-stuffing attacks, and social engineering. In this talk we will explore the concept of adaptive authentication, which dynamically adjusts security requirements to the user's context, risk, and behavioural patterns. We will analyse the key components of adaptive authentication, such as real-time risk assessment, the use of machine learning for anomaly detection, and the integration of biometric and contextual data. We will present examples of attacks that an adaptive approach can prevent, as well as the challenges of implementing it.
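As a rough illustration of the real-time risk assessment described above, a minimal Python sketch; the signals, weights, and thresholds are illustrative assumptions, not a production policy:

    from dataclasses import dataclass

    @dataclass
    class LoginContext:
        known_device: bool
        geo_matches_history: bool
        impossible_travel: bool
        failed_attempts_last_hour: int

    def risk_score(ctx: LoginContext) -> int:
        score = 0
        if not ctx.known_device:
            score += 30
        if not ctx.geo_matches_history:
            score += 20
        if ctx.impossible_travel:
            score += 40
        score += min(ctx.failed_attempts_last_hour, 5) * 5
        return score

    def required_step(ctx: LoginContext) -> str:
        s = risk_score(ctx)
        if s < 20:
            return "password"
        if s < 60:
            return "password+otp"   # step-up authentication
        return "deny_and_review"

    print(required_step(LoginContext(False, True, False, 2)))  # password+otp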
Important steps in ensuring the secure use of artificial intelligence in an organization
Artificial intelligence (AI) brings many opportunities, but it also opens up new security risks that organizations must not overlook. The talk will present the key challenges and solutions for the secure use of AI, from strategic governance to technical controls: why AI is not secure by default, how to defend against attacks, and how to prevent abuse of models and data. Participants will learn about tools for testing and monitoring, practical examples of attacks, and advice on integrating security mechanisms.
Quantum-Proofing Images: Stopping Fake News in a Synthetic Media Age
The emergence of quantum computing threatens to invalidate current cryptographic mechanisms, creating urgent challenges for maintaining digital authenticity. Concurrently, deepfakes and manipulated imagery continue to erode public trust. We introduce Post-Quantum VerITAS, a provenance-preserving system engineered to remain secure in both classical and post-quantum threat models. Leveraging lattice-based hash constructions, post-quantum zero-knowledge proofs, and CRYSTALS-Dilithium signatures, the system maintains verifiable provenance even under quantum-capable adversaries.
In contrast to existing standards such as C2PA—which lack robustness against both image transformations and quantum cryptanalysis—Post-Quantum VerITAS offers a decentralized, quantum-resistant framework capable of verifying images after common edits. This presentation details the system’s cryptographic design, security guarantees, and resistance to quantum attacks, and discusses pathways for deploying quantum-secure provenance verification at scale.
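As a rough sketch of the sign-and-verify flow (not the VerITAS construction itself): SHA3-256 stands in here for the lattice-based hash, and the signature step assumes the open-source liboqs-python (oqs) bindings with a Dilithium mechanism enabled; in newer builds the mechanism may instead be named "ML-DSA-65".

    import hashlib
    import oqs  # liboqs-python bindings (assumed installed)

    def image_digest(image_bytes: bytes) -> bytes:
        # SHA3-256 stands in for the talk's lattice-based hash construction
        return hashlib.sha3_256(image_bytes).digest()

    image = open("photo.jpg", "rb").read()  # hypothetical input file
    digest = image_digest(image)

    with oqs.Signature("Dilithium3") as signer:
        public_key = signer.generate_keypair()
        signature = signer.sign(digest)  # provenance record = signed digest

    with oqs.Signature("Dilithium3") as verifier:
        print("provenance intact:", verifier.verify(digest, signature, public_key))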
Quishing Without Compromise: Scoping, Tools, Tricks, and Lessons Learned
Red teaming can be challenging, especially when simulating real-world attacks like QR code phishing ("quishing") within a tightly defined scope. How do you credibly launch a phishing campaign without needing to know the specific targets, exposing sensitive information, or putting unintended users at risk? This session offers a behind-the-scenes look at how our team tackled these constraints. We will dig into some open-source tools that can be used, the custom tweaks we made to make the campaign more secure and believable, and the pitfalls you can hopefully avoid. We will walk you through our attack chain:
(1) Setting up a redirector and filtering the bots away (a minimal sketch follows below),
(2) Using a customized EvilGinx instance to verify the scope,
(3) Creating a believable landing page for our targets,
(4) Lessons learned and possible automated attacks.
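As a taste of step (1), a minimal stdlib-only redirector sketch that sends obvious scanners to a decoy and everything else to the in-scope landing page; the URLs are hypothetical and the user-agent heuristics are deliberately crude compared with a real engagement:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    BOT_UA_MARKERS = ("curl", "python-requests", "headlesschrome", "bot", "scanner")
    DECOY_URL = "https://www.example.com/"          # hypothetical benign page
    LANDING_URL = "https://landing.example.net/"    # hypothetical in-scope landing page

    class Redirector(BaseHTTPRequestHandler):
        def do_GET(self):
            ua = self.headers.get("User-Agent", "").lower()
            target = DECOY_URL if any(m in ua for m in BOT_UA_MARKERS) else LANDING_URL
            self.send_response(302)
            self.send_header("Location", target)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), Redirector).serve_forever()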
Secure-by-design: Building cyber-resilient products that meet UX, security, and emerging compliance standards
Security engineering isn’t enough anymore—products must now satisfy complex UX needs, evolving threat landscapes, and tightening compliance regimes. This talk unpacks how product managers and security teams can jointly build secure-by-design systems while aligning with frameworks like GDPR, the EU Cyber Resilience Act, and the upcoming EU AI Act.
We’ll cover secure defaults, data-minimization patterns, auditability requirements, model risk controls, and how to design security features that remain compliant as regulations shift, without slowing delivery or harming usability.
Securing Cloud-Native Supply Chains: Strategies for Fast, Resilient DevOps
This presentation addresses modern supply chain security in cloud-native engineering organizations, focusing on preventing incidents similar to recent npm supply-chain compromises (e.g., the "Shai-Hulud" worm). Drawing from practical deployment experience with large PaaS providers, it outlines actionable mechanisms to ensure code integrity, artifact authenticity, and rapid detection and mitigation of malicious changes. Attendees will gain insights into securing CI/CD pipelines and maintaining rapid response capabilities without compromising development velocity. Emphasis is placed on aligning security practices with modern DevOps workflows to minimize risk while sustaining fast release cycles.
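A minimal illustration of the artifact-integrity idea: verify every build artifact against a pinned SHA-256 digest and fail the pipeline on any mismatch (the lockfile format here is a hypothetical simplification of what real tooling records):

    import hashlib, json, sys

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_artifacts(lockfile: str) -> bool:
        expected = json.load(open(lockfile))  # {"path/to/artifact": "<sha256>", ...}
        ok = True
        for path, digest in expected.items():
            if sha256_of(path) != digest:
                print(f"INTEGRITY FAILURE: {path}", file=sys.stderr)
                ok = False
        return ok

    if __name__ == "__main__":
        sys.exit(0 if verify_artifacts("artifact-hashes.json") else 1)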
The Onion: Layered cyber security for corporations
Supply-chain attacks, red teaming, cyber resilience—these aren't buzzwords, they're your daily reality when your vendor's compromised server becomes your problem. In this talk, we'll dissect the real threats facing modern organizations, from sophisticated supply-chain infiltrations to the social engineering that bypasses your million-dollar security stack. You'll learn how to plan red team engagements that actually test your defenses against real-world attack scenarios, not just check compliance boxes. This isn't about passing audits—it's about building security that makes attackers move on to easier targets. Get ready for a rapid-fire dive into the mindset and methods that turn corporate networks from soft targets into hardened fortresses.
The Pentester’s Shift: From Executor to Operator
Should penetration testers and offensive security professionals fear that AI will render their roles obsolete? The answer isn't to resist, but to evolve. We are approaching a critical milestone where the role of the pentester shifts from a manual executor to a strategic operator.
The impact of AI is undeniable. I predict that within 2-5 years, manual external black-box penetration testing as we know it will no longer exist, replaced by deep automation and AI-driven workflows. Threat actors are already ahead of the curve; reports are surfacing on how adversaries leverage AI to scale operations and automate external attacks.
But how far have we truly come? Beyond the hype, what is actually possible today? In this talk, I will move beyond theory and demonstrate a Proof of Concept (POC) AI agent. I will show how this agent automates the initial phases of an external test via OSINT gathering, and is then reconfigured to perform autonomous privilege escalation on a Linux machine.
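For a sense of what the automated OSINT phase involves at its simplest, a deliberately basic recon sketch that resolves candidate subdomains from a wordlist; the domain and wordlist are hypothetical, and real tooling draws on many more sources:

    import socket

    def enumerate_subdomains(domain, words):
        found = []
        for w in words:
            host = f"{w}.{domain}"
            try:
                found.append((host, socket.gethostbyname(host)))
            except socket.gaierror:
                pass  # name does not resolve
        return found

    print(enumerate_subdomains("example.com", ["www", "mail", "vpn", "dev"]))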
Token Takeover: Anatomy of an Authentication System Collapse — Real-World Password Reset Misbinding (IDOR) & Multi-Domain XSS Token Theft Case Study
This presentation analyzes two real-world, high-impact vulnerabilities that led to full authentication system compromise: a Password Reset Token Misbinding flaw (IDOR) resulting in a $55,000 CEO account takeover, and a multi-domain XSS attack on the OneID authentication platform that enabled cross-origin token theft with physical safety implications.
The first case study demonstrates how a single unvalidated parameter (email) in the password reset flow allowed attackers to hijack any user account by re-binding a valid token to a victim’s email. The vulnerability required no MFA bypass and exposed financial assets at scale.
The second case study covers how an unsanitized parameter (originalUrl) combined with allowed javascript: scheme execution enabled remote script loading, token exfiltration, and full takeover across multiple global domains, including access to live location, vehicle lock/unlock functions, and user identity data.
The talk breaks down exploitation methodology, root-cause analysis, weak architectural patterns, and defensive strategies that could have prevented the collapse of both authentication systems. Attendees will gain practical insights into validating parameters, enforcing strict token binding, eliminating javascript: injection vectors, and hardening storage of authentication tokens.
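To make the two defensive takeaways concrete, here is a minimal sketch of server-side token binding for password resets and of allowlist-based redirect validation that rejects javascript: URLs; the store, hostnames, and helper names are illustrative assumptions, not the affected systems' code.

    import hmac, secrets
    from urllib.parse import urlparse

    RESET_TOKENS = {}  # token -> account email (server-side store; illustration only)

    def issue_reset_token(account_email: str) -> str:
        token = secrets.token_urlsafe(32)
        RESET_TOKENS[token] = account_email
        return token

    def redeem_reset_token(token: str, email_from_request: str) -> str:
        # the account whose password changes is decided by the server-side binding,
        # never by a client-supplied email parameter
        bound = RESET_TOKENS.get(token)
        if bound is None or not hmac.compare_digest(bound.encode(), email_from_request.encode()):
            raise PermissionError("token not valid for this account")
        del RESET_TOKENS[token]
        return bound

    ALLOWED_REDIRECT_HOSTS = {"app.example.com"}  # hypothetical allowlist

    def safe_redirect_target(original_url: str) -> str:
        p = urlparse(original_url)
        # rejects javascript:, data:, protocol-relative tricks, and off-allowlist hosts
        if p.scheme != "https" or p.hostname not in ALLOWED_REDIRECT_HOSTS:
            return "https://app.example.com/"
        return original_url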
Unmasking the Shadows: Advanced Techniques for Dark Web Domain Deanonymization
The Dark Web’s promise of anonymity through technologies like Tor has long been considered its most defining characteristic—and its greatest shield for malicious actors. However, sophisticated adversaries, law enforcement agencies, and security researchers have developed increasingly advanced methodologies to pierce this veil of anonymity. This presentation will provide a comprehensive technical deep dive into the tactics, techniques, and procedures (TTPs) used to deanonymize Dark Web domains and their operators.
Drawing from real-world case operations, OSINT investigations, and cutting-edge research, this talk will explore the full spectrum of deanonymization vectors—from passive traffic analysis and timing correlation attacks to active fingerprinting techniques and operational security failures. Attendees will gain insight into how seemingly minor OPSEC mistakes, infrastructure misconfigurations, and behavioral patterns can cascade into complete identity exposure. We’ll examine the technical architecture of anonymity networks, identifying inherent weaknesses and attack surfaces that can be exploited.
This session is designed for penetration testers, threat intelligence analysts, red teamers, and security researchers who need to understand both offensive deanonymization capabilities and defensive countermeasures. By understanding how anonymity fails, defenders can better architect resilient infrastructure, while investigators can develop more effective methodologies for tracking threat actors. Attendees will leave with actionable knowledge, practical tools, and a realistic understanding of Dark Web anonymity’s true boundaries in 2025.
Content in preparation
Content in preparation
Content in preparation
Privacy in the age of telemetry, the cloud, and a regulated digital future
The talk examines how modern operating systems (Windows, macOS, and to some extent Linux) collect telemetry data, and what impact cloud services such as Google Drive, OneDrive, iCloud and others have on users' privacy.
It highlights the new challenges posed by AI training on data in the cloud and explains how algorithms can access documents, photos, and other private information.
Finally, the talk touches on current European regulatory initiatives, such as Chat Control and online age verification, and shows how these initiatives affect individual freedom, privacy, and digital rights.


