The trinity¹ of Offensive Security archetypes

[Image: a pentester, a bug bounty hunter, and a CTF player]

During the first part of my IT career, I was primarily focused on building systems and developing new ideas. Only in the second phase did I really dare to dive deeper into the rabbit hole of offensive security. That was several years ago now, and the longer I stayed with it, the clearer it became that the differences between these disciplines are less technical in nature and much more about the way you work.

What shaped me most in pentesting is the structure. You work within a clearly defined framework, with scope, time constraints, and a certain amount of delivery pressure. The goal is to assess a system in a way that is understandable and as complete as possible. This systematic approach also changes the way you think, toward questions like: “What can be tested efficiently, and what is actually relevant?”

CTFs had exactly the opposite effect on me. They are not about coverage, but about solving one specific problem. Often, you spend a long time on a single challenge, trying to find the one decisive angle. That environment forces you to go deeper technically, to truly understand things, and to think in unconventional ways. At the same time, it also became clear to me how heavily abstracted those problems are. Many aspects of real-world systems play only a minor role there, although some well-designed CTF challenges come very close to practice or even mirror a real case.

Bug Bounty, in turn, feels completely different from the two disciplines above. There is no clear plan, no fixed process, and often no defined endpoint. Instead, you work through a system iteratively, follow hypotheses, and react to what you discover along the way. Over time, you notice how important pattern recognition becomes: you spot certain classes of mistakes faster, and some situations simply “feel suspicious.” This is less a linear process and more a mixture of experience, intuition, and persistence.

What I find especially interesting in this breakdown is that the same technical foundations lead to completely different ways of working in these three contexts. It is not that you need entirely different skills, but rather that you prioritize and apply the same skills differently. Looking back, I would say that each of these disciplines changed the way I approach problems. Pentesting gave me structure, CTFs sharpened my technical depth, and bug bounty taught me a more open, exploratory way of thinking. For me, it is the combination of all three that creates a truly complete picture.

AI - The elephant in the room [2026 spring edition]

Now for the uncomfortable question that has been on my mind for quite a while: “What happens to these skill sets when a large part of the actual work is increasingly taken over by AI?”

Even today, you can see how good modern systems are at identifying typical vulnerabilities, suggesting exploits, or even automating complete analysis paths. Things that used to require a deep understanding of protocols, memory architectures, or application logic are increasingly being abstracted away by tools. Access becomes easier, but at the same time, the way people learn is shifting.

In my view, the problem is not that AI is taking over tasks. In many areas, that is sensible and efficient. What is more critical is what may be lost in the process: the pressure to truly understand problems at a fundamental level.

Especially in areas like CTFs or exploit development, understanding often emerges through failure. You spend hours analyzing a situation, discard approaches, and gradually build a mental model of the system. This process is slow and sometimes frustrating, but that is exactly where its value lies. If a machine can produce a working solution in seconds, a large part of that path disappears. In the long run, this could shift the overall competence profile. Instead of deep technical understanding, it becomes more important to ask the right questions, use tools efficiently, and interpret results correctly. That is not inherently worse, but it is something different. The risk is that the ability to truly understand systems from the ground up becomes rarer.

This becomes especially clear when comparing the three areas described above. CTFs thrive on technical depth, bug bounty on experience and intuition, and pentesting on structured analysis. If AI takes over large parts of the operational work in all three areas, the question remains how these abilities are supposed to develop, especially for beginners. Anyone who is never forced to debug a buffer overflow themselves, or to manually dissect a complex authentication flow, may know these concepts, but they will not understand them in the same way. What emerges is a form of “functional knowledge”: you know that something works, but not necessarily why.

At the same time, this development should not be viewed only negatively. AI can also be a powerful learning tool, provided it is used consciously. The difference lies in whether it serves as a shortcut or as support for understanding. Still, a certain doubt remains. Many of the abilities that are considered deep expertise today emerged historically because there were no shortcuts. If those shortcuts become the norm, it is entirely conceivable that future generations will learn differently, and possibly with less depth.

Whether that is a problem or simply a natural evolution is hard to judge conclusively. Only one thing is certain: the way we learn and practice security is fundamentally changing right now.


  1. On the word “trinity”: I would probably call it a tetrad rather than a trinity, since there is also the Red Teamer archetype. However, as I don’t have a personal story to share about that role, I decided to stick with a trinity. Not that red teamers should feel excluded - your work is highly valuable, and your skill set is definitely something to admire! ↩︎