Blackfort Technology – Information Security

Information Security —
from regulation to implementation.

Blackfort Technology combines regulatory expertise with technical security implementation and AI competence – for organisations that need security not just on paper.

Why Security Programmes Fail

The problem is not the regulation. It is the gap between concept and implementation.

Security programmes rarely fail because no concept or policy existed. They fail because a gap remains between the regulatory requirement and the technical reality – a gap that nobody closes. A gap analysis exists. The audit may already have been passed. The measures are documented. But little has changed in the systems.

Regulation prescribes “appropriate technical measures”. What this means concretely – for a telecommunications infrastructure, a production facility, or a cloud-native SaaS platform – rarely follows directly from the legislative text. What is required is regulatory classification, architectural knowledge, and operational implementation expertise.

Blackfort Technology operates precisely at this interface. We do not develop concepts filed away for the auditor. We accompany you from analysis through to implementation – in the same projects, with the same teams.

NIS2 · DORA · CRA
Regulatory Expertise
since 2017
Bonn, Germany
Allianz für Cyber-Sicherheit – Partner

References

Selected clients.

Horváth AG
Variolytics GmbH
LapID Service GmbH
SUMUS Software
Aurum Consulting
RED Global
aquatune GmbH
3steps2web
Vuca Simulations
DESH Datenservice
Reimbold Immobilien
Praxis Dr. Ksendsowski
Zahnfee
Bullets Playing Cards

Competency Areas

Three areas. One consultancy.

Most IT security consultancies are strong in one of these areas. We cover all three – and connect them in projects that are implemented successfully and sustainably.

01 Compliance & Governance

Understanding, prioritising, and translating regulatory requirements into measures.

The regulatory landscape has changed fundamentally over the past three years. NIS2 significantly expands the circle of obligated organisations and raises requirements for security measures, reporting obligations, and governance structures. DORA imposes binding ICT resilience requirements across the entire financial sector. The Cyber Resilience Act obligates product manufacturers to implement security by design.

These requirements cannot be met through one-off audits or ticked checklists. They require an information security management system that is genuinely lived – with clear responsibilities, documented processes, and the ability to respond to operational changes.

We accompany organisations from the first gap analysis through to certification. This includes building ISMS structures, developing security concepts and policies, preparing for external audits, and ongoing support as an external information security officer.

All Consulting Services
02 Technical Implementation

Identifying vulnerabilities, hardening systems, building security infrastructure.

Information security is not a paper exercise. Every requirement from the regulatory framework must be reflected in a technical measure: a process, a system configuration, a monitoring rule, or a network segmentation.

In practice, we regularly encounter the same gaps: no systematic patch management, insufficient or inconsistent logging, an Active Directory that has accumulated attack surfaces over years, or missing controls over privileged access. These problems are known. Yet they remain unaddressed – because technical implementation goes beyond a classic consulting engagement.

We take on this technical implementation: vulnerability analysis and penetration testing, system hardening to CIS Benchmarks, Active Directory hardening against Pass-the-Hash and Kerberoasting, PKI deployment, SBOM management, and centralised logging with SIEM integration. Not just as a one-off project, but with sustainable integration into your processes.
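
To illustrate the kind of measure centralised logging enables, the following sketch shows a minimal brute-force detection rule of the sort a SIEM correlation rule expresses. The event format, the Windows event ID, and the thresholds are illustrative assumptions, not Blackfort's implementation or a product rule set.

```python
from collections import defaultdict
from datetime import timedelta

# Illustrative only: flag accounts with a burst of failed logons.
# Event ID 4625 is the Windows Security Log ID for a failed logon;
# threshold and window values are example assumptions.
FAILED_LOGON = "4625"

def detect_bruteforce(events, threshold=5, window=timedelta(minutes=10)):
    """Return accounts with >= threshold failed logons inside the window.

    `events` is an iterable of (timestamp, event_id, account) tuples.
    """
    by_account = defaultdict(list)
    for ts, event_id, account in events:
        if event_id == FAILED_LOGON:
            by_account[account].append(ts)

    flagged = set()
    for account, times in by_account.items():
        times.sort()
        # Slide a window of `threshold` events and check its time span.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                flagged.add(account)
                break
    return flagged
```

In a real deployment this logic lives in the SIEM's own rule language; the point of the sketch is that the rule only works if logon events from all systems actually arrive centrally – which is exactly the logging gap described above.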

Penetration Testing & Technical Security Solutions
03 AI & AI Security

Developing, integrating, and operating AI systems securely and under control.

AI systems are no longer a future challenge. They are today integrated into production processes, customer interactions, and decision support systems. And they generate risks for which classical security approaches have no established answers.

Prompt injection attacks on language models cannot be repelled with a firewall ruleset. Opaque decision logic in regulated environments generates liability risks that neither DORA nor NIS2 explicitly address. The EU AI Act introduces new compliance requirements for high-risk AI systems, whose implementation requires technical documentation and traceability.

As a permanent member of the AI working group of the Alliance for Cyber Security (ACS/BSI) and lead author of one of the first methodological guides to LLM pentesting in the German-speaking world, Blackfort treats AI security as a dedicated competency. Our offering covers AI governance, technical security testing, LLM pentesting, and monitoring for AI-assisted processes in regulated environments.

AI Security at Blackfort Technology

How We Work

Our approach.

01

Assessment

What is technically in place? Which processes, documentation, and responsibilities exist? No assumptions – only facts.

02

Classification

Which regulatory requirements specifically apply to this organisation, this sector, this infrastructure? What are the actual priorities?

03

Measure Architecture

What must be implemented – technically and organisationally – and in what order? A realistic roadmap, not a wish list.

04

Implementation

We implement in your systems, with your teams, under your operating conditions. Not handed over – delivered together.

05

Operations & Continuity

Security is not a project deliverable. We remain involved as an external information security officer, through regular reviews, or in ongoing operations.

Perspective

Why regulatory requirements and technical implementation belong together – and why this connection is so often missing.

When auditors require “logging and monitoring”, it means something different for an energy supplier with OT systems, for a SaaS platform in AWS, or for a hospital with medical devices. The requirement is formally the same. The technical answer is different every time.

This is the gap that many compliance programmes do not close: the consultancy interprets the regulatory requirements. The technical implementation remains with the internal IT team – or is deferred to another project.

The bridge is a structured security architecture review: an analysis of the existing IT landscape with the goal of deriving concrete measures – technically precise, regulatorily defensible, prioritised by actual risk and operational reality.

In our projects, the collaboration typically looks like this: on one side is the requirement from the audit or the authority. On the other side is the network diagram and the concrete technology stack. Our task is the translation – through to the implemented measure.

Typical Action Areas

Network Segmentation
NIS2, IEC 62443
Security Logging & SIEM
NIS2, BSI IT-Grundschutz, ISO 27001
Vulnerability Management
NIS2, CIS Controls, BSI
Identity & Access Management
ISO 27001, DORA
Incident Response
NIS2, DORA, BSI

AI Requires New Security Controls

Permissions & Data Access
What information can the AI system retrieve? What actions can it perform? Classical access control is insufficient.
Adversarial Inputs
Prompt injection, jailbreaking, indirect manipulation: LLMs must be specifically tested for exploitability.
Audit Trail & Traceability
What did the model decide, when? In regulated processes, traceability is not optional.
EU AI Act Compliance
High-risk AI systems require technical documentation, conformity assessment, and ongoing monitoring.
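
To make the "adversarial inputs" control concrete, the sketch below shows a naive pattern-based pre-filter for obvious prompt-injection markers. The patterns are hypothetical examples for demonstration; real LLM pentesting goes far beyond such matching, which is precisely why dedicated testing is listed as its own control above.

```python
import re

# Illustrative only: a naive pre-filter for blatant injection markers.
# These patterns are example assumptions, not a production rule set,
# and are trivially bypassed by paraphrasing or indirect injection.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

def screen_input(user_input: str) -> bool:
    """Return True if the input matches a known injection marker."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

A filter like this catches only the crudest attempts; it cannot detect payloads hidden in retrieved documents or rephrased instructions. That gap between what pattern matching sees and what an attacker can do is what structured LLM pentesting is meant to measure.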

AI Security

AI needs security architecture. Not just policies.

The question is no longer whether organisations use AI. The question is whether they do so in a controlled manner. A language model with internal access to corporate data is a privileged system – without the explicit authorisation logic that a classical access control system would have.

An AI agent that executes actions in production systems is a new attack surface. An AI decision system in a regulated environment generates documentation obligations that not all compliance frameworks yet explicitly cover. These gaps are real – and they do not close themselves through an AI strategy alone.

We address these challenges from the perspective of security architecture and operations. What permissions does the AI system have? Which actions are audited? How is the model tested for adversarial inputs? What logging requirements arise from the EU AI Act?

These are not abstract questions. They determine whether an AI system is secure, traceable, and compliant in production.
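
As one concrete illustration, the permission and audit questions above can be reduced to an explicit allow-list over an agent's tool calls, with every decision recorded. The role names and actions are hypothetical examples, not a description of any specific deployment.

```python
from datetime import datetime, timezone

# Illustrative only: explicit authorisation for AI agent tool calls.
# Roles and actions below are hypothetical examples.
ALLOWED_ACTIONS = {
    "support-agent": {"read_ticket", "draft_reply"},
    "ops-agent": {"read_ticket", "read_metrics", "restart_service"},
}

audit_log = []

def authorise(agent_role: str, action: str) -> bool:
    """Check an action against the role's allow-list and log the decision."""
    allowed = action in ALLOWED_ACTIONS.get(agent_role, set())
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "role": agent_role,
        "action": action,
        "allowed": allowed,
    })
    return allowed
```

The design choice matters more than the code: the agent gets a deny-by-default allow-list like any other privileged system, and the audit log answers "what did the system decide, when?" – the traceability requirement regulated environments impose.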

AI Security at Blackfort Technology

Sector Expertise

Regulation and technology are sector-specific.

Each sector brings its own regulatory requirements, attack vectors, and operational realities. Our projects are contextualised accordingly – not generic.

Our Positioning

“Strategy consulting without IT architecture is like a chef without a kitchen.”
Christian Gebhardt, Founder & Managing Director, Blackfort Technology

Christian Gebhardt

Founder & Managing Director, Blackfort Technology

At Blackfort, consultants and security engineers work in the same projects. Recommendations are not produced in a workshop and then handed over – we are present from gap analysis through to the implemented measure.