
Kubernetes Runtime Security: From a Curiosity to an Intelligent Security Workflow

How to know what is happening in our cluster, and act accordingly.


Introduction

We have all lived that moment: someone opens a pod in production, runs a printenv, and checks a sensitive variable. Sometimes it is out of curiosity, sometimes out of technical necessity… but what if it wasn't a developer? What if it was a compromised account?

This is where Falco 🦅 enters the scene.

Let's build a simple but powerful scenario:
A curious developer inspects an environment variable in a production container. Falco detects the event in real time and sends it to Wazuh, where we enrich, classify, and correlate it. From there, we trigger automatic actions with N8N.

What actions? Whatever we want:

🔄 Rotate credentials in Vault
📦 Recreate the affected Pods
📑 Extract logs for forensic analysis
🔔 Notify via Slack or Teams

The interesting part is that we don't chase the developer: we use the signal to strengthen security.

Falco monitors runtime behavior inside Kubernetes. Wazuh examines the context and classifies the event. N8N orchestrates the automated response. From a simple curiosity, an intelligent security workflow emerges.

Falco

Let's create the Falco configuration file, falco-values-complete.yaml, which we will then install with Helm.

falco:
  json_output: true
  json_include_output_property: true
  json_include_tags_property: true

  grpc:
    enabled: true
    bind_address: "0.0.0.0:5060"

  grpc_output:
    enabled: true

driver:
  kind: modern_ebpf

falcosidekick:
  enabled: true
  replicaCount: 2
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet

  config:
    syslog:
      host: "192.168.0.67"  # ← your Wazuh server IP
      port: "514"
      protocol: "udp"
      format: "json"
      minimumpriority: "notice"

customRules:
  secrets-detection.yaml: |-
    - rule: Read Sensitive File Untrusted
      desc: Detect attempts to read sensitive system files
      condition: >
        open_read and
        container and
        fd.name in (/etc/shadow, /etc/sudoers, /etc/pam.conf)
      output: >
        Sensitive file opened for reading
        (file=%fd.name command=%proc.cmdline user=%user.name
        container=%container.name k8s_pod=%k8s.pod.name k8s_ns=%k8s.ns.name)
      priority: WARNING
      tags: [filesystem, mitre_credential_access, T1555]

    - rule: Read Kubernetes Secret File
      desc: Detect read operations on Kubernetes secret files
      condition: >
        open_read and
        container and
        fd.name startswith "/etc/secrets/"
      output: >
        Kubernetes secret file accessed
        (file=%fd.name command=%proc.cmdline user=%user.name
        container=%container.name k8s_pod=%k8s.pod.name k8s_ns=%k8s.ns.name)
      priority: WARNING
      tags: [kubernetes, secrets, T1552.007, mitre_credential_access]

    - rule: Read Application Secret Files
      desc: Detect unauthorized access to application secret files
      condition: >
        open_read and
        container and
        fd.name startswith "/tmp/app-config/"
      output: >
        Application secret file accessed
        (file=%fd.name command=%proc.cmdline user=%user.name
        container=%container.name k8s_pod=%k8s.pod.name k8s_ns=%k8s.ns.name)
      priority: WARNING
      tags: [filesystem, application, mitre_credential_access, T1552, secrets]

    - rule: Environment Variables Dumped
      desc: Detect attempts to dump environment variables
      condition: >
        spawned_process and
        container and
        proc.name in (printenv, env)
      output: >
        Environment variables dumped
        (command=%proc.cmdline user=%user.name
        container=%container.name k8s_pod=%k8s.pod.name k8s_ns=%k8s.ns.name)
      priority: NOTICE
      tags: [process, mitre_credential_access, T1552.007]

    - rule: Environment Variables Dumped in Production
      desc: Detect env dumps in production namespace
      condition: >
        spawned_process and
        container and
        k8s.ns.name = "production" and
        proc.name in (env, printenv)
      output: >
        Environment variables accessed in production
        (command=%proc.cmdline user=%user.name
        container=%container.name k8s_pod=%k8s.pod.name k8s_ns=%k8s.ns.name)
      priority: WARNING
      tags: [kubernetes, secrets, production, T1552.007, mitre_credential_access]

    - rule: Read ServiceAccount Token
      desc: Detect reading of Kubernetes ServiceAccount token
      condition: >
        open_read and
        container and
        fd.name startswith "/var/run/secrets/kubernetes.io/serviceaccount/token"
      output: >
        ServiceAccount token accessed
        (file=%fd.name command=%proc.cmdline user=%user.name
        container=%container.name k8s_pod=%k8s.pod.name k8s_ns=%k8s.ns.name)
      priority: WARNING
      tags: [kubernetes, credentials, mitre_credential_access, T1552.007]

resources:
  requests:
    cpu: 100m
    memory: 512Mi
  limits:
    cpu: 1000m
    memory: 1024Mi

Don't forget to set the address of your Wazuh server here.

Now install the chart with Helm (add the falcosecurity repo first if you don't already have it):

helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

helm install falco falcosecurity/falco \
  --namespace falco \
  --create-namespace \
  -f falco-values-complete.yaml
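The custom rules above combine three checks: the event class (open_read or spawned_process), the container context, and either a file path or a process name. A toy matcher, with hypothetical event fields, shows which rules a given event would trigger; keep in mind Falco evaluates the real conditions in-kernel:

```python
# Toy re-implementation of the custom Falco rules above, to see which
# rules a hypothetical event would trigger (field names are illustrative;
# this is not how Falco evaluates conditions internally)
def matching_rules(event):
    hits = []
    if not event.get("container"):
        return hits
    if event.get("evt_type") == "open_read":
        path = event.get("fd_name", "")
        if path in ("/etc/shadow", "/etc/sudoers", "/etc/pam.conf"):
            hits.append("Read Sensitive File Untrusted")
        if path.startswith("/etc/secrets/"):
            hits.append("Read Kubernetes Secret File")
        if path.startswith("/tmp/app-config/"):
            hits.append("Read Application Secret Files")
        if path.startswith("/var/run/secrets/kubernetes.io/serviceaccount/token"):
            hits.append("Read ServiceAccount Token")
    if event.get("evt_type") == "spawned_process":
        if event.get("proc_name") in ("printenv", "env"):
            hits.append("Environment Variables Dumped")
            if event.get("k8s_ns") == "production":
                hits.append("Environment Variables Dumped in Production")
    return hits

print(matching_rules({"container": True, "evt_type": "spawned_process",
                      "proc_name": "printenv", "k8s_ns": "production"}))
# → ['Environment Variables Dumped', 'Environment Variables Dumped in Production']
```

Note how a printenv in production fires two rules at once: the generic NOTICE and the production-specific WARNING.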

Wazuh

I'm not going to explain how to use Wazuh here; I have covered it in other posts. But I will paste the most important parts of this configuration.

First, the decoder, in /var/ossec/etc/decoders/falco-decoder.xml.

<!-- /var/ossec/etc/decoders/falco-decoder.xml -->
<decoder name="falco">
  <prematch>Falco</prematch>
</decoder>

<!-- Child decoder: parses the JSON payload, so the rules below can
     reference it via <decoded_as>falco-json</decoded_as> -->
<decoder name="falco-json">
  <parent>falco</parent>
  <plugin_decoder>JSON_Decoder</plugin_decoder>
</decoder>

Second, the rules, in /var/ossec/etc/rules/falco-rules.xml.

<group name="falco,">

  <!-- Base rule - IMPORTANT: decoded_as must be "falco-json" -->
  <rule id="100600" level="0">
    <decoded_as>falco-json</decoded_as>
    <description>Falco: Runtime security event</description>
  </rule>

  <!-- By priority -->
  <rule id="100603" level="8">
    <if_sid>100600</if_sid>
    <match>"priority":"Warning"</match>
    <description>Falco Warning Alert</description>
    <group>falco,warning,</group>
  </rule>

  <rule id="100605" level="12">
    <if_sid>100600</if_sid>
    <match>"priority":"Critical"</match>
    <description>Falco Critical Alert</description>
    <group>falco,critical,</group>
  </rule>

  <!-- PRODUCTION - highest priority -->
  <rule id="100610" level="15">
    <if_sid>100603,100605</if_sid>
    <match>"k8s.ns.name":"production"</match>
    <description>Falco CRITICAL: Alert in PRODUCTION namespace</description>
    <group>falco,production,high_priority,</group>
  </rule>

  <!-- Reads of mounted secrets (pattern in full_log) -->
  <rule id="100620" level="12">
    <if_sid>100600</if_sid>
    <match>"fd.name":"/etc/secrets/</match>
    <description>Falco: Mounted K8s secret accessed</description>
    <group>falco,credential_access,secrets,</group>
    <mitre>
      <id>T1552.007</id>
    </mitre>
  </rule>

  <!-- Reads of /etc/shadow -->
  <rule id="100621" level="12">
    <if_sid>100600</if_sid>
    <match>"fd.name":"/etc/shadow"</match>
    <description>Falco: /etc/shadow read</description>
    <group>falco,credential_access,</group>
    <mitre>
      <id>T1552.001</id>
    </mitre>
  </rule>

  <!-- Secret access in PRODUCTION - CRITICAL -->
  <rule id="100625" level="15">
    <if_sid>100620</if_sid>
    <match>"k8s.ns.name":"production"</match>
    <description>Falco CRITICAL: Developer accessing secrets in PRODUCTION</description>
    <group>falco,production,credential_access,unauthorized_access,</group>
    <mitre>
      <id>T1552.007</id>
    </mitre>
  </rule>

  <!-- Match on the Falco rule name - Read Kubernetes Secret File -->
  <rule id="100640" level="14">
    <if_sid>100600</if_sid>
    <match>Kubernetes secret file accessed</match>
    <description>Falco: Kubernetes secret file read detected</description>
    <group>falco,kubernetes,secrets,credential_access,</group>
    <mitre>
      <id>T1552.007</id>
    </mitre>
  </rule>

  <!-- Environment variables dumped in production -->
  <rule id="100641" level="12">
    <if_sid>100600</if_sid>
    <match>Environment variables accessed in production</match>
    <description>Falco: Environment variables accessed in production</description>
    <group>falco,kubernetes,production,credential_access,</group>
    <mitre>
      <id>T1552.007</id>
    </mitre>
  </rule>

  <!-- Combined: secret file in production via the custom Falco rule -->
  <rule id="100650" level="16">
    <if_sid>100640</if_sid>
    <match>"k8s.ns.name":"production"</match>
    <description>Falco CRITICAL: K8s secret file accessed in PRODUCTION - Pod: webapp</description>
    <group>falco,production,kubernetes,credential_access,critical,</group>
    <mitre>
      <id>T1552.007</id>
    </mitre>
  </rule>

</group>
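These chained rules act as an escalation ladder: the base rule matches every Falco event at level 0, and the children raise the level when their match string appears. A toy sketch of the effective logic (not Wazuh's real engine, just the chain above):

```python
# Toy model of the escalation ladder formed by rules 100600/100603/100605/100610
# above (Wazuh's real engine is more involved; this only mirrors the chain)
def effective_level(alert_line):
    level = 0                                            # 100600: base rule
    if '"priority":"Warning"' in alert_line:
        level = 8                                        # 100603
    if '"priority":"Critical"' in alert_line:
        level = 12                                       # 100605
    if level >= 8 and '"k8s.ns.name":"production"' in alert_line:
        level = 15                                       # 100610 (if_sid 100603,100605)
    return level

print(effective_level('{"priority":"Warning","k8s.ns.name":"production"}'))  # → 15
```

A Warning outside production stays at level 8; the same Warning in the production namespace jumps to 15, which is what later drives the active response.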

Here is the integration.

Don't forget to enable the UDP 514 syslog listener in ossec.conf, along with this block:

  <active-response>
    <command>n8n-webhook</command>
    <location>server</location>
    <rules_group>falco</rules_group>
    <level>12</level>
  </active-response>

Incident Response

Here is the script that works the magic behind the scenes, analyzing the alert and calling each webhook, together with its wrapper. The Python script lives at /var/ossec/active-response/bin/parse-and-send.py.

#!/usr/bin/env python3
import sys
import json
import re
import subprocess
import os
import time
from datetime import datetime

# N8N webhook URLs
N8N_TRIAGE = "http://192.168.0.12:5678/webhook/falco-triage"
N8N_FORENSICS = "http://192.168.0.12:5678/webhook/falco-forensics"
N8N_CONTAINMENT = "http://192.168.0.12:5678/webhook/falco-containment"

# Evidence directory
FORENSICS_DIR = "/var/ossec/logs/forensics"

# Full path to kubectl (for forensics)
KUBECTL = "/usr/local/bin/kubectl"

try:
    input_data = sys.stdin.read().strip()

    # Extract fields with regexes
    alert_id_match = re.search(r'"id":"([^"]+)"', input_data)
    rule_id_match = re.search(r'"rule":\{[^}]*"id":"([^"]+)"', input_data)
    rule_level_match = re.search(r'"level":(\d+)', input_data)
    rule_desc_match = re.search(r'"description":"([^"]+)"', input_data)
    timestamp_match = re.search(r'"timestamp":"([^"]+)"', input_data)

    full_log_match = re.search(r'"full_log":"(.+?)","decoder"', input_data)
    full_log = full_log_match.group(1) if full_log_match else ""

    # Parse the Falco JSON embedded in the syslog line
    falco_data = {}
    falco_match = re.search(r'Falco\[\d+\]:\s*(\{.+\})', full_log)
    if falco_match:
        # Naive unescape: drop the backslashes Wazuh adds inside full_log
        falco_json_str = falco_match.group(1).replace('\\', '')
        try:
            falco_data = json.loads(falco_json_str)
        except json.JSONDecodeError:
            pass

    # Extract MITRE ATT&CK ids
    mitre_ids = re.findall(r'"mitre":\{[^}]*"id":\[([^\]]+)\]', input_data)
    mitre_id = mitre_ids[0].replace('"', '').strip() if mitre_ids else 'N/A'

    rule_id = rule_id_match.group(1) if rule_id_match else 'N/A'
    rule_level = int(rule_level_match.group(1)) if rule_level_match else 0

    namespace = falco_data.get('output_fields', {}).get('k8s.ns.name', 'unknown')
    pod = falco_data.get('output_fields', {}).get('k8s.pod.name', 'unknown')

    # Common payload
    payload = {
        'alert_id': alert_id_match.group(1) if alert_id_match else 'N/A',
        'rule_id': rule_id,
        'rule_level': str(rule_level),
        'rule_description': rule_desc_match.group(1) if rule_desc_match else 'N/A',
        'timestamp': timestamp_match.group(1) if timestamp_match else '',
        'k8s_namespace': namespace,
        'k8s_pod': pod,
        'container_id': falco_data.get('output_fields', {}).get('container.id', 'unknown'),
        'command': falco_data.get('output_fields', {}).get('proc.cmdline', 'unknown'),
        'file_accessed': falco_data.get('output_fields', {}).get('fd.name', 'unknown'),
        'user': falco_data.get('output_fields', {}).get('user.name', 'root'),
        'falco_rule': falco_data.get('rule', 'N/A'),
        'falco_priority': falco_data.get('priority', 'N/A'),
        'mitre_id': mitre_id
    }

    print(f"Processing alert - Rule: {rule_id}, Level: {rule_level}, NS: {namespace}, Pod: {pod}")

    # ============================================================================
    # 1. TRIAGE - always send
    # ============================================================================
    result = subprocess.run(
        ['curl', '-X', 'POST', N8N_TRIAGE,
         '-H', 'Content-Type: application/json',
         '-d', json.dumps(payload),
         '--max-time', '10'],
        capture_output=True, text=True
    )
    print(f"TRIAGE - Sent to Slack")
    time.sleep(2)

    # ============================================================================
    # 2. FORENSICS - for alerts with level >= 12
    # ============================================================================
    if rule_level >= 12 and namespace != 'unknown' and pod != 'unknown':
        print(f"FORENSICS - Collecting evidence for {namespace}/{pod}")

        os.makedirs(FORENSICS_DIR, exist_ok=True)
        timestamp_str = datetime.now().strftime('%Y%m%d_%H%M%S')
        evidence_file = f"{FORENSICS_DIR}/{namespace}_{pod}_{timestamp_str}.txt"

        with open(evidence_file, 'w') as f:
            f.write("=" * 80 + "\n")
            f.write("FORENSICS REPORT - FALCO SECURITY INCIDENT\n")
            f.write("=" * 80 + "\n\n")

            f.write(f"Alert ID: {payload['alert_id']}\n")
            f.write(f"Timestamp: {payload['timestamp']}\n")
            f.write(f"Rule: {rule_id} (Level {rule_level})\n")
            f.write(f"Description: {payload['rule_description']}\n")
            f.write(f"MITRE ATT&CK: {mitre_id}\n\n")

            f.write(f"Namespace: {namespace}\n")
            f.write(f"Pod: {pod}\n")
            f.write(f"Container ID: {payload['container_id']}\n")
            f.write(f"User: {payload['user']}\n\n")

            f.write(f"Suspicious Command: {payload['command']}\n")
            f.write(f"File Accessed: {payload['file_accessed']}\n")
            f.write(f"Falco Rule: {payload['falco_rule']}\n\n")

            f.write("=" * 80 + "\n")
            f.write("POD LOGS (last 100 lines)\n")
            f.write("=" * 80 + "\n\n")

            try:
                logs = subprocess.run(
                    [KUBECTL, 'logs', pod, '-n', namespace, '--tail=100'],
                    capture_output=True, text=True, timeout=15
                )
                f.write(logs.stdout if logs.returncode == 0 else f"Error: {logs.stderr}\n")
            except Exception as e:
                f.write(f"Error capturing logs: {e}\n")

            f.write("\n" + "=" * 80 + "\n")
            f.write("POD DESCRIPTION\n")
            f.write("=" * 80 + "\n\n")

            try:
                describe = subprocess.run(
                    [KUBECTL, 'describe', 'pod', pod, '-n', namespace],
                    capture_output=True, text=True, timeout=15
                )
                f.write(describe.stdout if describe.returncode == 0 else f"Error: {describe.stderr}\n")
            except Exception as e:
                f.write(f"Error describing pod: {e}\n")

            f.write("\n" + "=" * 80 + "\n")
            f.write("POD YAML MANIFEST\n")
            f.write("=" * 80 + "\n\n")

            try:
                yaml_out = subprocess.run(
                    [KUBECTL, 'get', 'pod', pod, '-n', namespace, '-o', 'yaml'],
                    capture_output=True, text=True, timeout=15
                )
                f.write(yaml_out.stdout if yaml_out.returncode == 0 else f"Error: {yaml_out.stderr}\n")
            except Exception as e:
                f.write(f"Error getting YAML: {e}\n")

            f.write("\n" + "=" * 80 + "\n")
            f.write("END OF REPORT\n")
            f.write("=" * 80 + "\n")

        subprocess.run(['gzip', '-f', evidence_file])
        evidence_file_gz = f"{evidence_file}.gz"

        print(f"FORENSICS - Evidence saved: {evidence_file_gz}")

        forensics_payload = payload.copy()
        forensics_payload['evidence_file'] = evidence_file_gz
        forensics_payload['evidence_collected'] = True

        result = subprocess.run(
            ['curl', '-X', 'POST', N8N_FORENSICS,
             '-H', 'Content-Type: application/json',
             '-d', json.dumps(forensics_payload),
             '--max-time', '10'],
            capture_output=True, text=True
        )
        print(f"FORENSICS - Notification sent to Slack")
        time.sleep(2)

    # ============================================================================
    # 3. CONTAINMENT - delegated to N8N
    # ============================================================================
    CONTAINMENT_RULES = ['100625', '100650']
    is_production = namespace == 'production'
    is_vault_enabled = 'vault' in pod.lower()
    is_secret_access = (
        '/tmp/app-config/' in payload['file_accessed'] or
        '/etc/secrets/' in payload['file_accessed']
    )

    should_contain = (
        rule_id in CONTAINMENT_RULES or
        (is_production and is_vault_enabled and is_secret_access and rule_level >= 12)
    )

    if should_contain:
        print(f"CONTAINMENT - Delegating to N8N for {namespace}/{pod}")
        print(f"  - Production: {is_production}")
        print(f"  - Vault-enabled: {is_vault_enabled}")
        print(f"  - Secret access: {is_secret_access}")
        print(f"  - Rule: {rule_id}, Level: {rule_level}")

        # Send to N8N - it takes care of everything from here
        containment_payload = payload.copy()
        containment_payload['action_required'] = 'ROTATE_AND_KILL'
        containment_payload['trigger_reason'] = f"Rule {rule_id} - Level {rule_level}"

        result = subprocess.run(
            ['curl', '-X', 'POST', N8N_CONTAINMENT,
             '-H', 'Content-Type: application/json',
             '-d', json.dumps(containment_payload),
             '--max-time', '10'],
            capture_output=True, text=True
        )
        print(f"CONTAINMENT - Delegated to N8N workflow")
        print(f"  N8N will: 1) Rotate secret in Vault, 2) Delete pod via K8s API")

    sys.exit(0)

except Exception as e:
    print(f"ERROR: {str(e)}")
    import traceback
    traceback.print_exc()
    sys.exit(1)
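To sanity-check the regex extraction without waiting for a real alert, you can feed the same steps a synthetic one-line Wazuh alert. The sample below is hypothetical, trimmed down to exactly the fields the regexes look for:

```python
import json
import re

# Synthetic one-line Wazuh alert (hypothetical, reduced to the fields
# that the regexes in parse-and-send.py extract)
sample = (
    '{"timestamp":"2024-01-01T12:00:00.000+0000",'
    '"rule":{"level":15,"description":"Falco CRITICAL","id":"100610"},'
    '"id":"1704110400.12345",'
    '"full_log":"Jan  1 12:00:00 node1 Falco[123]: '
    '{\\"rule\\":\\"Read Kubernetes Secret File\\",\\"priority\\":\\"Warning\\",'
    '\\"output_fields\\":{\\"k8s.ns.name\\":\\"production\\",'
    '\\"k8s.pod.name\\":\\"webapp-vault-x\\"}}",'
    '"decoder":{"name":"falco"}}'
)

# Same extraction steps as the script above
rule_id = re.search(r'"rule":\{[^}]*"id":"([^"]+)"', sample).group(1)
rule_level = int(re.search(r'"level":(\d+)', sample).group(1))
full_log = re.search(r'"full_log":"(.+?)","decoder"', sample).group(1)
falco_json = re.search(r'Falco\[\d+\]:\s*(\{.+\})', full_log).group(1)
falco_data = json.loads(falco_json.replace('\\', ''))

print(rule_id, rule_level, falco_data["output_fields"]["k8s.ns.name"])
# → 100610 15 production
```

If the prints come out as N/A or unknown on a live box, the usual culprit is the shape of full_log not matching these regexes.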

And the wrapper, in /var/ossec/active-response/bin/n8n-webhook.sh.

#!/bin/bash

cd "$(dirname "$0")"

read INPUT_JSON

mkdir -p /var/ossec/logs/active-response

echo "$(date '+%Y-%m-%d %H:%M:%S') - Processing alert" >> /var/ossec/logs/active-response/n8n-webhook.log

# Call the Python script that does all the work
RESULT=$(echo "$INPUT_JSON" | /var/ossec/active-response/bin/parse-and-send.py 2>&1)

echo "$RESULT" >> /var/ossec/logs/active-response/n8n-webhook.log
echo "---" >> /var/ossec/logs/active-response/n8n-webhook.log

exit 0

N8N

I'll use kubectl to access it.

kubectl port-forward -n automation svc/n8n 5678:5678 --address=0.0.0.0

Let's import the workflow to get these three paths: Triage notifies us of the compromise, Forensics collects the logs for the investigation, and Containment rotates the secret in Vault, deletes the Pod, and reports back the status. Here is the workflow for you to import.

For N8N to talk to Kubernetes, we'll create a service account that only has permission to delete Pods; from it we get the Bearer token we need.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: n8n-incident-response
  namespace: automation
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: n8n-incident-response
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: n8n-incident-response
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: n8n-incident-response
subjects:
- kind: ServiceAccount
  name: n8n-incident-response
  namespace: automation
---
apiVersion: v1
kind: Secret
metadata:
  name: n8n-incident-response-token
  namespace: automation
  annotations:
    kubernetes.io/service-account.name: n8n-incident-response
type: kubernetes.io/service-account-token

Here are the commands to retrieve the token.

TOKEN=$(kubectl -n automation get secret n8n-incident-response-token -o jsonpath='{.data.token}' | base64 --decode)
kubectl -n automation get secret n8n-incident-response-token -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')

echo Bearer $TOKEN
Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IjFiaG1pd19KOHJJYnZkbWlhSkptdWl3dHNXZEFmMW9WeWZIRWp1NS0xVW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJhdXRvbWF0aW9uIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im44bi1pbmNpZGVudC1yZXNwb25zZS10b2tlbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJuOG4taW5jaWRlbnQtcmVzcG9uc2UiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0MzJlYTQ2NC0yMmIwLTQ3ZjktYTE2NS01ZGY3YTgxYzM2OTEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6YXV0b21hdGlvbjpuOG4taW5jaWRlbnQtcmVzcG9uc2UifQ.gMa6wGi0SpTrfRYLD5DoOUhH3I19U2v0_brgQaWGtYoOBJyQaavBQOijV4yad6My2theSQVoRVHDseO_pYKBLuc3MhggOxaN2dOLGe3oB3kKQ3TGeTxFlIWYV9tBH7_5SBLoASfuet4frimfkL03sgb5lYx93IdSgMDewQGA_RVVH0McVOwignR43KHARYqwgraqwAjPaD9hdEf5Y3i7v0hPhgln-gBc42B7q3lYCjXyjFCtvictPJ2813AhMNoKy2aLVACjsc9UDfZy1e4yV9ZQQ3QCQyfjPTLq2XXW_ob1sJGl2jmmcu73i23ZXuaVCgi-5zm5gKGlh0bdchWP3A
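The printed value is a JWT; if you want to double-check which service account it belongs to, you can decode its payload segment (no signature verification, inspection only):

```python
import base64
import json

def jwt_claims(token):
    # Decode the middle (payload) segment of a JWT; this does NOT verify
    # the signature, it is only for inspecting the claims
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)   # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# For the token above, claims["sub"] should read
# system:serviceaccount:automation:n8n-incident-response
```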

We create the credential for the Kubernetes HTTP Request Delete Pod node.
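That HTTP Request node needs the API server URL, the Bearer header, and the Pods endpoint. A sketch of the request it issues, with placeholder address and names:

```python
# Build the DELETE request that N8N's "Delete Pod" HTTP node performs
# (server address, namespace, and pod name here are placeholders)
apiserver = "https://192.168.0.10:6443"        # value of $APISERVER above
namespace = "production"
pod = "webapp-vault-7d9c4f5b8-x2kqp"

url = f"{apiserver}/api/v1/namespaces/{namespace}/pods/{pod}"
headers = {"Authorization": "Bearer <token-from-the-secret>"}

print("DELETE", url)
# → DELETE https://192.168.0.10:6443/api/v1/namespaces/production/pods/webapp-vault-7d9c4f5b8-x2kqp
```

The ClusterRole above grants exactly the verbs this call needs (get, list, delete on pods) and nothing else.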

You also have to add the token for the Slack bot.

Vault

Let's install Vault in the automation namespace.

# Add the Vault repo (if you don't already have it)
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

# Install Vault in DEV mode (for this PoC)
cat > vault-values.yaml << 'EOF'
server:
  dev:
    enabled: true
    devRootToken: "root"

  standalone:
    enabled: true

  service:
    type: NodePort
    nodePort: 30200

  dataStorage:
    enabled: false

ui:
  enabled: true
  serviceType: NodePort

injector:
  enabled: false
EOF

helm install vault hashicorp/vault \
  -f vault-values.yaml \
  -n automation --create-namespace

# Wait for the pod to be ready
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=vault -n automation --timeout=300s

# Verify
kubectl get pod -n automation -l app.kubernetes.io/name=vault

Now we create the secret that will be consumed.

# Port-forward to reach the Vault UI
kubectl port-forward -n automation svc/vault 8200:8200 &

# Access Vault
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN='root'

# Or from the pod
kubectl exec -it vault-0 -n automation -- vault login root

# Enable the KV v2 secrets engine
kubectl exec -it vault-0 -n automation -- vault secrets enable -path=secret kv-v2

# Create the initial production secret
kubectl exec -it vault-0 -n automation -- vault kv put secret/production/db-credentials \
  username=admin \
  password=InitialSecretP@ssw0rd123 \
  api_key=sk-prod-initial-key \
  version=1
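Note that KV v2 nests the payload: a read returns your key/value pairs under data.data and the engine's own metadata (including its version counter) under data.metadata. A trimmed sketch of the response for the secret we just wrote, assuming the standard KV v2 response shape:

```python
import json

# Trimmed KV v2 read response for secret/data/production/db-credentials
# (shape per Vault's KV v2 HTTP API; values are the ones written above)
response = json.loads("""
{
  "data": {
    "data": {
      "username": "admin",
      "password": "InitialSecretP@ssw0rd123",
      "api_key": "sk-prod-initial-key",
      "version": "1"
    },
    "metadata": {
      "version": 1
    }
  }
}
""")

secret = response["data"]["data"]                           # our key/value pairs
engine_version = response["data"]["metadata"]["version"]    # KV v2 counter
print(secret["username"], engine_version)
# → admin 1
```

The reader script in the next section parses exactly this double-nested structure, which is also why its secret path includes /data/ (secret/data/production/db-credentials).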

Application

Now it's time for the application to read the password from Vault. The webapp-vault pod will be created in the production namespace. Remember that many of our rules use the namespace as a condition.

apiVersion: v1
kind: ConfigMap
metadata:
  name: vault-reader-script
  namespace: production
data:
  read-secrets.sh: |
    #!/bin/sh

    VAULT_ADDR="http://vault.automation.svc.cluster.local:8200"
    VAULT_TOKEN="root"
    SECRET_PATH="secret/data/production/db-credentials"

    echo "============================================"
    echo "🔐 VAULT-ENABLED APPLICATION"
    echo "============================================"
    echo "Vault: $VAULT_ADDR"
    echo "Pod: $(hostname)"
    echo "Timestamp: $(date)"
    echo ""

    echo "📡 Connecting to Vault..."
    SECRET_JSON=$(wget -q -O - \
      --header "X-Vault-Token: $VAULT_TOKEN" \
      "$VAULT_ADDR/v1/$SECRET_PATH")

    if [ $? -ne 0 ]; then
      echo "❌ Failed to connect to Vault"
      sleep 30
      exit 1
    fi

    DB_USERNAME=$(echo "$SECRET_JSON" | sed -n 's/.*"username":"\([^"]*\)".*/\1/p')
    DB_PASSWORD=$(echo "$SECRET_JSON" | sed -n 's/.*"password":"\([^"]*\)".*/\1/p')
    API_KEY=$(echo "$SECRET_JSON" | sed -n 's/.*"api_key":"\([^"]*\)".*/\1/p')
    SECRET_VERSION=$(echo "$SECRET_JSON" | sed -n 's/.*"version":\([0-9]*\).*/\1/p')

    if [ -z "$DB_PASSWORD" ]; then
      echo "❌ Failed to parse secrets"
      exit 1
    fi

    echo "✅ Secrets loaded successfully!"
    echo ""
    echo "📊 Configuration:"
    echo "  Username: $DB_USERNAME"
    echo "  Password: ${DB_PASSWORD:0:4}***${DB_PASSWORD: -3}"
    echo "  API Key: ${API_KEY:0:12}***"
    echo "  Vault Version: ${SECRET_VERSION}"
    echo ""

    mkdir -p /tmp/app-config
    echo "$DB_USERNAME" > /tmp/app-config/username
    echo "$DB_PASSWORD" > /tmp/app-config/password
    echo "$API_KEY" > /tmp/app-config/api-key
    echo "${SECRET_VERSION}" > /tmp/app-config/secret-version
    chmod 600 /tmp/app-config/*

    echo "✅ Config files created at /tmp/app-config/"
    ls -la /tmp/app-config/
    echo ""
    echo "============================================"
    echo "🚀 APPLICATION RUNNING"
    echo "============================================"
    echo "Using Vault secret version: ${SECRET_VERSION}"
    echo ""

    COUNTER=0
    while true; do
      COUNTER=$((COUNTER + 1))
      echo "[$(date '+%H:%M:%S')] Heartbeat #$COUNTER - Vault version ${SECRET_VERSION}"
      sleep 30
    done
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-vault
  namespace: production
  labels:
    app: webapp-vault
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp-vault
  template:
    metadata:
      labels:
        app: webapp-vault
        env: production
        vault-enabled: "true"
    spec:
      containers:
      - name: webapp
        image: busybox
        command: ["/bin/sh", "/scripts/read-secrets.sh"]
        volumeMounts:
        - name: scripts
          mountPath: /scripts
      volumes:
      - name: scripts
        configMap:
          name: vault-reader-script
          defaultMode: 0755

Compromise Simulation

Now for the part where the attacker, or the curious developer, tries to read a secret.

# 1. Get the pod name
POD=$(kubectl get pod -n production -l app=webapp-vault -o jsonpath='{.items[0].metadata.name}')

echo "Current pod: $POD"

# 2. Simulate the compromise
kubectl exec $POD -n production -- cat /tmp/app-config/password

Voilà!

Meanwhile, in Wazuh.
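The read of /tmp/app-config/password fires the Read Application Secret Files rule; in the production namespace, Wazuh escalates it to level 15 (rule 100610) and the active response kicks in. Tracing the event through the containment predicate of parse-and-send.py, with a hypothetical pod name:

```python
# Walk the simulated read through the containment predicate of
# parse-and-send.py (the pod name is hypothetical)
rule_id, rule_level = "100610", 15
namespace, pod = "production", "webapp-vault-7d9c4f5b8-x2kqp"
file_accessed = "/tmp/app-config/password"

CONTAINMENT_RULES = ["100625", "100650"]
is_production = namespace == "production"
is_vault_enabled = "vault" in pod.lower()
is_secret_access = ("/tmp/app-config/" in file_accessed
                    or "/etc/secrets/" in file_accessed)

should_contain = (rule_id in CONTAINMENT_RULES
                  or (is_production and is_vault_enabled
                      and is_secret_access and rule_level >= 12))
print(should_contain)  # → True
```

So even though 100610 is not in the explicit containment list, the production + vault-enabled + secret-access combination is enough to trigger the rotate-and-kill path.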

Useful Commands

# Watch the Python script's log
tail -f /var/ossec/logs/active-response/n8n-webhook.log
# Watch Wazuh alerts
tail -f /var/ossec/logs/alerts/alerts.log | grep -i "falco"

References

https://blog.santiagoagustinfernandez.com/runtime-security-con-falco#heading-instalacion-de-falco-via-helm