Code Signing in Professional Software

AI In The Public Sector, Azure CAF & Cloud Migration, Resilience, Sovereignty Series 12th Jan 2026 Martin-Peter Lambert

Stop Git Impersonation, Strengthen Supply Chain Security, Meet US & EU Compliance

If you build software professionally, you don’t just need secure code—you need verifiable proof of who changed it and whether it was altered before release. Code signing and signed commits play a crucial role in preventing Git impersonation and meeting US/EU compliance requirements such as NIS2, GDPR, and the CRA. That’s why code signing (including Git signed commits) has become a baseline control for software supply chain security, DevSecOps, and compliance.

It also directly addresses a common risk: a developer (or attacker) committing code while pretending to be someone else. With unsigned commits, names and emails can be faked. With signed commits, identity becomes cryptographically verifiable.

This matters even more if you operate in the US and Europe, where cybersecurity requirements increasingly expect strong controls—and where the EU, in particular, attaches explicit, high penalties for non-compliance (NIS2, GDPR, and the Cyber Resilience Act). (EUR-Lex)

What is “code signing” (and what customers actually mean by it)?

In industry conversations, code signing usually means a chain of trust across your entire delivery pipeline:

  • Signed commits (Git commit signing): proves the author/committer identity for each change
  • Signed tags / signed releases: proves a release point (e.g., v2.7.0) wasn’t forged
  • Signed build artifacts: proves your binaries, containers, and packages weren’t tampered with
  • Signed provenance / attestations: proves what source + CI/CD pipeline produced the artifact (a growing expectation in supply chain security programs)

The goal is simple: integrity + identity + traceability from developer laptop to production.
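Each link in that chain can be checked from the command line. As a minimal sketch (it assumes a repository whose history is already signed with trusted keys; the tag name is illustrative):

```shell
# Verify the signature on a single commit
git verify-commit HEAD

# Verify a signed release tag (tag name is illustrative)
git verify-tag v2.7.0

# Show signature status for recent history:
# %G? prints G (good), B (bad), N (no signature), and so on
git log --format='%h %G? %an %s' -5
```

The same verification can run as a CI gate, so unverifiable history never reaches a release branch.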

Why signed commits prevent “commit impersonation”

Without signing, Git identity is just text. Anyone can set an author name/email to match a colleague and push code that looks legitimate.

Signed commits add a cryptographic signature that platforms can verify. When you enforce signed commits (especially on protected branches):

  • fake author names don’t pass verification
  • only commits signed by trusted keys are accepted
  • auditors and incident responders get a reliable attribution trail

In other words: Git commit signing is one of the cleanest ways to prevent developers (or attackers) from committing as someone else.

Code Signing = Better Security + Cleaner Audits

Customers in regulated industries (finance, critical infrastructure, healthcare, manufacturing, government vendors) frequently search for:

  • software supply chain security
  • CI/CD security controls
  • secure SDLC evidence
  • audit trail for code changes

Code signing helps because it creates durable evidence for:

  • change control (who changed what)
  • integrity (tamper-evidence)
  • accountability (strong attribution)
  • faster incident response and forensics

That’s why code signing is often positioned as a compliance accelerator: it reduces the cost and friction of proving good practices.

US Compliance View: Why Code Signing Supports Federal and Enterprise Security Requirements

In the US, the big push is secure software development and software supply chain assurance—especially for vendors selling into government and regulated sectors.

Executive Order 14028 + software attestations

Executive Order 14028 drove major follow-on guidance around supply chain security and secure software development expectations. (NIST)
OMB guidance (including updates like M-23-16) establishes timelines and expectations for collecting secure software development attestations from software producers. (The White House)
Procurement artifacts like the GSA secure software development attestation reflect this direction in practice. (gsa.gov)

NIST SSDF (SP 800-218) as the common language

Many organizations align their secure SDLC programs to the NIST Secure Software Development Framework (SSDF). (csrc.nist.gov)

Where code signing fits: it’s a practical control that supports identity, integrity, and traceability—exactly the kinds of things customers and auditors ask for when validating secure development practices.

(In the US, the “penalty” is often commercial: failed vendor security reviews, procurement blockers, contract risk, and higher liability after an incident—especially if your controls can’t be evidenced.)

EU Compliance View: NIS2, GDPR, and the Cyber Resilience Act (CRA) Penalties

Europe is where penalties become very concrete—and where customers increasingly ask vendors about NIS2 compliance, GDPR security, and Cyber Resilience Act compliance.

NIS2 penalties (explicit fines)

NIS2 includes an administrative fine framework that can reach:

  • Essential entities: up to €10,000,000 or 2% of worldwide annual turnover (whichever is higher)
  • Important entities: up to €7,000,000 or 1.4% of worldwide annual turnover (whichever is higher) (EUR-Lex)

Why code signing matters for NIS2 readiness: it supports strong controls around integrity, accountability, and change management—key building blocks for cybersecurity governance in professional environments.

GDPR penalties (security failures can get expensive fast)

GDPR allows administrative fines up to €20,000,000 or 4% of global annual turnover (whichever is higher) for certain serious infringements. (GDPR)

Code signing doesn’t “solve GDPR,” but it reduces the risk of supply-chain compromise and improves your ability to demonstrate security controls and traceability after an incident.

Cyber Resilience Act (CRA) penalties + timelines

The CRA (Regulation (EU) 2024/2847) introduces horizontal cybersecurity requirements for products with digital elements. Its penalty article states that certain non-compliance can be fined up to:

  • €15,000,000 or 2.5% of worldwide annual turnover (whichever is higher), with further tiers of
  • €10,000,000 or 2%, and €5,000,000 or 1%, depending on the type of breach. (EUR-Lex)

Timing also matters: the CRA applies from 11 December 2027, with earlier dates for specific obligations (e.g., some reporting obligations from 11 September 2026 and some provisions from 11 June 2026). (EUR-Lex)

For vendors, this translates into a customer question you should expect to hear more often:

“How do you prove the integrity and origin of what you ship?”

Your best answer includes code signing + signed releases + signed artifacts + verifiable provenance.

Implementation Checklist: Code Signing Best Practices (Practical + Auditable)

If you want code signing that actually holds up in audits and real incidents, implement it as a system—not a developer “nice-to-have”.

1) Enforce Git signed commits

  • Require signed commits on protected branches (main, release/*)
  • Block merges if commits are not verified
  • Require signed tags for releases
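How enforcement looks depends on your platform. As one example, GitHub exposes a branch-protection “required signatures” endpoint that can be toggled with the gh CLI (OWNER/REPO is a placeholder, and branch protection must already be enabled on main; GitLab, Azure DevOps, and Bitbucket have their own equivalents):

```shell
# Require verified signatures on the protected main branch
# (OWNER/REPO is a placeholder)
gh api --method POST \
  repos/OWNER/REPO/branches/main/protection/required_signatures

# Confirm the setting took effect
gh api repos/OWNER/REPO/branches/main/protection/required_signatures
```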

2) Secure developer signing keys

  • Prefer hardware-backed keys (or secure enclaves)
  • Require MFA/SSO on developer accounts
  • Rotate keys and remove trust when people change roles or leave
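As a sketch of the hardware-backed option: OpenSSH 8.2+ can generate keys whose private half lives on a FIDO2 security key. The file path below is illustrative, and each signature will require a physical touch of the authenticator:

```shell
# Generate a signing key bound to a FIDO2 security key
# (requires OpenSSH 8.2+ and a plugged-in authenticator)
ssh-keygen -t ed25519-sk -f ~/.ssh/git_signing_sk

# Point Git at it for SSH-based commit signing
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/git_signing_sk
```

Because the private key material never leaves the hardware, a stolen laptop alone is not enough to forge signatures.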

3) Sign what you ship (artifact signing)

  • Sign containers, packages, and binaries
  • Verify signatures in CI/CD and at deploy time
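For containers, Sigstore’s cosign is a common choice. This is a sketch only; the image reference and key file names are placeholders:

```shell
# Generate a cosign key pair (prompts for a password)
cosign generate-key-pair

# Sign the image you are about to ship (reference is a placeholder)
cosign sign --key cosign.key registry.example.com/app:2.7.0

# Verify at deploy time; fails if the signature is missing or invalid
cosign verify --key cosign.pub registry.example.com/app:2.7.0
```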

4) Add provenance (supply chain proof)

  • Produce build attestations/provenance so you can prove which pipeline built which artifact from which source
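Sigstore tooling can attach provenance as a signed attestation. A sketch, assuming the provenance document has already been produced by your build system (the predicate file and image reference are placeholders):

```shell
# Attach a signed SLSA provenance attestation to an image
# (provenance.json and the image reference are placeholders)
cosign attest --key cosign.key \
  --type slsaprovenance \
  --predicate provenance.json \
  registry.example.com/app:2.7.0

# Verify the attestation before deploying
cosign verify-attestation --key cosign.pub \
  --type slsaprovenance \
  registry.example.com/app:2.7.0
```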

Is Git commit signing the same as code signing?
Git commit signing proves identity and integrity at the source-control level. Code signing often also includes release and artifact signing for what you ship.

Do signed commits stop a compromised developer laptop?
It helps with attribution and tamper-evidence, but you still need endpoint security, key protection, least privilege, reviews, and CI/CD hardening.

What’s the business value?
Less impersonation risk, stronger software supply chain security, faster audits, clearer incident response, and a better compliance posture for US and EU customers.

Takeaway

If you sell software into regulated or security-sensitive markets, code signing and signed commits are no longer optional. They directly prevent commit impersonation, strengthen software supply chain security, and support compliance conversations—especially in the EU where NIS2, GDPR, and CRA penalties can be severe. (EUR-Lex)


#CodeSigning #SignedCommits #GitSecurity #SoftwareSupplyChain #SupplyChainSecurity #DevSecOps #SecureSDLC #CICDSecurity #NIS2 #GDPR #CyberResilienceAct #Compliance #RegTech #RiskManagement #CybersecurityGovernance #SoftwareIntegrity #CodeIntegrity #IdentitySecurity #NonRepudiation #ZeroTrust #SecurityControls #ChangeManagement #GitHubSecurity #GitLabSecurity #SBOM #SLSA #SoftwareProvenance #ArtifactSigning #ReleaseSigning #EnterpriseSecurity #CloudSecurity #SecurityLeadership #CISO #SecurityEngineering #ProductSecurity #SecurityCompliance

The Monopoly of Progress

AI In The Public Sector, Growth, Resilience, Sovereignty Series 3rd Jan 2026 Martin-Peter Lambert

Why Abundance, Security, and Free Markets are the Only True Catalysts for Innovation

Introduction: The Paradox of Creation

In the modern economic narrative, competition is lionized as the engine of progress. We are taught that a fierce marketplace, where rivals battle for supremacy, drives innovation, lowers prices, and ultimately benefits society. However, a closer examination of the last three decades of technological advancement reveals a startling paradox: true, transformative innovation—the kind that leaps from zero to one—rarely emerges from the bloody trenches of perfect competition. The environments that produce it look far more like a monopoly with long-term vision than like a cutthroat market, which suggests that perfect competition can stifle progress and creativity, and invites the question of why abundance, security, and free markets are the true catalysts for innovation.

This thesis, most forcefully articulated by entrepreneur and investor Peter Thiel in his seminal work, Zero to One, argues that progress is not a product of incremental improvements in a crowded field, but of bold new creations that establish temporary monopolies [1]. This article will explore Thiel’s framework, arguing that the capacity for radical innovation is contingent upon the financial security and long-term planning horizons that only sustained profitability can provide.

The Two Types of Progress

We will then turn our lens to the European Union, particularly Germany, to diagnose why the continent has failed to produce world-dominating technology companies in recent decades, attributing this failure to a culture of short-termism, stifling regulation, and punitive taxation.

Finally, we will dismantle the notion that the state can act as an effective substitute for the market in allocating capital for innovation. Drawing on the work of Nobel Prize-winning economists like Friedrich Hayek and the laureates recognized for their work on creative destruction, we will demonstrate that centralized planning is, and has always been, the most inefficient allocator of resources, fundamentally at odds with the chaotic, decentralized, and often wasteful process that defines true invention.

The Thiel Doctrine: Competition is for Losers

Peter Thiel’s provocative assertion that “competition is for losers” is not an endorsement of anti-competitive practices but a fundamental critique of how we perceive value creation. He draws a sharp distinction between “0 to 1” innovation, which involves creating something entirely new, and “1 to n” innovation, which consists of copying or iterating on existing models. While globalization represents the latter, spreading existing technologies and ideas, true progress is defined by the former.

To understand this, Thiel contrasts two economic models: perfect competition and monopoly.

The Innovation Paradox: Competition vs Monopoly

In a state of perfect competition, no company makes an economic profit in the long run. Firms are undifferentiated, selling at whatever price the market dictates. If there is money to be made, new firms enter, supply increases, prices fall, and the profit is competed away. In this brutal struggle for survival, companies are forced into a short-term, defensive crouch. Their focus is on marginal gains and cost-cutting, not on ambitious, long-term research and development projects that may not pay off for years, if ever [1].

The U.S. airline industry serves as a prime example. Despite creating immense value by transporting millions of passengers, the industry’s intense competition drives profits to near zero. In 2012, for instance, the average airfare was $178, yet the airlines made only 37 cents per passenger trip [1]. This leaves no room for the “waste” and “slack” necessary for bold experimentation.

In stark contrast, a company that achieves a monopoly—not through illegal means, but by creating a product or service so unique and superior that it has no close substitute—can generate sustained profits. These profits are not a sign of market failure but a reward for creating something new and valuable. Google, for example, established a monopoly in search in the early 2000s. Its resulting profitability allowed it to invest in ambitious “moonshot” projects like self-driving cars and artificial intelligence, endeavors that a company struggling for survival could never contemplate.

This environment of abundance and security is the fertile ground from which “Zero to One” innovations spring. It allows a company to think beyond immediate survival and plan for a decade or more into the future, accepting the necessity of financial waste and the high probability of failure in the pursuit of groundbreaking discoveries. This is the core of the Thiel doctrine: progress requires the security that only a monopoly, however temporary, can provide.

The European Malaise: A Continent of Incrementalism

For the past three decades, a glaring question has haunted the economic landscape: where are Europe’s Googles, Amazons, or Apples? Despite a highly educated workforce, strong industrial base, and significant government investment in R&D, the European Union, and Germany in particular, has failed to produce a single technology company that dominates its global market. The continent’s tech scene is characterized by a plethora of “hidden champions”—highly successful, niche-focused SMEs—but it lacks the breakout, world-shaping giants that have defined the digital age. This is not an accident of history but a direct consequence of a political and economic culture that is fundamentally hostile to the principles of “Zero to One” innovation.

The Triple Constraint: Regulation, Taxation, and Short-Termism

The European innovation deficit can be attributed to a trifecta of self-imposed constraints:

  1. A Culture of Precautionary Regulation: The EU’s regulatory philosophy is governed by the “precautionary principle,” which prioritizes risk avoidance over seizing opportunities. This manifests in sprawling, complex regulations like the General Data Protection Regulation (GDPR) and the AI Act. While well-intentioned, these frameworks impose immense compliance burdens, especially on startups and smaller firms. A 2021 study found that GDPR led to a measurable decline in venture capital investment and reduced firm profitability and innovation output, as resources were diverted from R&D to legal and compliance departments [2]. The AI Act, with its risk-based categories and strict mandates, creates further bureaucratic hurdles that stifle the rapid, iterative experimentation necessary for AI development. This risk-averse environment encourages incremental improvements within established paradigms rather than the disruptive breakthroughs that challenge them.
  2. Punitive Taxation and the Demand for Premature Profitability: European tax policies, particularly in countries like Germany where the average corporate tax burden is around 30%, create a significant disadvantage for innovation-focused companies [3]. High taxes on corporate profits and wealth disincentivize the long-term, high-risk investments that drive transformative innovation. Furthermore, the European venture capital ecosystem is less developed and more risk-averse than its U.S. counterpart. Startups often rely on bank lending, which demands a clear and rapid path to profitability. This pressure to become profitable quickly is antithetical to the “wasteful” and often decade-long process of developing truly novel technologies. As a result, many of Europe’s most promising startups, such as UiPath and Dataiku, have relocated to the U.S. to access larger markets, deeper capital pools, and a more favorable regulatory environment [2].
  3. A Fragmented Market: Despite the ideal of a single market, the EU remains a patchwork of 27 different national laws and regulatory interpretations. This fragmentation prevents European companies from achieving the scale necessary to compete with their American and Chinese rivals. A startup in one member state may face entirely different compliance requirements in another, creating significant barriers to expansion. This stands in stark contrast to the unified markets of the U.S. and China, where companies can scale rapidly to achieve national and then global dominance.

This combination of overregulation, high taxation, and market fragmentation creates an environment where it is nearly impossible for companies to achieve the sustained profitability and security necessary for “Zero to One” innovation. The European model, in essence, enforces a state of perfect competition, trapping its companies in a cycle of incrementalism and ensuring that the next generation of technological giants will be born elsewhere.

The State as Innovator: A Proven Failure

Faced with this innovation deficit, some policymakers in Europe and elsewhere have been tempted by the siren song of industrial planning.

Capital Allocation: The Knowledge Problem

The argument is that the state, with its vast resources and ability to direct investment, can strategically guide innovation and pick winners. This is a dangerous and historically discredited idea. The 2025 Nobel Prize in Economics, awarded to Philippe Aghion, Peter Howitt, and Joel Mokyr for their work on innovation-led growth, serves as a powerful reminder that prosperity comes not from stability and central planning, but from the chaotic and unpredictable process of “creative destruction” [4].

The Knowledge Problem and the Price System

Nobel laureate Friedrich Hayek, in his seminal work, dismantled the socialist belief that a central authority could ever effectively direct an economy. He argued that the knowledge required for rational economic planning is not concentrated in a single mind or committee but is dispersed among millions of individuals, each with their own unique understanding of their particular circumstances. The market, through the price system, acts as a vast, decentralized information-processing mechanism, coordinating the actions of these individuals without any central direction [5].

As Hayek wrote, “The economic problem of society is thus not merely a problem of how to allocate ‘given’ resources—if ‘given’ is taken to mean given to a single mind which could solve the problem set by these ‘data.’ It is rather a problem of how to secure the best use of resources known to any of the members of society, for ends whose relative importance only these individuals know” [5].

State-led innovation initiatives inevitably fail because they are blind to this dispersed knowledge. A government committee, no matter how well-informed, cannot possibly possess the information necessary to make the millions of interconnected decisions required to bring a new technology to market. The historical record is littered with the failures of central planning, from the economic collapse of the Soviet Union to the stagnation of countless state-owned enterprises.

Creative Destruction: The Engine of Progress

The work of the 2025 Nobel laureates reinforces Hayek’s critique. Joel Mokyr’s historical analysis of the Industrial Revolution reveals that it was not the product of government programs but of a cultural shift towards open inquiry, merit-based debate, and the free exchange of ideas. The political fragmentation of Europe, which allowed innovators to flee repressive regimes, was a key factor in this process [4].

Aghion and Howitt’s model of “growth through creative destruction” shows that a dynamic economy depends on a constant process of experimentation, entry, and replacement. New, innovative firms challenge and displace established ones, driving progress. This process is inherently messy and unpredictable. It cannot be “engineered” or “guided” by a central planner. Attempts to protect incumbents or strategically direct innovation only serve to entrench mediocrity and stifle the very dynamism that drives growth.

Policies like Europe’s employment protection laws, which make it difficult and expensive to restructure or downsize a failing venture, work directly against this process. A dynamic economy requires that entrepreneurs be free to enter the market, fail, and try again without asking for the state’s permission or being cushioned from the consequences of failure.

The Market at Work: Three Stories of Innovation and Regulation

To make the abstract principles of market dynamics and regulatory friction concrete, consider three powerful stories of technologies that share common roots but followed radically different cost trajectories. These case studies vividly illustrate how free, competitive markets drive costs down and quality up, while regulated, third-party-payer systems often achieve the opposite.

Story 1: LASIK—A Clear View of the Free Market

LASIK eye surgery is a modern medical miracle, yet it operates almost entirely outside the conventional health insurance system. As an elective procedure, it is a cash-pay service where consumers act as true customers, shopping for the best value. The results are a textbook example of free-market success. In the late 1990s, the procedure cost around $2,000 per eye in today’s dollars. A quarter-century later, the price has not only failed to rise with medical inflation but has actually fallen in real terms, with the average cost remaining around $1,500-$2,500 per eye [6].

More importantly, the quality has soared. Today’s all-laser, topography-guided custom LASIK is orders of magnitude safer, more precise, and more effective than the original microkeratome blade-based procedures. This combination of falling prices and rising quality is what we expect from every other technology sector, from televisions to smartphones. It happens in LASIK for one simple reason: providers compete directly for customers who are spending their own money. There are no insurance middlemen, no complex billing codes, and no government price controls to distort the market. The result is relentless innovation and price discipline.

Story 2: The Genome Revolution—Faster Than Moore’s Law

The most stunning example of technology-driven cost reduction in human history is not in computing, but in genomics. When the Human Genome Project was completed in 2003, the cost to sequence a single human genome was nearly $100 million. By 2008, with the advent of next-generation sequencing, that cost had fallen to around $10 million. Then, something incredible happened. The cost began to plummet at a rate that far outpaced Moore’s Law, the famous benchmark for progress in computing. By 2014, the coveted “$1,000 genome” was a reality. Today, a human genome can be sequenced for as little as $200 [7].

This 99.9998% cost reduction occurred in a field driven by fierce technological competition between companies like Illumina, Pacific Biosciences, and Oxford Nanopore. It was a race to innovate, fueled by research and consumer demand, largely unencumbered by the regulatory thicket of the traditional medical device market. While the interpretation of genomic data for clinical diagnosis is regulated, the underlying technology of sequencing itself has been free to follow the logic of the market, delivering exponential gains at an ever-lower cost.

Story 3: The Insulin Tragedy—A Century of Regulatory Failure

In stark contrast to LASIK and genomics stands the story of insulin, a life-saving drug discovered over a century ago. The basic technology for producing insulin is well-established and inexpensive; a vial costs between $3 and $10 to manufacture. Yet, in the heavily regulated U.S. healthcare market, the price has become a national scandal. The list price of Humalog, a common insulin analog, skyrocketed from $21 a vial in 1996 to over $332 in 2019—an increase of roughly 1,500% [8].

How is this possible? The answer lies in a web of regulatory capture and market distortion. The U.S. patent system allows for “evergreening,” where minor tweaks to delivery devices or formulations extend monopolies. The FDA’s classification of insulin as a “biologic” has historically made it nearly impossible for cheaper generics to enter the market. Most critically, a shadowy ecosystem of Pharmacy Benefit Managers (PBMs) negotiates secret rebates with manufacturers, creating perverse incentives to favor high-list-price drugs. The FTC even sued several PBMs in 2024 for artificially inflating insulin prices [9]. In this system, the consumer is not the customer; the PBM is. The result is a market where a century-old, life-saving technology has become a luxury good, a tragic testament to the failure of a market that is anything but free.

These three stories—of sight, of self-knowledge, and of survival—tell a single, coherent tale. Where markets are free, transparent, and competitive, innovation flourishes and costs fall. Where they are burdened by regulation, obscured by middlemen, and captured by entrenched interests, the consumer pays the price, both literally and figuratively.

Conclusion: Embracing the Monopoly of Progress

The evidence is clear, and it leaves us with a conundrum: true, transformative innovation is not a product of competition as a process but of its results; it cannot be secured by regulating every player toward the same suboptimal outcome. It requires an environment of abundance and security where companies can afford to think long-term, embrace risk, and invest in the “wasteful” process of discovery. Peter Thiel’s framework, far from being a defense of predatory monopolies, is a call to recognize the conditions necessary for human progress.

The failure of the EU and Germany to produce world-leading technology companies is a direct result of their hostility to these conditions. A culture of precautionary regulation, punitive taxation, and short-term profitability has created a continent of incrementalism (a mindset of “keep everything the same, because we cannot absorb setbacks”), where the fear of failure outweighs the ambition to create something new. The temptation to solve this problem through state-led industrial planning is a dangerous illusion that ignores the fundamental lessons of economic history.

If we are to unlock the next wave of human progress, we must abandon the comforting but false narrative of perfect competition and embrace the messy, unpredictable, and often monopolistic reality of innovation. This means creating an ecosystem that rewards bold bets and tolerates failure. It means light regulation, competitive taxation, and a culture that celebrates the entrepreneur, not the bureaucrat. The path to a better future is not paved with the good intentions of central planners but with the creative destruction of the free market. It is a path that leads, paradoxically, through the monopoly of progress.

In essence, we need the right balance. The EU has the greatest potential to maximize output from minimal input. The US, for its part, has to catch up on food safety and rein in non-competitive, predatory forms of capitalism. We can all learn something from each other, including from the global superpowers not mentioned here.

#Insight42 #PublicSectorInnovation #DigitalSovereignty #ZeroToOne #ThielDoctrine #GovTech #DigitalTransformation #GermanyDigital #EUTech #InnovationStrategy #PublicProcurement #SovereignTech #RegulatoryReform #CreativeDestruction #EconomicGrowth #DigitalDecade #SmartGovernment #PublicAdmin #TechPolicy #FutureOfGovernment

References

[1] Peter Thiel, “Competition is for Losers,” Wall Street Journal, September 12, 2014

[9] Federal Trade Commission, “FTC Sues Prescription Drug Middlemen for Artificially Inflating Insulin Drug Prices,” September 20, 2024

Related Topics:
https://insight42.com/unleash-the-european-bull/

Multi Cloud Security

Resilience 26th Dec 2025 Martin-Peter Lambert

Secure Your Multi-Cloud Infrastructure with absecure

Why this matters (and what it costs if you don’t)

Multi-cloud is awesome… right up until it isn’t.

One minute you’re enjoying flexibility across AWS, Azure, and GCP. The next minute you’re juggling different IAM models, different logging systems, different defaults, different dashboards, and a growing fear that somewhere there’s a “public bucket” waiting to ruin your week.

And here’s the part nobody wants to hear (but everybody needs to): cloud security is a shared responsibility. Your cloud provider secures the underlying infrastructure, but you’re responsible for securely configuring identities, access, data, and services.

So let’s talk about why this matters — in plain language — and how absecure helps you fix it without turning your team into full-time spreadsheet archaeologists.

Why this matters: multi-cloud multiplies risk (quietly)

Multi-cloud doesn’t just add more places to run workloads. It adds more places to:

  • misconfigure access
  • forget a setting
  • miss a log pipeline
  • keep secrets around too long
  • fall out of compliance without noticing

And most teams are already running multi-cloud whether they planned to or not. A 2025 recap of Flexera’s State of the Cloud survey reports organizations use 2.4 public cloud providers on average. (SoftwareOne)

More clouds = more moving parts = more ways to accidentally ship risk.

What it costs if you don’t fix it (the “ouch” section)

This is the part that makes CFOs stop scrolling.

1) Breaches are expensive (even when nobody “meant to”)

IBM’s Cost of a Data Breach Report 2025 reports a global average breach cost of $4.44M. (bakerdonelson.com)

That’s not “security budget” money. That’s “we didn’t plan for this” money.

2) Secrets stay exposed for months

Verizon’s 2025 DBIR reports the median time to remediate leaked secrets discovered in a GitHub repository was 94 days. (Verizon)

That’s three months of “hope nobody finds it.”

3) Public cloud storage exposure is still a real thing

An IT Pro write-up referencing Tenable’s 2025 research reports 9% of publicly accessible cloud storage contains sensitive data, and 97% of that is classified as restricted/confidential. (IT Pro)

So yes — “just one misconfiguration” can be the whole story.

4) The hidden cost: your team’s time and momentum

Even without a breach, the daily tax is brutal:

  • alert fatigue
  • manual reviews
  • chasing evidence for audits
  • Slack firefighting instead of shipping product

Security becomes the speed bump… and everyone resents it.

Enter absecure: the complete security team (not just a tool)

absecure is built to make multi-cloud security feel less like herding cats and more like running a clean system.

Think of absecure as:

  • visibility (what you have, where it is, what’s risky)
  • prioritization (what matters most right now)
  • remediation workflows (fixes with approvals + rollback + audit trail)
  • compliance automation (evidence without panic)

In other words: less “we have 700 findings” … more “here are the 12 fixes that cut the most risk this week.”

What you get (in customer language)

1) One view across all your clouds

A unified console for AWS/Azure/GCP (+ OCI / Alibaba Cloud if you use them).

2) Agentless scanning (less hassle, faster rollout)

No “install this everywhere” marathon before you see value.

3) Coverage where breaches actually start

  • misconfigurations (public storage, risky network rules, missing encryption)
  • IAM risk (excess permissions, unused roles, dangerous policies)
  • vulnerabilities (VMs/hosts/packages + container image risks)
  • secrets exposure (hardcoded keys/tokens)

4) Compliance without the migraine

CIS Benchmarks are a common baseline for cloud hardening and are widely referenced in security programs.
absecure helps you track posture, map controls, and generate audit-ready reports.

How it works (simple version)

1) Connect your cloud accounts (read-only first)

This keeps onboarding safe and frictionless while you build confidence.

2) Scan continuously (so you catch drift)

Because cloud changes constantly — and drift is where “secure yesterday” becomes “exposed today.”

3) Fix fast (with approvals + rollback)

Turn findings into outcomes:

  • one-click fixes for common misconfigurations
  • approval workflows for higher-risk changes
  • audit logs so you can prove what happened (and when)
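The approvals-plus-rollback flow above can be sketched as a tiny workflow. This is illustrative only: `Finding`, `AuditLog`, and the risk threshold are assumptions for the sketch, not absecure’s actual API.

```python
from dataclasses import dataclass, field

# Illustrative remediation workflow: low-risk fixes apply automatically,
# high-risk fixes wait for approval, and every action leaves an audit entry.
# All names and the threshold are assumptions, not a real product API.

@dataclass
class Finding:
    id: str
    title: str
    risk: int            # 1 (low) .. 10 (critical)
    fix: str             # description of the automated fix
    approved: bool = False

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, event: str) -> None:
        self.entries.append(event)

def remediate(finding: Finding, log: AuditLog, approval_threshold: int = 7) -> str:
    """Apply a fix automatically if low-risk; otherwise require approval."""
    if finding.risk >= approval_threshold and not finding.approved:
        log.record(f"{finding.id}: pending approval ({finding.fix})")
        return "pending_approval"
    log.record(f"{finding.id}: applied fix '{finding.fix}' (rollback point saved)")
    return "fixed"

log = AuditLog()
low = Finding("F-101", "Public S3 bucket", risk=5, fix="block public access")
high = Finding("F-102", "Wildcard IAM policy", risk=9, fix="scope policy")

print(remediate(low, log))    # low-risk: fixed automatically
print(remediate(high, log))   # high-risk: waits for human approval
```

The key design point is that the approval gate and the audit trail live in the same code path, so nothing can be “fixed” without leaving evidence.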

How to set it up (practical steps you can follow today)

Here’s a clean “day 1 → day 7” plan that works in real teams.

Day 1: Get the foundations right

Turn on centralized audit logs early. These are your “black box flight recorder” during incidents and audits.

  • AWS: Use CloudTrail (preferably org-wide)
  • Azure: Export Activity Logs / Log Analytics appropriately
  • GCP: Centralize logging with aggregated sinks

Day 2–3: Pick your baseline (so everyone plays the same game)

Start with CIS Foundations for your cloud(s).
This reduces “opinion debates” and replaces them with an agreed standard.

Day 4–5: Fix the “Top 10” highest-impact issues

A great first sprint list:

  • public storage exposure
  • overly permissive IAM / wildcard policies
  • missing encryption defaults
  • risky inbound firewall/security group rules
  • leaked/stale credentials
  • high severity vulnerabilities on internet-facing workloads
  • logging gaps in critical accounts/projects

Day 6–7: Automate what you can (safely)

Start automation with low-risk, high-confidence fixes first.
Then add approvals and rollback for anything that could disrupt production.

Optional (power-user mode): policy-as-code

If you want custom rules (regions, tags, naming, encryption requirements), policy-as-code is a proven approach, often implemented with OPA/Rego.
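Production policy-as-code is usually written in Rego and evaluated by OPA; purely for illustration, the shape of such a rule can be shown in Python. The region list, tag names, and `evaluate` helper below are hypothetical.

```python
# Policy-as-code in miniature: rules declared as data, evaluated against
# resource descriptions. Real deployments typically use OPA/Rego; this
# Python sketch only illustrates the idea. All values are assumptions.

ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}
REQUIRED_TAGS = {"owner", "cost-center"}

def evaluate(resource: dict) -> list[str]:
    """Return a list of policy violations for one resource."""
    violations = []
    if resource.get("region") not in ALLOWED_REGIONS:
        violations.append("region not allowed")
    if not REQUIRED_TAGS.issubset(resource.get("tags", {})):
        violations.append("missing required tags")
    if not resource.get("encrypted", False):
        violations.append("encryption at rest not enabled")
    return violations

bucket = {"region": "us-east-1", "tags": {"owner": "data-team"}, "encrypted": False}
print(evaluate(bucket))
```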

The “contact us” moment (aka: why teams reach out)

If you’re feeling any of these…

  • “We’re multi-cloud and visibility is fragmented.”
  • “We know we have misconfigs; we just can’t chase them all.”
  • “Audits take too long and evidence is painful.”
  • “We want automation, but we need guardrails.”
  • “Security is slowing delivery and everyone’s frustrated.”

…then this is exactly the kind of problem absecure is built to solve.

What you’ll get if you contact us

  • a fast posture review across your cloud(s)
  • the top risk areas ranked by impact
  • a realistic remediation plan your teams will actually follow
  • a path to continuous compliance evidence (without the chaos)

Contact us for our services (worldwide)

Sources and further reading

  • Shared responsibility (AWS/Azure/GCP)
  • IBM breach cost benchmark bakerdonelson.com
  • Verizon DBIR secret remediation time Verizon
  • Tenable cloud storage exposure findings IT Pro
  • CIS Benchmarks (cloud hardening baseline)
  • Logging setup docs (AWS/Azure/GCP)


#absecure #CloudSecurity #MultiCloud #CSPM #CloudSecurityPostureManagement #DevSecOps #CyberSecurity #ZeroTrust #CloudCompliance #ComplianceAutomation #SecurityAutomation #CloudRisk #VulnerabilityManagement #ContainerSecurity #KubernetesSecurity #IAMSecurity #IdentitySecurity #LeastPrivilege #SecretsManagement #SecretsScanning #SBOM #SPDX #SupplyChainSecurity #CloudMonitoring #ThreatDetection #IncidentResponse #SecurityOperations #SecurityPostureManagement #CISBenchmarks #NIST #SOC2 #ISO27001 #PCIDSS #HIPAA #AWS #MicrosoftAzure #GoogleCloud #OCI #AlibabaCloud #AgentlessSecurity #SecurityTeam

Unleash the European Bull

AI In The Public Sector, Resilience, Sovereignty Series 24th Dec 2025 Martin-Peter Lambert
Unleash the European Bull

Unleashing Innovation in the Age of Integrated Platforms – and Rediscovery of Free Discovery!

In the global arena of technological dominance, the United States soars as the Eagle, Russia stands as the formidable Bear, and China commands as the mythical Dragon. The European Union, with its rich history of innovation and immense economic power, is the Bull—a symbol of strength and potential, yet currently tethered by its own well-intentioned constraints. This post explores how the EU can unleash its inherent creativity and forge a new path to digital sovereignty, not by abandoning its principles, but by embracing a new model of innovation inspired by the very giants it seeks to rival.

The Palantir Paradigm: Integration as the New Frontier

At the heart of the modern software landscape lies a powerful paradigm, exemplified by companies like Palantir. Their genius is not in reinventing the wheel, but in masterfully integrating existing, high-quality open-source components into a single, seamless platform. Technologies like Apache Spark, Kubernetes, and various open-source databases are the building blocks, but the true value—and the competitive advantage—lies in the proprietary integration layer that connects them.

Palantir Integration Model

This integrated approach creates a powerful synergy, transforming a collection of disparate tools into a cohesive, intelligent system. It’s a model that delivers immense value to users, who are shielded from the underlying complexity and can focus on solving their business problems. This is the new frontier of software innovation: not just creating new components, but artfully combining existing ones to create something far greater than the sum of its parts.

In contrast, the European tech landscape, while boasting a wealth of world-class open-source projects and brilliant developers, remains fragmented. It’s a collection of individual gems that have yet to be set into a crown.

Fragmented EU Landscape

The European Paradox: Drowning in Regulation, Starving for Innovation

The legendary management consultant Peter Drucker famously stated, “Business has only two functions — marketing and innovation.” He argued that these two functions produce results, while all other activities are simply costs. This profound insight cuts to the heart of the European paradox. The EU’s commitment to data privacy and ethical technology is laudable, but its current regulatory approach has created a system where it excels at managing costs (regulation) rather than producing results (innovation).

Regulations like the GDPR and the AI Act, while designed to protect citizens, have inadvertently erected barriers to innovation, particularly for the small and medium-sized enterprises (SMEs) that are the lifeblood of the European economy. When a continent is more focused on perfecting regulation than fostering innovation, it finds itself in an untenable position: it can only market products that it does not have.

This “one-size-fits-all” regulatory framework creates a natural imbalance. Large, non-EU tech giants have the vast resources and legal teams to navigate the complex compliance landscape, effectively turning regulation into a competitive moat. Meanwhile, European startups and SMEs are forced to divert precious resources from innovation to compliance, stifling their growth and ability to compete on a global scale.

Regulatory Imbalance

This is the European paradox: a continent rich in talent and technology, yet constrained by a system that favors established giants over homegrown innovators. The result is a landscape where the EU excels at creating rules but struggles to create world-beating products. To get back to innovation, Europe must shift its focus from simply regulating to actively enabling the creation of new technologies.

Unleashing the Bull: A New Path for European Tech Sovereignty

To break free from this paradox, the EU must forge a new path—one that balances its regulatory ideals with the pragmatic need for innovation. The solution lies in the creation of secure innovation zones, or regulatory sandboxes. These are controlled environments where startups and developers can experiment, build, and iterate rapidly, free from the immediate weight of full regulatory compliance.

Innovation Pathway

This approach is not about abandoning regulation, but about applying it at the right stage of the innovation lifecycle. It’s about prioritizing potential benefits and viability first, allowing new ideas to flourish before subjecting them to the full force of regulatory scrutiny. By creating these safe harbors for innovation, the EU can empower its brightest minds to build the integrated platforms of the future, turning its fragmented open-source landscape into a cohesive, competitive advantage.

The Vision: A Sovereign and Innovative Europe

Imagine a future where the European Bull is unleashed. A future where a vibrant ecosystem of homegrown tech companies thrives, building on the continent’s rich open-source heritage to create innovative, integrated platforms. A future where the EU is not just a regulator, but a leading force in the global technology landscape.

The European Bull Unleashed

This vision is within reach. The EU has the talent, the technology, and the values to build a digital future that is both innovative and humane. By embracing a new model of innovation—one that fosters experimentation, prioritizes integration, and applies regulation with wisdom and foresight—the European Bull can take its rightful place as a global leader in the digital age.

References

[1] Palantir and Open-Source Software
[2] Open source software strategy – European Commission
[3] New Study Finds EU Digital Regulations Cost U.S. Companies Up To $97.6 Billion Annually
[4] EU AI Act takes effect, and startups push back. Here’s what you need to know

#DigitalSovereignty #EUTech #DigitalTransformation #Innovation #Technology #EuropeanUnion #DigitalEurope #TechPolicy #OpenSource #PlatformIntegration #CloudSovereignty #DataSovereignty #EnterpriseArchitecture #DigitalStrategy #TechInnovation #EUInnovation #EUProcurement #PublicSector #DigitalAutonomy #TechConsulting #AIAct #GDPR #RegulatoryInnovation #EuropeanTech

What to do when your CDN Fails

Resilience 9th Dec 2025 Martin-Peter Lambert
What to do when your CDN Fails

The Wake-Up Call: It’s Happening Again
What to do when your CDN Fails

Surprise: The Day Cloudflare Stopped

It happened twice in two weeks. On December 5th and again in late November 2025, Cloudflare, one of the world’s largest content delivery networks, experienced critical outages that briefly took portions of the internet offline. For millions of users, websites displayed error pages. For business owners, those minutes felt like hours. In situations like these, it’s crucial to know what to do when your CDN fails. For engineering teams, it sparked an urgent question: are we really protected if our CDN is our only shield?

The answer is uncomfortable: most companies are not.

Figure 1: Traditional CDN architecture—single point of failure

If you operate a business whose entire web stack depends on a single CDN, this post is for you. We will walk through why single-CDN architectures are brittle at scale, and introduce two proven approaches to eliminate the risk: CDN bypass mechanisms and multi-CDN failover. By the end, you will understand how to design systems that keep serving your users even when a major vendor goes dark.


The Problem: Single Point of Failure at Global Scale

How a Single CDN Becomes Your Weakest Link

Most companies adopt a CDN for good reasons: faster content delivery, DDoS protection, global edge caching, and WAF (Web Application Firewall) services. The architecture looks simple and clean:

User → CDN → Origin Server

The CDN becomes the front door to everything. DNS resolves to the CDN’s IP addresses. The CDN caches static assets, forwards API traffic, and enforces security policies. The origin sits behind, protected from direct access.

This design works beautifully—until the CDN has a problem.

What Happened During the Outages

In both the November and December 2025 Cloudflare incidents, a configuration error or internal incident at Cloudflare’s control plane caused cascading failures across their global network. For affected customers, the symptoms were clear:

  • All traffic to Cloudflare-fronted services returned 5xx errors
  • DNS queries continued to resolve, but reached an unreachable service
  • Origin servers remained healthy and online, but were invisible to end users because all paths led through the CDN
  • Workarounds required manual intervention—logging into the CDN dashboard (if reachable), changing DNS, or calling support during an outage

The irony is sharp: the infrastructure designed to provide high availability became the source of unavailability.

Figure 2: Multi-CDN failover strategy—removes single point of failure

The Business Impact

For a SaaS company with $100k monthly revenue, even 15 minutes of CDN-induced downtime can mean:

  • Lost transactions: $100k ÷ 43,200 minutes per month ≈ $2.31/minute, so 15 minutes is roughly $35 of pro-rata revenue; since outages cluster at peak traffic and abandoned checkouts rarely return, real losses routinely run into the thousands
  • Customer trust erosion and support tickets
  • Potential SLA breaches and compensation obligations
  • Reputational damage in competitive markets
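The revenue arithmetic can be made explicit with a small estimator. The 3x peak-traffic multiplier below is an assumption; calibrate it against your own traffic profile.

```python
# Back-of-the-envelope downtime cost: pro-rata revenue loss times a peak
# multiplier, since outages tend to hit during peak traffic. The 3x default
# is an assumption to be replaced with your own traffic data.

def downtime_cost(monthly_revenue: float, minutes_down: float,
                  peak_multiplier: float = 3.0) -> float:
    minutes_per_month = 30 * 24 * 60          # 43,200 minutes in a 30-day month
    per_minute = monthly_revenue / minutes_per_month
    return per_minute * minutes_down * peak_multiplier

# 15 minutes of downtime at $100k monthly revenue, assuming a 3x peak:
print(round(downtime_cost(100_000, 15), 2))   # 104.17
```

Direct revenue is usually the smallest term; SLA credits and churn dominate, which is why the table stakes for resilience work are low.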

For fintech, healthcare, and e-commerce, the costs are substantially higher. And yet, many teams assume “the CDN vendor will not fail” because they have redundancy internally.

They do. But you depend on them all the same.


Solution 1: CDN Bypass—The Emergency Exit

Why Bypass Matters

A CDN bypass is not about abandoning your primary CDN during normal operations. Instead, it is a controlled, secure pathway to your origin server that activates only when the CDN itself becomes the problem.

Think of it like a fire exit: you do not walk through it every day, but it saves lives when the main entrance is blocked.

How CDN Bypass Works

The architecture operates in layers:

Layer 1: Health Monitoring
Continuous health checks on your primary CDN—latency, error rate, reachability, and geographic coverage. If thresholds are breached (e.g., 5% of regions report 5xx errors or p95 latency > 2 seconds), an alert is triggered and bypass logic is engaged.
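Those thresholds translate directly into code. A minimal sketch, assuming per-region boolean health results and a measured p95 latency:

```python
# Layer-1 decision sketch: trip the bypass when either threshold from the
# text is breached (>=5% of regions failing, or p95 latency over 2 seconds).
# Region names and metric plumbing are illustrative.

def should_bypass(region_errors: dict[str, bool], p95_latency_s: float,
                  error_region_pct: float = 0.05,
                  latency_limit_s: float = 2.0) -> bool:
    failing = sum(region_errors.values()) / max(len(region_errors), 1)
    return failing >= error_region_pct or p95_latency_s > latency_limit_s

regions = {"us-east": False, "eu-west": False, "ap-south": True, "sa-east": False}
print(should_bypass(regions, p95_latency_s=0.4))   # 25% of regions failing -> True
```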

Layer 2: Dual Routing
You maintain two DNS records:

  • Primary: Points to your CDN (used under normal conditions)
  • Secondary / Bypass: Points to your origin or a hardened entry point (activated only on CDN failure)

Switching between them is automated—no manual DNS editing during an incident.
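A sketch of such an automated switch, with simple hysteresis added so a single failed probe does not flip DNS back and forth. The streak thresholds are illustrative, not prescribed values.

```python
# Dual-routing selector with hysteresis: require several consecutive
# unhealthy checks before flipping to the bypass record, and several healthy
# ones before flipping back, so brief blips don't cause DNS flapping.
# The thresholds (3 to trip, 5 to recover) are illustrative.

class RouteSelector:
    def __init__(self, trip_after: int = 3, recover_after: int = 5):
        self.trip_after = trip_after
        self.recover_after = recover_after
        self.bad_streak = 0
        self.good_streak = 0
        self.active = "primary"            # primary = CDN record

    def observe(self, cdn_healthy: bool) -> str:
        if cdn_healthy:
            self.good_streak += 1
            self.bad_streak = 0
            if self.active == "bypass" and self.good_streak >= self.recover_after:
                self.active = "primary"
        else:
            self.bad_streak += 1
            self.good_streak = 0
            if self.active == "primary" and self.bad_streak >= self.trip_after:
                self.active = "bypass"
        return self.active

sel = RouteSelector()
for healthy in [True, False, False, False]:   # three failures in a row
    state = sel.observe(healthy)
print(state)   # 'bypass'
```

In production the `active` value would drive an API call to your DNS provider; the hysteresis logic is what keeps that call from firing on every transient error.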

Layer 3: Origin Hardening
Direct access to your origin is dangerous if uncontrolled. You must protect it with:

  • IP Allow-lists: Only accept requests from your bypass management service or approved monitoring endpoints
  • VPN / Private Connectivity: Route bypass traffic through a secure tunnel (e.g., AWS PrivateLink, Azure Private Link)
  • WAF and Rate Limiting: Apply the same security policies you had at the CDN to the direct path
  • Header Validation: Ensure only traffic from your bypass orchestration layer is accepted

Layer 4: Gradual Traffic Shift
Once bypass is active, traffic does not all migrate at once. Instead:

  • Begin with 5-10% of traffic on the direct path
  • Monitor for errors and latency
  • Ramp up to 100% over 5-10 minutes
  • If issues arise, revert to CDN automatically
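The ramp logic above can be sketched as follows. The step sizes and the 1% error threshold for reverting are illustrative assumptions:

```python
# Gradual traffic shift: start at ~10% on the direct path and ramp to 100%
# in steps, reverting fully to the CDN if errors spike. Step size and the
# 1% error cutoff are illustrative.

def ramp_schedule(start_pct: int = 10, step_pct: int = 20) -> list[int]:
    """Percent of traffic on the direct (bypass) path per interval."""
    pcts, current = [], start_pct
    while current < 100:
        pcts.append(current)
        current += step_pct
    pcts.append(100)
    return pcts

def next_share(current_pct: int, error_rate: float,
               step_pct: int = 20, max_errors: float = 0.01) -> int:
    """Advance the ramp, or revert fully to the CDN if errors exceed 1%."""
    if error_rate > max_errors:
        return 0                     # roll back to the CDN
    return min(current_pct + step_pct, 100)

print(ramp_schedule())                   # [10, 30, 50, 70, 90, 100]
print(next_share(30, error_rate=0.05))   # errors too high -> 0
```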

Figure 3: Origin server protection during bypass mode

The Bypass Playbook

A well-designed bypass system includes:

  1. Automated Detection: Monitor CDN health continuously; do not wait for customer complaints
  2. Runbook Automation: Execute failover logic without human intervention—speed is critical
  3. Graceful Degradation: Bypass mode may not include all CDN features (like edge caching). Accept lower performance to avoid complete outage
  4. Recovery and Rollback: Once the CDN recovers, automatically shift traffic back after a safety window
  5. Incident Logging: Record what happened, when, and why for post-incident review

Who Should Use Bypass?

Bypass is ideal for:

  • E-commerce platforms, SaaS applications, and marketplaces where every minute of downtime is quantifiable revenue loss
  • Services with strict SLAs or compliance requirements (fintech, healthcare)
  • Teams with engineering capacity to operate a secondary resilience layer
  • Businesses that can tolerate reduced performance (no edge caching, longer latency) for short periods to stay online

It is not a replacement for a good CDN, but a safety net when your primary CDN fails.


Solution 2: Multi-CDN with Intelligent Failover

Moving Beyond Single-Vendor Lock-In

While CDN bypass solves the immediate problem, a more comprehensive approach is to distribute load across multiple CDN providers. This removes the single point of failure entirely and offers additional benefits: better performance, cost negotiation, and the ability to choose the best CDN for each use case.

Multi-CDN Architecture

In a multi-CDN setup, traffic is shared between two or more independent CDN providers:

Typical Stack:

  • Primary CDN: Cloudflare (or AWS CloudFront, Akamai, etc.) — handles 60-70% of traffic
  • Secondary CDN: Another global provider with complementary strengths — handles 30-40% of traffic
  • Routing Layer: DNS-based or HTTP-based intelligent routing that steers traffic based on real-time metrics

Figure 4: Network resilience with multi-CDN anomaly detection

How Intelligent Routing Works

Instead of static 50/50 load balancing, smart routing adjusts in real time:

Real-Time Metrics:

  • Latency: Route users to the CDN with lower p95 latency in their region
  • Error Rate: If one CDN returns 5xx errors >1%, shift traffic away automatically
  • Cache Hit Ratio: Some CDNs cache better for your content type; route accordingly
  • Regional Availability: If a CDN loses an entire region, route around it

Routing Methods:

  1. DNS-Level (GeoDNS): Return different CDN A records based on user geography and health checks. Simplest but less granular
  2. HTTP-Level (Application Layer): A small proxy or load balancer sits before both CDNs, making per-request decisions. More powerful but adds latency
  3. Dedicated Multi-CDN Platforms: Third-party services (IO River, Cedexis, Intelligent CDN) manage routing and billing across multiple CDNs as a managed service

Practical Setup Example

  1. A DNS query for cdn.example.com arrives at the routing layer
  2. The resolver checks health of both CDNs:
     • CDN-A: latency 50 ms, error rate 0.1%, status OK
     • CDN-B: latency 120 ms, error rate 0.2%, status OK
  3. Decision: route to CDN-A
  4. The user downloads content from CDN-A at 50 ms

If CDN-A later spikes to a 2% error rate:

  • the next query routes to CDN-B instead
  • existing connections may drain gracefully
  • traffic rebalances to the healthy provider
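The decision step in this example can be sketched as a small selection function. Real traffic-steering platforms use far richer metrics and weighting, so treat this as illustrative:

```python
# Routing decision sketch: among CDNs whose error rate is under the 1%
# threshold and whose status is OK, pick the one with the lowest latency.
# Metric names and the threshold are illustrative.

def pick_cdn(cdns: dict[str, dict], max_error_rate: float = 0.01) -> str:
    healthy = {name: m for name, m in cdns.items()
               if m["error_rate"] <= max_error_rate and m["status"] == "OK"}
    if not healthy:
        raise RuntimeError("no healthy CDN available")
    return min(healthy, key=lambda name: healthy[name]["latency_ms"])

cdns = {
    "CDN-A": {"latency_ms": 50, "error_rate": 0.001, "status": "OK"},
    "CDN-B": {"latency_ms": 120, "error_rate": 0.002, "status": "OK"},
}
print(pick_cdn(cdns))                       # CDN-A wins on latency

cdns["CDN-A"]["error_rate"] = 0.02          # CDN-A spikes to 2% errors
print(pick_cdn(cdns))                       # traffic shifts to CDN-B
```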

Cache Warm-up and Cold Starts

One challenge with multi-CDN is that both CDNs must be warmed with your content. If you only route 30% of traffic to CDN-B, it will have more cache misses and higher latency to origin during the failover period.

Solutions:

  • Dual Caching: Proactively push your most critical assets to both CDNs daily
  • Warm Traffic: Send a small amount of traffic (10-20%) to the secondary CDN constantly to keep cache warm
  • Keep-Alive Connections: Maintain a baseline of requests to the secondary CDN even if not actively used
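One common way to implement a constant 10-20% warm share is deterministic hash-based splitting, so a given user consistently lands on the same CDN and caches stay warm on both sides. The key format and percentage below are illustrative:

```python
import hashlib

# Deterministic traffic split: hash a stable key (user or session id) into
# a 0-99 bucket and send a fixed share to the secondary CDN. The 20% share
# and key format are illustrative.

def assign_cdn(key: str, secondary_share: int = 20) -> str:
    digest = hashlib.sha256(key.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "CDN-B" if bucket < secondary_share else "CDN-A"

sample = [assign_cdn(f"user-{i}") for i in range(1000)]
share_b = sample.count("CDN-B") / len(sample)
print(f"CDN-B share: {share_b:.0%}")   # roughly 20%
```

Because the mapping is deterministic, the warm share is stable across restarts and nodes, with no shared state required in the routing layer.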

Unified Security and Configuration

For multi-CDN to work without surprising users, security policies must be consistent across both providers:

  • SSL/TLS Certificates: Same domain, same cert on both CDNs
  • WAF Rules: Mirror your DDoS and WAF policies between providers. A bypass to CDN-B should not have weaker protection
  • Cache Headers and Directives: Both CDNs should honor the same TTL and cache rules
  • Custom Headers and Transformations: If you inject headers or modify responses, do it consistently

Figure 5: Failover system in cloud—automatic traffic rerouting

Who Should Use Multi-CDN?

Multi-CDN is ideal for:

  • Large enterprises serving global traffic where downtime has severe financial impact
  • Companies with high volumes that can negotiate favorable rates with multiple providers
  • Organizations that want to avoid vendor lock-in and maintain negotiating leverage
  • Businesses with diverse content types (streaming, APIs, static, dynamic) that benefit from specialized CDNs

Multi-CDN is more complex than single-CDN, but also more resilient and often cost-effective at scale.


Comparison: Single CDN, Bypass, and Multi-CDN

| Aspect | Single CDN Only | CDN + Bypass | Multi-CDN |
|---|---|---|---|
| Availability During CDN Outage | High downtime risk | Critical paths online | Auto-rerouted |
| Setup Complexity | Low | Medium | High |
| Operational Overhead | Low | Medium | Medium-High |
| Cost | $$ | $$$ | $$$-$$$$ |
| Performance (Normal State) | High | High | High (optimized) |
| Performance (Bypass/Failover) | N/A | Reduced (no edge cache) | Maintained |
| Security Consistency | Vendor-managed | Manual hardening needed | Must be unified |
| Time to Restore Service | Minutes to hours | Seconds (automatic) | Milliseconds (automatic) |
| Vendor Lock-In Risk | High | Medium | Low |

Table 1: Comparison of CDN resilience strategies


Designing for Your Organization

Assessment Questions

Before choosing bypass, multi-CDN, or both, ask yourself:

  1. What is the cost of 1 hour of downtime? If it exceeds $10k, invest in resilience now.
  2. Do we have geographic concentration risk? If most users are in one region where one CDN has weak coverage, diversify.
  3. What is our incident response capability? Bypass requires automated systems; multi-CDN requires sophisticated routing. Do we have the team?
  4. Is vendor lock-in a concern? If yes, multi-CDN reduces risk.
  5. What is our compliance posture? Some industries require redundancy by regulation. Build it in from the start.

Phased Implementation Roadmap

Phase 1 (Weeks 1-4): Foundation

  • Audit current CDN configuration and dependencies
  • Identify critical user journeys (auth, checkout, APIs)
  • Design origin hardening and bypass playbooks
  • Set up continuous health monitoring

Phase 2 (Weeks 5-8): Bypass Ready

  • Implement health checks and alerting
  • Build DNS failover automation
  • Harden origin server access controls
  • Test bypass in staging; verify automatic recovery

Phase 3 (Weeks 9-12): Multi-CDN (Optional)

  • Onboard secondary CDN provider
  • Replicate security and cache configuration
  • Deploy intelligent routing layer
  • Gradual traffic shift and optimization

Each phase is low-risk if executed in staging first.


The Role of Managed Services

Building and operating these resilience layers yourself is possible but demanding. It requires:

  • Deep DNS and networking expertise
  • Continuous monitoring and alerting systems
  • Incident response runbooks and automation
  • Compliance and audit trails
  • 24/7 on-call coverage for failover management

This is where specialized vendors and managed services add value. Services like Insight 42 help engineering teams:

  • Design resilient CDN architectures tailored to your traffic patterns and risk tolerance
  • Implement automated bypass and multi-CDN routing without reinventing the wheel
  • Operate these systems with 24/7 monitoring, alerting, and runbook execution
  • Optimize performance and cost by continuously tuning routing policies and cache behavior
  • Certify compliance and SLA adherence through detailed incident logging and remediation

A managed CDN resilience service typically pays for itself within one incident cycle by preventing revenue loss and reducing engineering overhead.


Next Steps: Start Your Assessment

The Cloudflare outages of November and December 2025 are not anomalies—they are signals that single-CDN dependency is a business risk, not a technical oversight.

You can take action today:

  1. Run a scenario test: Imagine your primary CDN goes offline right now. Could your engineering team route traffic to an alternate path in under 5 minutes? If not, you have a gap.
  2. Calculate your downtime cost: Quantify what one hour of unavailability means to your business in lost revenue, SLA penalties, and reputational damage.
  3. Engage a resilience partner: Schedule a consultation to walk through bypass and multi-CDN options tailored to your infrastructure and risk profile.

We offer a free CDN Resilience Assessment where we review your current architecture, simulate a CDN failure, quantify business impact, and outline a concrete 12-week roadmap to eliminate single points of failure.

No vendor lock-in. No long contracts. Just pragmatic engineering that keeps your services online.

For more information contact us

Related Articles:
[1] The Sovereignty Series (Part 1 of 5): The Myth of the Impenetrable Fortress
[2] Microsoft Fabric: (Part 2 of 5)
[3] Microsoft Fabric: (Part 3 of 5)
[4] Cloud Adoption Migration