Trigger warning: this is part rant, part genuine exploration, and entirely my opinion. As usual. Also: research-heavy. I wanted this to be informed.
I’ve been thinking about this one for a while.
In a previous post I argued that not everyone who writes code is an engineer, and that the title carries expectations. In another I complained about the lack of standards in an industry that rewards speed and hype over depth. And more recently I got on my soapbox about fundamentals mattering — about how the people who actually understood the machines they worked on are slowly dying off, and what we’re being left with instead.
And more importantly, I posted about the need to encourage and train juniors, not alienate the next generation of engineers, a post I ended with a question: is apprenticeship the answer?
Those four posts point to a logical conclusion. And it’s this one.
Maybe software engineering should be formally regulated. Licenses. Ethical oaths. Personal liability. Apprenticing. The whole deal.
I’m not sure I’m actually for it. But I’m increasingly struggling to argue against it.

How other professions got here
Here’s the thing: medicine, civil engineering, and accounting were all once as lawless as software development is today. And they only got regulated for one of two reasons — people died, or people lost money they’d entrusted to someone else.
Let’s take them in turn.
Medicine: quacks, patent medicines, and 107 deaths
In 19th century America, quack doctors outnumbered legitimate physicians 3 to 1. Anyone could hang a shingle. Diploma mills sold medical degrees to people who had never treated a patient. Patent medicines laced with morphine and cocaine were marketed openly to children. The professional codes and licensing boards we take for granted today didn’t emerge from good intentions — they emerged from body counts.
In 1937, Elixir Sulfanilamide — a liquid sulfa drug dissolved in diethylene glycol, a chemical used in antifreeze — killed 107 people, many of them children. Selling a toxic drug wasn’t even illegal at the time. Then came thalidomide, marketed in 46 countries as a sedative for pregnant women and causing severe birth defects in up to 20,000 babies. The Flexner Report of 1910 surveyed all 155 medical schools in the US and Canada and found most beyond repair — within 25 years, more than half had closed. A 2025 NBER study found that where bad medical schools shut down, infant mortality in nearby areas fell by 8%. The incompetent doctors being trained were literally killing babies.
Engineering: bridges fell, then licences followed
The Quebec Bridge collapse of 1907 killed 75 construction workers because a consulting engineer named Theodore Cooper increased the main span without requiring anyone to recalculate the structural loads. Warning signs had been visible for weeks. His telegram to stop adding load arrived too late.
Wyoming introduced the first US professional engineering licence that same year. By 1947, all US states had PE licensing laws. The 1981 Hyatt Regency walkway collapse — 114 killed when a seemingly minor design change doubled the load on a connection that could only handle 30% of building code requirements — established the principle that the engineer of record cannot delegate away ultimate responsibility. The PE who stamped those drawings lost his licence. Personal. Consequences.
In Canada, they went further. In 1925, inspired directly by the Quebec Bridge disaster, every engineering graduate began receiving an iron ring worn on the little finger of the working hand. It physically rubs against every document you sign. The ceremony includes a promise to “not henceforward suffer or pass, or be privy to the passing of, Bad Workmanship or Faulty Material.” The obligation is literally on your skin.
There is no software equivalent of that iron ring. There never has been.
Accounting: the one that proves it’s not just about death
This is where the argument gets interesting — because if you think professional regulation only applies to fields where people physically die, accounting demolishes that excuse entirely.
The chartered accountant as a formal designation traces to Scotland in 1854, when 49 Glasgow accountants petitioned Queen Victoria for a royal charter. Their argument was simple: financial competence is a matter of public trust. People rely on accurate numbers. If you’re wrong — or worse, if you lie — you destroy livelihoods. The Institute of Chartered Accountants in England and Wales was established in 1880. New York became the first US state to license CPAs in 1896.
And even with all that formal structure in place, the profession’s history is punctuated by scandals that forced it to tighten further. Prior to the 1929 stock market crash there was little substantive regulation — the crash and the ensuing Securities Acts of 1933 and 1934 formalised the CPA’s role in public markets. The McKesson & Robbins fraud of 1938 — where a company fabricated $19 million in inventory that auditors simply never verified — prompted the most extensive SEC investigation of the audit profession to that point. Then came Enron and WorldCom, which destroyed Arthur Andersen — one of the five largest accounting firms in the world — and produced the Sarbanes-Oxley Act, imposing criminal penalties for knowingly certifying false financial reports.
The pattern is identical to medicine and engineering: self-regulation worked until it didn’t, scandal forced the issue, and the profession that resisted accountability found itself regulated from the outside.
What I find most relevant about accounting is the underlying principle of what the licence represents. It doesn’t just say “this person won’t steal from you.” It says “this person has demonstrated competency in the foundational concepts of their field, and if they sign off on something, they are personally accountable for that sign-off.” You cannot become a chartered accountant without demonstrating you understand double-entry bookkeeping, audit principles, and financial reporting standards. These are non-negotiable baseline requirements, because the people whose money you’re handling deserve to know you understand the basics of what you’re doing.
There is no equivalent requirement in software.
And software is increasingly where not only people’s money lives, but also their privacy, their safety, and their trust.
And we let anyone with a computer and access to the internet or some AI subscription become a “software engineer.”

A personal example that still makes me angry
I was at a conference last year and sat through a talk by a fintech founder explaining how their platform had been compromised and their customers had lost millions.
As he walked through what went wrong, I felt my blood pressure rise — because everything he described was a solved problem. Lack of atomic operations. Missing idempotency. Race conditions in financial transaction processing. No proper rate limiting. These aren’t obscure academic concepts buried in a research paper somewhere. They are fundamental requirements of any financial system, as well-established as double-entry bookkeeping. Every serious textbook on distributed systems covers them. Every engineer who has spent meaningful time building anything that moves money should understand what a race condition is and why it matters.
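To make that concrete, here’s a minimal sketch of the most basic of those bugs, a race condition in a balance update, and its textbook fix. The schema, function names, and SQLite backend are my own inventions for illustration; I have no idea what his actual stack looked like.

```python
import sqlite3

# Hypothetical accounts(id, balance) schema, purely for illustration.

def unsafe_withdraw(conn: sqlite3.Connection, account_id: int, amount: int) -> bool:
    # BUG: the classic read-modify-write race. Two concurrent requests
    # can both read the same balance, both pass the check, and both
    # write, double-spending the balance or overdrawing the account.
    row = conn.execute(
        "SELECT balance FROM accounts WHERE id = ?", (account_id,)
    ).fetchone()
    if row is None or row[0] < amount:
        return False
    conn.execute(
        "UPDATE accounts SET balance = ? WHERE id = ?",
        (row[0] - amount, account_id),
    )
    conn.commit()
    return True

def safe_withdraw(conn: sqlite3.Connection, account_id: int, amount: int) -> bool:
    # Fix: the check and the write happen in one atomic statement, so
    # no interleaving of concurrent requests can spend the same balance twice.
    cur = conn.execute(
        "UPDATE accounts SET balance = balance - ? "
        "WHERE id = ? AND balance >= ?",
        (amount, account_id, amount),
    )
    conn.commit()
    # Exactly one row updated means the debit actually happened.
    return cur.rowcount == 1
```

Idempotency has an equally unglamorous fix: store each transfer under a client-supplied idempotency key with a unique constraint, so a retried request becomes a no-op instead of a second withdrawal. None of this is exotic. It’s chapter-one material.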
But this founder didn’t. And yet he built a fintech product, deployed it to real customers, and handled (and it bears repeating, LOST) real money.
The talk was called “Lessons Learned.”

That framing is what made me furious. “Lessons learned” implies the lessons weren’t already available. They were. In books. In documentation. In every serious systems engineering course. This wasn’t a failure of discovery — it was a failure of competence and, more fundamentally, a failure of professional obligation. His customers trusted him with their money. He didn’t understand the basics of the domain he was operating in. And when it went wrong, the industry’s response was to give him a speaking slot.
A chartered accountant who didn’t understand debits and credits would lose their licence. A structural engineer who couldn’t calculate load-bearing capacity would be struck off. There would be no “lessons learned” keynote. There would be consequences.
In software, there are no consequences. Just pivot stories and conference talks.

The software profession’s embarrassing attempt at self-regulation
This part I find genuinely sad.
In 1999, the IEEE and ACM jointly produced the Software Engineering Code of Ethics and Professional Practice. Eight principles. Reads beautifully. Entirely voluntary. No enforcement mechanism. Known mainly in academic circles. Clause 1.03 says software engineers shall “approve software only if they have a well-founded belief that it is safe.” Nobody has ever lost anything for violating it.
Then in 2013, the NCEES introduced an actual PE exam for software engineering — a real, formal, professional engineering licence pathway. The first of its kind in the US.
Do you want to guess how many candidates sat for it over its entire lifespan?
Eighty-one. Total. Across five administrations. Over six years.

They discontinued it in 2019 — not because it was a bad idea, but because demand was so low it couldn’t be justified. For context: a single sitting of the PE Chemical Engineering exam attracted more candidates than the software engineering exam did across its entire existence.
The ACM formally declared itself opposed to software engineer licensing in 1999, calling it “premature.” A follow-up task force co-chaired by John Knight and Nancy Leveson was more honest about why: no agreed body of knowledge, a field that changes too fast for stable licensing criteria, and — the killer — an industrial exemption in most US states that would mean “virtually everyone who designs or writes safety-critical software would be exempt” anyway.
We had the opportunity to build the iron ring moment for software. We collectively shrugged and went back to shipping features.
The floor dropped before AI arrived
There’s a narrative that’s been popular in tech for a long time: you don’t need a degree. Gates dropped out. Dell dropped out. Zuckerberg dropped out. And for a while, I was largely okay with this idea.
Because here’s what was actually true in that era: breaking into the industry without a degree was genuinely hard. The people who did it worked obsessively to prove themselves. They learned deeply, contributed to open source, built things out of curiosity, stayed up at 2am not because a deadline forced them to but because they couldn’t stop thinking about the problem. They weren’t bypassing the difficulty — they were taking the harder road because they loved the craft. Dev salaries got as high as they did precisely because the skill was hard to acquire and required real dedication to develop. The market was reflecting something real.
What’s changed isn’t the existence of an alternative path. It’s the reason people are on it.
Bootcamps arrived and compressed the timeline. Then AI arrived and compressed it further. And somewhere along the way, software development got rebranded as a fast-track to a six-figure salary — a financial pivot, not a vocation. The forums filled with people asking which stack has the highest starting salary, not which problem is most interesting to solve. That’s a different person, with different motivations, building different things — and building them with far less hard-won understanding than the self-taught engineers who came before them.
And that’s the thing that people seem to miss.
The self-taught developer who grinded for years to prove themselves without a degree and the bootcamp graduate who wants to ship a fintech app after twelve weeks are not the same argument. Treating them as equivalent is what got us here.

And honestly, the motivation question runs even deeper than qualifications — but that’s an argument for another post.
Now add vibe coding to all of this
I’ve written before about my dislike of vibe coding — and just to reiterate, my issue has never been with AI-assisted development. It’s with the vibe. The wholesale surrender of understanding.
When Andrej Karpathy coined “vibe coding” in early 2025 he was, to be fair, talking about throwaway projects. But within months, Y Combinator reported that 25% of their Winter 2025 batch had codebases that were 95% AI-generated. Production codebases. For funded startups. In fintech, health, infrastructure.
Forty-six percent of code written by GitHub Copilot users is now AI-generated — rising to over 60% for Java developers. And a 2025 code quality study found that AI co-authored code contains 2.74 times more security vulnerabilities than human-authored code.
So we have: no professional standards, no licensing, no ethical oath, no accountability mechanism — and now an accelerating shift toward code that no individual human fully wrote, understands, or is responsible for.

The conference founder who didn’t know what a race condition was? At least he wrote his own bugs. Now imagine that same system being built by someone who didn’t write the code and didn’t understand it.
Who exactly is accountable when that code fails?
The question isn’t hypothetical. The Therac-25 radiation machine killed at least three patients in the 1980s due to a race condition. Boeing’s MCAS software — reportedly outsourced to engineers paid as little as $9 an hour — killed 346 people. The British Post Office’s Horizon accounting system sent 236 innocent people to prison — the largest miscarriage of justice in UK legal history. The CrowdStrike update of July 2024 crashed 8.5 million machines in 78 minutes, causing an estimated $5.4 billion in losses. Volkswagen engineers deliberately wrote software to cheat emissions tests in 11 million vehicles — not incompetence, but the complete absence of a professional ethical obligation powerful enough to override a corporate instruction.
In the Hyatt Regency collapse, the engineer of record lost his licence. After the 737 MAX crashes? Not one software engineer faced professional consequences. The accountability dissolved into nothing.
The world is trying to regulate AI. It’s regulating the wrong thing.
There’s a quote that has been circulating recently that stopped me cold. It’s from an IBM internal training document, dated 1979:
“A computer can never be held accountable. Therefore a computer must never make a management decision.”
The original document was reportedly destroyed in a flood. But the idea survived because it’s simply true. A machine has no moral agency. It cannot understand consequences. It cannot be punished, shamed, or struck off. Accountability requires a person.
IBM understood this 46 years ago. The world’s AI governance frameworks seem to have forgotten it.
Take the EU AI Act — the most comprehensive AI regulation ever written, in force since August 2024. It is genuinely ambitious. It bans certain AI practices outright, categorises systems by risk level, and imposes heavy obligations on high-risk AI in healthcare, law enforcement, and critical infrastructure. Fines reach €35 million or 7% of global annual turnover.
Every single obligation in it falls on organisations. Providers. Deployers. Importers. Distributors. The conformity assessment process produces a CE mark on the product. There is no licence for the individual who built it. No qualification gate. No personal accountability. Article 4 requires companies to ensure staff have “sufficient AI literacy” — but the European Commission has explicitly clarified that organisations are not required to make their staff obtain formal certifications. It’s an internal training obligation, not an external licensing requirement.
The NIST AI Risk Management Framework — the US equivalent — structures everything around organisations. ISO/IEC 42001, the international AI management standard, certifies companies, not people, on three-year cycles. IEEE’s Ethically Aligned Design and the newer IEEE 7000 standard for ethical system design address organisational process. Even the AI safety movement — Anthropic, OpenAI, DeepMind — focuses on model-level risks: alignment, dangerous capabilities, loss of human control. All important work. All still asking about the machine and its handlers, not about whether the engineers building it are qualified and personally accountable.
The pattern reveals a structural blind spot. Academics call it the “responsibility gap”: when an automated system causes harm, it cannot be held accountable; but the chain of developers, deployers, and managers is so diffuse that no individual human is clearly accountable either. You are left with a gap where responsibility should be.
Every AI governance framework in existence tries to fill that gap by assigning it to an organisation. Compliance teams. Risk management processes. Documentation requirements. CE marks on products. And all of that is genuinely better than nothing.
But organisations can’t be struck off. They can be fined, sued, or wound up — but they feel none of it the way a person feels losing their licence to practise. The IBM slide didn’t say “a company can never be held accountable.” It said a computer can’t. The implied answer was always a human being — a specific, named, qualified individual whose name is on the record and whose professional future depends on getting it right.
The world is spending enormous energy regulating the machine half of IBM’s equation. Nobody is seriously tackling the human half.

So why not just regulate it?
Here’s where I genuinely get stuck, because the objections are real, and I’m not gonna pretend this is an easy problem.
Software is not a bridge. The field is enormous and changes faster than any licensing body can track. The ACM’s own task force raised the question that nobody has answered: who exactly gets licensed? Requirements writers? Designers? Testers? Managers? Requiring a licence to write any code at all would be absurd — and the line between “critical” and “non-critical” software is blurrier than it sounds. Some cases are obvious: a game developer probably doesn’t need a licence. Someone working in fintech or healthtech probably does. But even that isn’t always true. The line in the sand is not clear.
What about open-source? Heartbleed — a catastrophic vulnerability that left roughly 500,000 servers exposed — existed in OpenSSL, maintained by four volunteers on essentially no budget. Do we license volunteers? And international borders make enforcement nearly meaningless.
It’s also worth noting that software isn’t unique in being unregulated at the high-stakes layer. Management consulting — McKinsey, Bain, BCG — requires no licence, no professional body with teeth, and no oath, despite routinely influencing billion-dollar decisions and public institutions. Financial influencers give investment advice to millions of followers with zero formal accountability. Both fields are now attracting regulatory scrutiny precisely because the harm has become too visible to ignore. The pattern is familiar: unregulated until it isn’t.
Accounting provides a useful cautionary tale too. After Enron and WorldCom, Sarbanes-Oxley imposed such onerous compliance costs that it arguably pushed companies toward staying private longer and shifted activity to less regulated jurisdictions. And Arthur Andersen’s auditors largely went along with Enron’s schemes despite being licensed professionals with explicit ethical obligations. Individual licensing doesn’t automatically fix institutional rot. The worst failures — in software, accounting, anywhere — tend to be organisational, driven by incentives that override individual professional judgment.
But here’s what I actually believe
I’ve always been against people who do a job purely for the money, whether it’s the plumber fixing things in my house, the waitron at my favourite restaurant, or the financial manager making sure I can retire comfortably. If the salary is the only reason you’re here, I don’t want your help building anything that matters. You have to enjoy your work, and more importantly, take pride in it.
And I think that’s the core of what’s missing right now — not just regulation, but professional identity. The sense that this is a craft, not a commodity. That what you build matters, and that you are personally responsible for it.

Medicine has the Hippocratic Oath. Engineers in Canada have the iron ring. Both are largely symbolic — neither has direct legal force. And yet people describe those moments as formative. Something shifts in how they think about what they do. There have been multiple proposals for a Hippocratic Oath for software engineers. Microsoft’s Brad Smith called publicly for a “Hippocratic Oath for coders” in 2018. Robert C. Martin’s “Programmer’s Oath” opens: “I will not produce harmful code.” None of it has gone anywhere. Because without teeth, an oath is just words on a website.
I think a tiered approach is the realistic path — but it doesn’t need to look like medicine or a PE licence. The closest model is one I’ve already asked about, without fully answering.
The plumber in your wall is more regulated than the engineer writing code for your bank.
Most people don’t realise how structured the trades actually are. To become a licensed plumber in most US states, you start as an apprentice — four to five years working under a licensed journeyman, learning on the job while someone qualified watches over your shoulder. Then you sit a licensing exam to become a journeyman yourself. After more experience you can work toward master plumber status — the level at which you can sign off on work independently, run a crew, or start a business. In most states, it is illegal to do plumbing work without a licence. Not frowned upon. Illegal. Electricians follow the same tiered structure.
No university degree required. But you earn the right to work unsupervised. You don’t start there.
Software development already has an informal version of this. Junior developers get code reviewed. PRs need approval. In any functioning team, a senior signs off before anything critical goes to production. The structure exists. It’s just invisible, unenforceable, and entirely at each company’s discretion to maintain — or abandon the moment a deadline looms or a founder decides they can vibe-code their way to launch.
What if we formalised it, just for the domains where it matters? Not a PE licence for every developer. Not a medical board for every coder. But a tiered structure for software in regulated industries — healthcare, finance, aviation, critical infrastructure — where you cannot deploy independently until you have demonstrated competency under supervision, and where a licensed senior engineer puts their name on what ships to production. The way a master plumber puts their name on the pipes in your wall.
The trades didn’t need a revolution to get there. They needed the recognition that some work is consequential enough to demand that a qualified person is accountable for it. Not the company. Not the process. Not the AI that generated it. A person.
That’s all I’m really asking for.

One last thought
The conference founder with his “lessons learned” talk isn’t a villain. He’s a symptom. He built something for a regulated industry without understanding that industry’s foundational requirements, deployed it to real customers, caused real harm, and then got rewarded with a speaking slot — because the software industry has no framework for distinguishing “brave failure” from “preventable negligence.”
That distinction matters. In every other field that handles people’s health, money, or safety, there’s a mechanism — imperfect, but present — for asking: did you know what you were doing when you did it? And if the answer is no, there are consequences.
We don’t have that mechanism in software. And we’re building the infrastructure of the modern world.
IBM knew in 1979 that the machine can never be held accountable, and that therefore a human must be. We’ve spent 46 years building increasingly sophisticated machines and increasingly sophisticated frameworks for managing them organisationally, and we still haven’t answered the second half of that sentence.
That’s the part I can’t argue my way out of.

I hold a B.EngSci in Computer Systems from Stellenbosch University, so there is probably some bias here. But I cared deeply about this profession long before I earned that degree, and I’ve seen enough to have an opinion on what it should mean to be a software engineer. And I’m increasingly convinced that bias is correct. And if you want to accuse me of gatekeeping? You are correct. That’s exactly what I’m proposing. I can’t help being convinced it’s what we need. I just don’t know how…
