SEIKOURI Inc.

Ethics Theater - How Responsible AI Became a Brand Layer and a Legal Risk

Markus Brinsa | February 10, 2026 | 10 min read


“Ethical AI” is no longer just marketing jargon – it’s a battleground of compliance, oversight, and ethics. In practice, it means meeting hard legal and social requirements: preventing bias and harm, ensuring transparency, and respecting data rights. It isn’t a PR slogan or an optional add-on. True “ethical AI” requires embedding fairness, safety, and accountability into how systems are built and used – not just slapping “guidelines” on slick new products. In the EU, for example, the AI Act categorically bans manipulative or surveillance-based AI and forces high-risk systems (from hiring tools to biometric ID) to undergo rigorous risk management, documentation, and oversight. In the US, federal leaders have issued a voluntary “AI Bill of Rights” with five principles (safe and effective systems, fairness, privacy, notice/explanation, and human fallback), and NIST’s AI Risk Management Framework offers detailed (but non-binding) guidance to build trustworthiness into AI workflows. Companies that treat “ethical AI” as mere window dressing face real consequences: regulators impose steep fines under the GDPR or the AI Act for misuse, and courts are beginning to allow vendors and users to be held directly liable for biased outcomes (as in the recent US case of Mobley v. Workday).

We must also distinguish “ethical AI” from “ethically sourced AI.” Ethical AI covers how a model behaves and how it’s governed – fairness, transparency, safety, and accountability in deployment.  Ethically sourced AI focuses on the origins of the data and code. It’s analogous to “fair trade” – meaning training data gathered with clear consent, legal licensing, and respect for privacy.  For example, ethical sourcing requires data provenance: if people’s photos, writing, or code are used, it should be done with permission or a license, not scraped surreptitiously from the web.  It requires consent: no hidden harvesting of individuals’ data, and special care with sensitive content (privacy laws such as GDPR enforce this).  It requires transparency on licensing: copyrighted texts or images should not be used for training without prior clearance.  In short, ethically sourced AI concerns how you obtain the inputs to your models, while ethical AI covers how the model behaves and is managed (fairly, explainably, and without causing harm).  Both matter.  A responsibly behaving model built on illicit data is not “ethical,” and vice versa – proper data sourcing alone doesn’t guarantee a fair or safe outcome.  (For instance, Clearview AI infamously violated the letter and spirit of ethical sourcing by scraping 3 billion face images from social media without consent, and its intrusive behavior underlines how poor data ethics can derail a business.)

Europe’s Heavy Hand: The AI Act’s Mandates

In Europe, “ethical AI” is rapidly becoming law. The EU’s landmark AI Act (voted into law in 2024) does more than nudge companies – it requires them to prove trustworthiness. AI systems are bucketed by risk. The worst systems – untargeted real-time facial recognition in public spaces, social scoring, manipulative tools – are flat-out prohibited. High-risk applications (think automated hiring scans, credit underwriting, medical devices, autonomous driving) face strict controls: providers must conduct conformity assessments, implement AI-specific risk management, log decision-making processes, and demonstrate fairness and accuracy. Transparency obligations apply even to “limited risk” tools such as chatbots or deepfakes (users must be informed they’re interacting with AI). Meanwhile, “minimal risk” applications (like video game AI or basic spam filters) are mostly unregulated – though the rapid emergence of general-purpose generative AI has pushed even chatbots and image tools under scrutiny.

Crucially, the EU AI Act is enforceable: violators face fines of up to 7% of global turnover for outright illegal AI and up to 3% for high-risk compliance breaches.  The Act also extends extraterritorially: any company (even outside Europe) that places a high-risk system on the EU market must comply. Europe is forcing AI vendors to embed ethics into product lifecycles.  ISO and standards bodies are stepping in to help: for example, ISO/IEC 42001:2023 – the first international AI management system standard – lays out a certification framework so companies can prove they have processes for risk assessment, impact studies, data governance, human oversight, and continuous monitoring.  Swiss regulators explicitly cite ISO 42001 as a means to meet EU regulations.  Together, the EU Act and ISO frameworks turn “ethical AI” from a slogan into concrete obligations: safe design, bias testing, documentation, and audit trails.

America’s Voluntary Guardrails: Guidelines and Frameworks

In the U.S., binding AI laws remain scarce, but guidelines and federal enforcement doctrines are evolving rapidly. The White House’s AI Bill of Rights (2022) is not a law, but a principles document aimed at shaping policy. It urges developers to build systems that are safe and effective, protect against algorithmic discrimination, safeguard data privacy, explain how decisions are made, and offer human fallback options.  In practice, many of these overlap with civil rights and consumer protection laws – for example, employment laws already ban hiring discrimination, so an AI résumé screener must not discriminate by gender or race.  Unique to the Bill of Rights is its emphasis on “notice and explanation” of automated decisions, echoing emerging demands for transparency.  Crucially, the document is non-binding, meant to influence federal rulemaking or sector guidelines rather than impose new criminal penalties.

Alongside it, NIST’s AI Risk Management Framework (AI RMF) provides an American blueprint for AI governance. Published in 2023, the RMF is a voluntary, consensus-based guide to embedding trust (“security, reliability, fairness, transparency”) into AI development. It is meant for all sectors (from finance to healthcare to defense) and aligns with other standards (like NIST’s cybersecurity framework or ISO 42001). Unlike regulations, however, the AI RMF has no legal teeth – it is a tool for companies to follow best practices. In short, in the U.S., “ethical AI” largely means complying with existing laws (civil rights, consumer protection, privacy) and ad hoc executive-branch guidance. State and city laws are also proliferating (for example, some U.S. cities have banned government use of facial recognition). But there is no single federal mandate like Europe’s – yet. In fact, some legal scholars argue that companies today face a “liability squeeze”: U.S. courts have begun holding AI vendors and users accountable for biased outcomes under general laws (see Mobley v. Workday, where a court allowed discrimination claims against a vendor to proceed under employment law) even as vendor contracts seek to disclaim all responsibility. That dissonance means U.S. companies can’t simply “outsource” ethics – they must police their AI or risk being the ones sued for discrimination.

Beyond laws, industry groups and nonprofits publish “AI ethics principles,” and some firms adopt internal review boards or toolkits. But as a Human Rights Watch report and others note, implementation is patchy. For example, the EU’s new Digital Services Act (DSA) requires online platforms to moderate harmful content with AI, but firms like Meta have been caught removing more content than regulators intended, unevenly suppressing political speech. In short, in practice, American “ethical AI” is a mix of traditional legal compliance (don’t lie, steal data, or discriminate) and voluntary use of standards such as NIST and ISO for additional governance.

From Labs to the Street: Big Models vs. Legacy Systems

“Ethical AI” challenges cut across the new and the old. At the bleeding edge are the giant foundation models – OpenAI’s GPT, Google DeepMind’s systems (LaMDA, PaLM), Anthropic’s Claude, etc. These models boast breakthroughs in language, vision, and decision-making, but they also amplify ethical quandaries. They “hallucinate” answers, sometimes giving confident falsehoods; they leak copyrighted text or memorized data; and they can inadvertently generate hate or bias, despite content filters. The data that feeds them is vast and poorly documented: much of it scraped from the web in bulk, raising concerns about consent and licensing. (The recent uproar over OpenAI’s new voice model “Sky,” which sounded uncannily like Scarlett Johansson without her consent, shows how even flashy demos can trip over ethics and IP rules.) Internally, the tech industry has seen turmoil as these issues come to a head. OpenAI, which branded itself a responsible AGI pioneer, faced an abrupt leadership purge and resignations when board members and safety researchers clashed with executives focused on rapid product rollout. Google’s AI lab likewise saw two AI ethics co-leads, Timnit Gebru and Margaret Mitchell, pushed out amid controversies over censorship and biased research. Those scandals underline a reality: building “ethical” large-scale AI in secretive corporate labs is fraught. Checking for bias or privacy issues at scale is technically challenging, and there’s no formal regulatory approval process for releasing a new model (yet).

By contrast, traditional enterprise AI – the often rule-based or narrower machine-learning systems used in HR, finance, marketing, and surveillance – has a longer history of scrutiny, but still plenty of problems. We now know many “profiling” systems in hiring, credit, policing, and beyond can encode societal biases unless carefully managed. A famous case is Amazon’s now-abandoned résumé-screening AI: it “taught itself” to penalize women’s résumés because past applicants were mostly men. Even after engineers tweaked it, Amazon quietly shelved the project, realizing the model could always find new proxies for gender and entrench discrimination. That story illustrates a key point: ethical AI requires both high-quality data and sound processes. Enterprises have built AI models on historical data (résumés, loan records, criminal records), often without scrutinizing the biases in those datasets. Investigations by journalists and regulators highlight the real stakes. In mortgage lending, for example, The Markup found that Black and Latino applicants with similar finances were 40–80% more likely to be denied home loans than white applicants. A U.S. Senate bill proposal cited this research, noting that opaque credit algorithms “expose the public to major new risks from flawed or biased algorithms.” In employment, a 2026 lawsuit against the AI vendor Eightfold alleges that its candidate-ranking scores function like “consumer reports” under U.S. law, meaning applicants should have been able to access and challenge them. In surveillance, Clearview’s case is emblematic: for years, it sold face-recognition technology built on scraped Facebook and Twitter photos, without user consent, until regulators in Europe and elsewhere deemed it illegal and invasive. These examples show that in every sector – hiring, lending, policing, healthcare – AI systems must be vetted at every step or risk becoming costly ethics fiascoes.

Corporate Ethics Fails - Promises vs. Practice

When companies trumpet “ethical AI,” the real-world follow-through has been spotty. Some companies established internal ethics boards, only to see them collapse. Google’s stumbles are a cautionary tale: its ethics team was effectively hamstrung when Gebru and Mitchell were removed amid debates over research freedom. OpenAI’s experiment with an external board ended in resignations and secrecy, undermining trust. Even companies with glossy ethics statements have gotten burned: IBM and Microsoft draw some praise for their transparent policies, but others have been accused of “ethics washing” – using grand principles while avoiding real oversight. For instance, Clearview AI publicly insisted it was building a benign tool for law enforcement, yet was secretly flouting privacy norms until multiple countries intervened. These failures share common themes: lack of independent review, hidden data practices, and a tendency to prioritize rapid product deployment over caution. By contrast, firms that succeed with ethical AI integrate oversight into their operations (e.g., regular algorithm audits, diverse testing groups, clear accountability channels).

In practice, the biggest hurdles are often institutional. Technical teams may not know how to measure “fairness”; legal teams face novel questions (e.g., is an AI screening tool a “consumer report”?); and risk managers are playing catch-up. Contracts with AI vendors usually favor the vendor: a recent legal analysis found that 88% of AI suppliers cap their liability to small fees and disclaim any compliance warranty. Meanwhile, regulators are signaling that companies should not rely on such shields. Under the Mobley precedent, both the employer and its AI vendor were held responsible for the decisions of a biased hiring system. Yet the vendors’ fine print often forces the buyer to defend the vendor in any discrimination lawsuit. The result is a “liability squeeze” – companies must assume responsibility for systems they barely understand. The practical upshot: businesses must fortify their AI governance as a legal defense. Industry lawyers now advise implementing robust bias-testing protocols, explainability audits, human-in-the-loop controls, and detailed impact logs – essentially embedding legal compliance into the technical lifecycle.
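What a basic bias test looks like in practice can be surprisingly simple. One common starting point is the “four-fifths rule” of thumb from U.S. employment-selection guidelines: if any group’s selection rate falls below 80% of the highest group’s rate, the outcome warrants review. A minimal sketch (the group labels, sample numbers, and helper names here are illustrative assumptions, and the 0.8 threshold is a screening heuristic, not a legal verdict):

```python
from collections import Counter

def selection_rates(decisions):
    """Per-group selection rates from (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions):
    """Each group's selection rate divided by the highest group's rate.
    Under the four-fifths rule of thumb, ratios below 0.8 warrant review."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: group A passes 60 of 100,
# group B passes 30 of 100.
outcomes = ([("A", True)] * 60 + [("A", False)] * 40
            + [("B", True)] * 30 + [("B", False)] * 70)

ratios = adverse_impact_ratios(outcomes)
flagged = {g for g, r in ratios.items() if r < 0.8}
# Here group B's ratio is 0.5, so it is flagged for review.
```

A ratio below the threshold is a signal to investigate, not proof of discrimination – which is exactly why lawyers pair tests like this with explainability audits and human review.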

No magic wand exists: turning ethical intent into action takes resources and tough trade-offs. The complexity of modern AI means surprises are inevitable – models can drift, data sources can change, and laws can lag. The companies that successfully navigate this build cross-functional teams (combining legal, compliance, data science, and ops) to constantly monitor outcomes. They demand transparency from vendors (even if that means difficult negotiations on audit rights) and stay abreast of evolving standards (like the new ISO 42001 or sector-specific regs). They also communicate clearly with the public by labeling AI-generated content, documenting the reasons for specific decisions, and providing appeal processes. All these measures show that “ethical AI” isn’t a checkbox – it’s an ongoing program of governance and improvement.

Ethically Sourced vs. Ethical AI (Summary)

It is worth reiterating: ethical AI ≠ ethically sourced AI, though both are intertwined. Ethical AI focuses on how AI systems act and are managed: mitigating bias, safeguarding rights, ensuring explainability, and taking responsibility for outcomes. Ethically sourced AI is about the provenance of your data and models: using only data collected with consent or proper licensing, respecting privacy and IP laws, and maintaining transparent supply chains for all AI components. An “ethical” AI system must satisfy both domains.  You cannot call an AI model ethical if it was trained on stolen or private data, even if its outputs are benign – and conversely, a model built on well-sourced data can still act unethically without the right safeguards.  In practice, leading organizations are recognizing data ethics and model ethics as two halves of a whole.  They insist on data lineage documentation as strongly as fairness tests, and on human oversight as strongly as legal compliance.
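In code, “data lineage documentation” can start as a structured provenance record attached to every training dataset, checked before the data enters a pipeline. A hypothetical sketch (the field names and red-flag checks are illustrative, not any standard schema):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DatasetProvenance:
    """Minimal provenance record for one training dataset."""
    name: str
    source: str                 # where the data came from
    license: str                # e.g. "CC-BY-4.0", "proprietary, licensed"
    consent_basis: str          # e.g. "opt-in", "contract", "public-domain"
    contains_personal_data: bool
    collected_on: str           # ISO date
    notes: str = ""

    def compliance_gaps(self):
        """Flag obvious red flags before the dataset enters training."""
        gaps = []
        if self.license.strip().lower() in ("", "unknown"):
            gaps.append("no documented license")
        if self.contains_personal_data and \
                self.consent_basis.strip().lower() in ("", "unknown"):
            gaps.append("personal data without documented consent basis")
        return gaps

# Hypothetical dataset with exactly the problems discussed above:
# scraped content, no license, no consent trail.
record = DatasetProvenance(
    name="resume-corpus-v2",
    source="scraped job boards",
    license="unknown",
    consent_basis="unknown",
    contains_personal_data=True,
    collected_on="2025-06-01",
)
gaps = record.compliance_gaps()          # two gaps flagged
audit_entry = json.dumps(asdict(record)) # serializable for an audit trail
```

The point is less the schema than the gate: a dataset with open `compliance_gaps()` never reaches training, and the serialized record gives auditors the lineage trail that frameworks like ISO 42001 expect.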

The Bottom Line

“Ethical AI” today is not a buzzword but a promise backed by law and risk. Europe is already enforcing it through strict regulations, while in the U.S., companies face court rulings and evolving guidance. The hype of AI innovation collides with the hard realities of bias, privacy, and accountability. When promises of ethics fail, we see costly failures (from Cambridge Analytica and Clearview to Amazon’s discarded recruiter and Google’s silenced ethicists).  The companies that navigate these challenges do so by internalizing ethics: using standards such as NIST and ISO as blueprints, redesigning processes for transparency, and being prepared to adjust course when problems arise. In the end, truly ethical AI means putting real guardrails around AI – not just in glossy statements but in every line of code, contract, and corporate decision.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2026 Copyright by Markus Brinsa | SEIKOURI Inc.