Abstract
As Artificial Intelligence becomes integral to human creative processes, “centaur creators” – human-AI collaborations – raise complex questions around authorship, bias, and accountability. These teams enhance productivity and innovation, but the AI components frequently reproduce the structural biases embedded in their training data (Agrawal, Gans and Goldfarb, 2018; Eckert, 2023). When such biased outputs form part of a creative or inventive work, it becomes necessary to ask: who deserves credit, and who bears responsibility?
This paper argues that, even when AI plays a significant generative role, legal and ethical attribution must remain with the human collaborator. Existing intellectual property (IP) frameworks deny authorship or inventorship to non-human agents (U.S. Copyright Office, 2023; Thaler v. Vidal, 2022), but these frameworks require refinement to address the nuanced realities of AI-assisted creation. Through case studies – including the Théâtre D’opéra Spatial copyright denial and the DABUS patent litigation – we examine how biased creativity in centaur teams disrupts current attribution doctrines.
Targeted reforms are proposed: clarifying thresholds for human authorship, mandating disclosure of AI involvement, and embedding bias mitigation into standards for IP attribution. The paper ultimately advances a human-centred model of responsibility—one that ensures collaboration with machines does not erode the moral and legal foundations of authorship.
INTRODUCTION
Artificial Intelligence (“AI”) has evolved from a passive tool to an active participant in fields traditionally defined by human creativity. In so-called “centaur” arrangements, human users frame problems and objectives, while AI systems offer generative outputs based on patterns in training data (Hao, 2023). These hybrid collaborations raise a pressing question: when bias is embedded in the final product, who is the author – or even the accountable party?
AI systems like DALL·E 2 and GPT-4 have been documented reproducing and amplifying historical stereotypes. For instance, DALL·E 2 frequently depicts corporate leaders as white men, even more disproportionately than real-world statistics would justify (Leswing, 2023; Eckert, 2023). Similarly, language models have exhibited biases in gender, race, and cultural framing. These patterns are not random – they reflect skewed or exclusionary training data. Despite AI’s growing generative power, IP law still refuses to recognize AI systems as authors or inventors (Thaler v. Vidal, 2022; U.S. Copyright Office, 2023). As a result, human creators remain the sole rights-holders and the only legally accountable agents.
This legal stance is consistent with broader questions of moral agency: AI cannot exercise intent, take responsibility, or respond to the harms its outputs might cause. Yet its growing influence in content creation challenges the coherence of attribution frameworks that presume an entirely human origin. What happens when a biased output is shaped more by the machine’s correlations than the human’s intent? And how can legal systems preserve human-centred attribution without ignoring the machine’s role?
This paper argues that both credit and responsibility must remain with the human partner. Even as AI becomes more sophisticated, its outputs lack moral judgment, contextual understanding, and legal standing. The human must act as the ethical filter, interpretive authority, and final decision-maker. AI’s role must remain subordinate, both functionally and legally.
To make this argument, the paper proceeds in five structured parts. Part I presents a focused literature review on centaur creativity, the sources and manifestations of algorithmic bias, and the current treatment of AI-generated content under copyright and patent law. Part II develops a normative framework, drawing on legal doctrine, ethical theory, and the philosophy of bias, to argue that the human in the loop must remain both the legal author and the moral agent – even when biased AI suggestions shape the output. Part III examines illustrative case studies: the Théâtre D’opéra Spatial artwork denied copyright, the DABUS patent applications rejected worldwide, and examples of biased outputs that complicate attribution. Part IV offers policy recommendations for reforming IP frameworks, including clearer inventorship criteria, mandated disclosures of AI involvement, and bias mitigation protocols. Part V concludes by proposing future research directions and legal reforms that balance innovation with accountability, ensuring that AI-augmented creativity remains ethically grounded and legally navigable.
LITERATURE REVIEW
A. Human–AI Creative Synergies (“Centaur Creators”)
Modern AI systems, particularly large-scale machine learning models, have transformed how creative work can be done. Unlike earlier tools, these AI models do not merely follow human instructions but can generate new content – text, images, designs – in a way that mimics creative processes. This has led to the rise of human–AI collaboration in creativity, often described using the centaur metaphor introduced above. The underlying idea is that humans and AI have complementary strengths. As Hao (2023) observes, from an epistemological perspective, humans possess “common sense” and a strong grasp of causation, which makes people adept at framing problems and assigning values or goals. AI, in contrast, excels at detecting correlations and patterns in vast data, making it powerful at generating candidate solutions or ideas – what Hao calls “solution prediction”. In an ideal centaur team, each party does what it is best at: the human sets the creative direction or poses the question, and the AI offers numerous possibilities or insights that a human alone might not have conceived. When this synergy works, the result can indeed be “1+1 > 2” – achievements beyond the reach of either a lone human or standalone AI.
A classic illustration of such synergy comes from the world of chess: human-computer “centaur” teams were found to outperform even the strongest computer chess programs for a period, because the human could guide the computer to explore creative strategies and override any machine follies (Hao, 2023). In the realm of innovation, the collaboration between human scientists and AI algorithms has begun yielding real breakthroughs. A frequently cited example is DeepMind’s AlphaFold, an AI system that predicts protein structures. Human researchers supply AlphaFold with well-framed problems (proteins of interest), and the AI generates structural predictions that are “novel” and “surprising”, sometimes achieving insights comparable to experimental discoveries (Jumper et al., 2021). Such outcomes validate the centaur concept by showing how AI can act as a creativity multiplier for human experts.
Yet, researchers also caution that AI’s brand of “creativity” is fundamentally different from human creativity in important ways. Felin and Holweg (2024) argue that current AI’s mode of operation is backward-looking – it relies on statistical patterns in existing data – whereas human creativity is often forward-looking, grounded in causal reasoning and imagination beyond the data. AI can recombine and extrapolate from what it has seen (and in doing so, it may inadvertently stick to or exaggerate familiar patterns), but it lacks the human ability to reason about why things are and to envision truly new possibilities not implied by prior examples. In short, AI is imitative and probabilistic, whereas human creativity can be innovative and theory-driven. This insight is crucial, because it helps explain why biases present in the AI’s training data can surface in its creative outputs: the AI is essentially drawing from the statistical status quo. Humans, on the other hand, can intentionally break from the status quo. They can introduce originality or ethical judgment that isn’t present in the data – for example, deliberately avoiding a cliché or bias. Thus, while centaur teams hold great promise, they also require active human oversight to ensure the outputs are not just abundant and novel, but also aligned with human values and free of undesirable biases.
Literature in management and technology studies reinforces the importance of the human role in guiding AI. Agrawal, Gans and Goldfarb (2018) note that generative AI systems, at their core, remain “prediction machines” and warn that without proper context and judgment, their outputs might be misused or misinterpreted. They emphasize that human judgment is the critical element for success with these tools. In other words, an AI can supply raw creative material or suggestions, but it takes human discernment to curate, refine, or reject those suggestions. This underscores a theme that will recur in our analysis: even in centaur creativity, the human should ideally be the ultimate arbiter of the final creation, taking responsibility for steering the creative process – including responsibility for any biases that might emerge.
B. Bias in AI Systems and Outputs
Bias in AI systems is neither incidental nor easily separable from the technology itself. Most generative AI models are trained on vast datasets scraped from the internet, including social media, books, news, and other human-generated content. These datasets reflect longstanding societal prejudices and imbalances. When such data are used without correction, the models internalise and replicate those patterns. As a result, AI systems often reproduce and even exaggerate human biases. In the case of DALL·E 2, for example, prompts like “CEO” return images of white men nearly 97% of the time – significantly more than actual demographic distributions in corporate leadership (Leswing, 2023). Eckert (2023) found that such biases extend to race, gender, age, and socioeconomic status across a wide range of generative outputs.
This is not just a reflection of past inequality – it becomes a mechanism for perpetuating it. Generative systems are designed to produce the statistically most probable completion of a prompt. But if the training data overrepresents certain identities or perspectives, those become the default outputs. In other words, bias is embedded in what the model understands as “normal.” Worse, the high fluency and visual polish of AI outputs make these biased results appear legitimate. A biased sentence sounds convincing when grammatically perfect; a biased image looks authoritative when photo-realistic.
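This amplification can be made concrete with a deliberately simplified sketch (a toy frequency model in Python, not a description of any production system). If 70% of the training examples associated with a prompt depict one group, a decoder that always returns the single most probable completion depicts that group 100% of the time, while proportional sampling merely mirrors the original skew:

```python
import random
from collections import Counter

# Toy training data: 70% of the examples for a prompt are labelled "group A".
training_examples = ["group A"] * 70 + ["group B"] * 30
frequencies = Counter(training_examples)

def most_probable_completion(freqs: Counter) -> str:
    """Greedy decoding: always return the single most frequent option."""
    return freqs.most_common(1)[0][0]

def proportional_completion(freqs: Counter) -> str:
    """Sample in proportion to training frequency (reproduces the skew)."""
    options, weights = zip(*freqs.items())
    return random.choices(options, weights=weights, k=1)[0]

greedy = Counter(most_probable_completion(frequencies) for _ in range(100))
sampled = Counter(proportional_completion(frequencies) for _ in range(100))

print(greedy)   # Counter({'group A': 100}) -- a 70/30 skew becomes 100/0
print(sampled)  # roughly 70/30 -- the imbalance is mirrored, not corrected
```

Neither decoding strategy corrects the imbalance; the greedy strategy, which prizes the "most probable" output, actively exaggerates it.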
Bias in AI is particularly dangerous in centaur teams because the human collaborator may trust or internalise the system’s suggestions. A study by Sharot et al. (2024) found that participants working with an AI system that undervalued women’s performance began to replicate those biases in their own judgments – even when the AI’s input was removed. In creative tasks, this leads to a troubling feedback loop: the AI generates outputs based on biased training data, the human selects from them assuming neutrality, and both reinforce each other’s skewed judgments. Over time, the centaur becomes less of a check on bias and more of a channel for reinforcing it.
Moreover, bias reduces the quality and range of creative output. If an image-generation model renders only Eurocentric beauty standards or produces “default” characters as male and white, then it narrows the spectrum of representation. Similarly, in inventive domains, AI systems may overlook problems that do not dominate global patent data. For example, a drug discovery model trained on Western pharmaceutical literature may ignore diseases disproportionately affecting the Global South. This is not a failure of intelligence – it is a failure of imagination, caused by the narrowness of the dataset.
Agrawal, Gans and Goldfarb (2018) remind us that AI systems are, at base, “prediction machines.” They produce what is most statistically likely, not what is just, fair, or original. Left unchecked, this tendency turns centaur teams into echo chambers of the status quo. This has ethical and legal implications. If a centaur team creates an artwork, ad campaign, or invention that reflects harmful bias, the AI cannot be held accountable – but the human can, and should.
The upshot is clear: AI systems must be understood not as neutral tools, but as artefacts of human choices – choices about data, design, and deployment. In creative collaboration, humans must actively anticipate and mitigate bias. It is not enough to guide the prompt or curate the output. The human must critically reflect on how and why AI produced what it did, and what values that output serves or harms. Bias mitigation, then, is not just a technical safeguard – it is part of the creative responsibility that comes with authorship.
C. Authorship and Inventorship in the AI Age
Copyright law in most jurisdictions – including the United States, the United Kingdom, and the European Union – firmly insists on human authorship as a prerequisite for protection. This requirement stems from foundational principles in copyright theory: that authorship entails not only original expression but also legal personhood and moral responsibility. The U.S. Copyright Office reaffirmed this position in its 2023 Policy Statement, clarifying that copyright protection applies only to works created by humans, not machines (U.S. Copyright Office, 2023). This insistence reflects more than a doctrinal commitment to tradition – it protects the enforceability of rights, the logic of authorship, and the attribution of responsibility.
This principle has come under pressure with the rise of generative AI systems. These models, trained on massive datasets and capable of producing fluent, polished content, now generate entire books, artworks, videos, and music tracks. In creative fields from journalism to design, such outputs may comprise the bulk of a final product. Yet, if no human “authored” that material in the conventional sense – by writing or designing manually – the law denies protection. This tension is visible in a series of high-profile registration rejections, where the Copyright Office has drawn hard lines between works “created by humans” and those created “by machines.”
A key example is the Midjourney-generated image Théâtre D’opéra Spatial, submitted for copyright registration by Jason Allen. Allen described his process as one of extensive prompt engineering, curation, and aesthetic refinement. However, the Copyright Office denied protection for the image itself, reasoning that the expressive elements originated from the Midjourney system and were therefore not authored by a human (U.S. Copyright Office, 2023). It granted copyright only for Allen’s textual description and minor post-processing. The Office emphasized that mere prompt creation – however complex – does not amount to authorship unless the human meaningfully shapes the final expression.
This decision reflects a broader concern: if machines are producing the expressive content, and humans merely initiate or select it, can we meaningfully speak of authorship at all? The Office’s approach aligns with earlier decisions, such as Naruto v. Slater (2018), in which a macaque’s selfie was deemed uncopyrightable because the monkey lacked legal personhood. Courts held that even if the camera and setting were arranged by a human, the absence of human volition at the moment of creation voided the claim. The same logic now applies to AI: unless a human exercises creative control over the expression, the work is not eligible for copyright.
The Office’s position has been reinforced in other AI-generated submissions. In the Zarya of the Dawn case, a comic book illustrated with Midjourney was submitted for registration. The Copyright Office granted protection only for the textual narrative and for the selection and arrangement of visual elements – explicitly excluding the AI-generated imagery itself. The ruling clarified that the inclusion of AI-generated images does not invalidate the entire work, but that protection is limited to the aspects meaningfully shaped by a human.
This piecemeal approach – protecting the human contributions while denying the AI’s – preserves doctrinal consistency but introduces ambiguity. In centaur collaborations, the division between human and AI input is often blurry. A human may guide the AI through iterative prompting, choose among dozens of outputs, and apply light editing. Does this amount to authorship? Under current doctrine, likely not. But as tools like Adobe Firefly, Runway, and DALL·E integrate into professional creative workflows, this standard becomes increasingly difficult to apply.
The tension intensifies when outputs are published or monetised. If a media company uses generative AI to create promotional visuals, who holds the rights? If no copyright can be claimed, do the works enter the public domain? And if those works include biased or offensive content, who is legally accountable? These questions expose a doctrinal and regulatory gap: the law excludes AI as a rights-holder but offers little clarity on the thresholds for human authorship in collaborative settings.
Moreover, the exclusion of AI-generated content from protection has real commercial and ethical implications. Many creators rely on copyright not only for attribution but for revenue, enforcement, and recognition. When their AI-assisted outputs are denied protection, they may lose control over their work – or, conversely, face liability without the corresponding benefit of ownership. This is particularly problematic when the output contains harmful bias or infringement. Without clear attribution rules, courts may struggle to identify who should be held responsible.
Some legal scholars propose addressing this tension by adjusting the authorship standard. Instead of requiring direct human creation, the law could consider whether the human exercised meaningful creative judgment over the final output. This includes selecting, modifying, or arranging the AI’s suggestions. It mirrors the doctrine applied in collage, editing, or curatorial practices – where the artist may not have created every element but shaped the overall expression. Under this framework, a centaur creator would qualify as an author if they demonstrate curatorial intent and aesthetic discretion.
Others recommend a disclosure-based approach. Rather than excluding AI-generated works outright, copyright registration could require creators to disclose the extent of AI involvement. Protection could be granted on a proportional basis, reflecting the balance of human and machine input. This model parallels the treatment of collective works or works for hire, where multiple contributors are acknowledged, and rights are divided accordingly. Such reforms would preserve the human-centric foundation of copyright while adapting to the realities of AI-assisted creation.
However, even with reform, one boundary remains essential: authorship must attach only to legal persons capable of intent, interpretation, and liability. An AI system, however advanced, lacks the ability to make meaning, assume responsibility, or respond to infringement. It cannot be credited because it cannot be accountable. For this reason, the law must continue to treat the human – not the machine – as the ultimate source of authorship. The creative act is not merely the generation of material but the selection, shaping, and ethical stewardship of that material. Without human oversight, there can be no author – and therefore no copyright.
In the context of bias, this principle becomes even more important. If an AI system produces a racist caricature or gendered stereotype, and the human collaborator publishes it without correction, they cannot disclaim liability by blaming the machine. The legal fiction that holds humans responsible for outputs they disseminate is essential not only to doctrinal coherence but to ethical accountability. If we relax this standard, we risk creating a loophole through which bias becomes untraceable – generated by a system, approved by no one, but still capable of causing harm. In centaur creativity, the human must remain the ethical and legal filter. Attribution and accountability are two sides of the same coin.
ARGUMENTATION
A. The Attribution of Creativity in Centaur Teams
When a human and an AI collaborate to produce a creative work, to what extent should each be attributed credit for the result? The position emerging from legal precedent and theoretical reasoning is that the human member of the centaur team should be treated as the primary – and legally, the sole – creator. This stance is grounded in both practicality and principle. Practically, only humans can hold intellectual property rights and responsibilities at present. An AI cannot own a copyright or patent, nor can it be held liable in court for infringement or defamation. Principally, as discussed earlier, IP law is rooted in human-centric justifications: encouraging creativity, rewarding labour, and respecting the personal connection between authors and their work – a relationship reflected, for instance, in the moral rights regime under European copyright law.
From an attribution standpoint, then, the default should be that the human receives credit for the creative output of a centaur team. This remains true even where the AI contributes the bulk of the content, provided the human exercises at least minimal creative control sufficient to meet the legal threshold of authorship or inventorship. However, this raises a potential moral hazard: could a human claim full credit for a work substantially generated by AI, particularly if the work contains flaws or bias? A cynical scenario is one where a human uses an AI to generate an artwork or solve a technical problem, provides little further input, and still asserts full authorship or inventorship – essentially using the AI as an uncredited ghostwriter or research assistant. If the work is praised, the human accepts the credit; if it is later criticised, perhaps for embedding racial stereotypes or plagiarising training data, the human may deflect by claiming, “the AI did it.”
Permitting such a split – credit without responsibility – would be untenable. As a matter of fairness and policy, credit and responsibility must go hand in hand. The human who benefits from AI-generated creativity must also bear accountability for its defects, including any embedded bias. This is analogous to how a film director is credited for the final movie and is also held responsible for its content, even though many contributors – screenwriters, cinematographers, editors – played a role. The director cannot publicly disavow responsibility for a controversial scene by blaming the screenwriter; courts and audiences alike will view the final product as the director’s. Similarly, a centaur creator cannot disclaim a biased or harmful output by saying “that part came from the AI.” If an AI-generated passage in a book contains a derogatory stereotype and the human chooses to include it in the final manuscript, they bear full responsibility for that decision.
Philosophically, some have questioned whether AI deserves recognition for its creative contribution. The academic debate over whether GPT-4 should be credited as a co-author of a research paper it helped draft exemplifies this (Interesting Engineering, 2023; Science.org, 2023). But major scientific journals have answered firmly: AI cannot be listed as an author because it “does not deserve the credit” and cannot assume responsibility. This aligns with a deeper view of authorship and inventorship: these roles confer not just recognition, but obligations – such as disclosing conflicts of interest, ensuring the accuracy of representations, or abiding by rules of novelty and best mode in patent law. Since an AI cannot fulfil these duties or possess intent, assigning it formal credit would hollow out the meaning of attribution. In the context of biased creativity, the human must be the responsible agent.
That said, acknowledging the AI’s role is still important for transparency and ethical clarity. A human creator should disclose significant AI assistance – both to maintain honesty and to help contextualise any emergent bias. For example, an author might note in the preface to a novel: “Portions of this book were generated with the assistance of an AI tool, which may reflect biases in the training data.” This does not undermine the author’s legal authorship, but rather, provides readers with a fuller understanding of the creation process. Many academic journals now require such disclosure precisely to maintain trust. One can imagine similar norms developing across industries – perhaps akin to food labels that inform consumers about ingredients or health risks. A future “AI-assisted” label might carry not just transparency, but a signal of ethical diligence on the part of the creator.
Importantly, the burden of bias remains with the human. If an artwork co-created with AI is criticised for consistently depicting light-skinned individuals in powerful roles, the artist cannot point to the AI’s training set and walk away. The artist chose the tool, selected the outputs, and incorporated them into the final piece. This social accountability mechanism creates incentives for centaur creators to engage critically with AI outputs. If one expects to be credited – and blamed – for the results, one has a strong reason to ensure the output aligns with ethical standards and personal values. Deflecting accountability onto the machine would erode those incentives and widen the existing “responsibility gap.”
In conclusion, the human member of the centaur team should be recognised as the true creator for purposes of both legal credit and moral accountability. The AI, no matter how sophisticated, is still a tool – akin to a paintbrush, camera, or algorithm. Just as we do not credit a camera with authorship when its lens introduces distortion, we do not credit the AI when it introduces bias. We credit, and hold responsible, the human who selected the tool and finalised the work.
B. Bias, Creativity, and the Quality of Output
Bias poses not only ethical concerns but also affects the quality and originality of creative outputs. A biased work – whether an artwork, invention, or design – may be judged as inferior due to its predictability, one-sidedness, or failure to resonate with diverse audiences. A novel that relies on flat stereotypes may be seen as artistically weak; an AI-generated solution that works only for a narrow demographic may lack practical utility. Therefore, mitigating bias is essential not just for fairness, but for producing high-quality, valuable creative content.
From the standpoint of creativity theory, originality often arises from breaking out of conventional patterns. Biases in AI models are, in effect, codified conventions – statistical summaries of historical data. When AI systems offer the most statistically probable continuation of a prompt, they risk delivering unoriginal or clichéd outcomes. This is the paradox: the same mechanisms that cause biased outputs also make those outputs creatively dull. For centaur creators, genuine creativity often means identifying when the AI offers a stereotypical or biased suggestion – and deliberately choosing something else. Some artists report “fighting” AI defaults by tweaking prompts or choosing anomalous outputs, much like a writer rejecting a cliché. The creative act, in this sense, lies not in the AI’s generation but in the human’s capacity to filter and redirect.
Occasionally, bias in AI may spark creativity by functioning as a visible constraint. Just as perspective in painting shapes a scene from a particular angle, the AI’s inherent bias can offer a constrained viewpoint. A human artist might choose to subvert that viewpoint – to remix Eurocentric medieval imagery into a postcolonial critique, for example. Here, the bias becomes a part of the conversation, not a flaw to be corrected. The philosopher’s view – that confronting bias can lead to truth – applies: the centaur creator can craft something meaningful by understanding and responding to the machine’s limitations (UCL, 2024).
Still, relying on bias as creative fuel is context-specific and risky. In functional domains like invention or legal drafting, bias is seldom an asset. Ideally, AI systems should offer diverse, inclusive options, expanding the design space for human creators. In such an ideal centaur workflow, the AI would surface wide-ranging possibilities, and the human would curate based on judgment, ethics, and originality. The present reality, however, is far from this. Generative models often default to biased, stereotyped outputs, particularly in fields like portraiture, narrative fiction, or product design (Leswing, 2023; Eckert, 2023).
This places greater responsibility on the human collaborator to act as a creative editor. Bias mitigation becomes a defining part of the human role. Hao (2023) refers to this as “value assignment” – the human’s task of selecting what is useful, ethical, and original from among the AI’s outputs. If the AI produces 100 results, the human adds value by filtering out the 99 that are biased, banal, or nonsensical. This form of curation is not a passive act; it is itself a creative process.
The implications of bias extend into intellectual property law. In copyright, the originality threshold is low: any independently created work with minimal creativity qualifies. Bias does not affect this unless the work is so automated that it lacks human authorship altogether. In patent law, however, quality is linked to the standard of non-obviousness – an invention must not be obvious to a person having ordinary skill in the art (PHOSITA). As AI becomes ubiquitous, legal scholars are asking whether obviousness should be redefined with AI in mind: if an AI can generate a solution easily, is that solution still “non-obvious”? If not, the standard for patentability may shift (Abbott, 2020). AI systems are likely to produce solutions that lie along statistical paths – conventional, expected, or widely documented. The truly inventive step may lie in the human’s capacity to leap outside that distribution.
Abbott’s provocation – that “everything is obvious” – captures this concern. If AI systems are good at surfacing what is common or derivative, then the non-obvious may increasingly mean the non-statistical – the unpatterned, the counterintuitive, or the socially disruptive. And these are precisely the kinds of outputs most likely to be missed by biased systems. Thus, overcoming AI’s bias is not just a matter of ethics – it may become a condition for qualifying as an inventor.
Bias, then, emerges as both an obstacle and a clarifier. It narrows the scope of outputs and dulls their edge, but it also highlights why human oversight remains essential. The human can recognise the shallow imitation, spot the missing perspectives, and redirect the process toward novelty, balance, and insight. In doing so, the human justifies their legal and moral authorship. They add the value the machine cannot. Ironically, the presence of bias in AI reinforces the necessity of human involvement. Only a human can see it, judge it, and steer the creative act away from it.
C. Philosophical Perspectives: Bias, Truth, and Creativity
The challenge of bias in AI forces a deeper inquiry into the nature of truth and creativity. Bias, in one sense, distorts truth by presenting a partial or skewed version of reality. Creativity, meanwhile, often involves expressing truths – whether emotional, social, or imaginative – through a particular perspective. But when part of the “creator” is a machine trained on human data, how do these ideas intersect?
The article “AI and Biases – An Inquiry into ‘Truth’” argues that addressing bias is not merely a technical challenge – it is a philosophical one. Bias arises because all models, whether computational or human, simplify complex realities. To model is to reduce: to emphasise certain features and ignore others (Pamidighantam, 2023). In this light, bias is not the opposite of truth but a lens that frames it. Even in human creativity, bias is present in style: a painter chooses a palette, a novelist favours certain themes. These are conscious aesthetic biases. The problem with AI is that its biases are unconscious, inherited from data, and applied without awareness. The AI doesn’t know it’s privileging one view over another; a human artist typically does.
This philosophical stance leads to a nuanced view: all creativity reflects a point of view. What we call “bias” in AI is a perspective we did not choose. Integrating AI into creative workflows, then, requires intentional human direction. Instead of letting the AI generate default outputs, the human must shape its point of view. A filmmaker using AI for storyboarding might fine-tune it on certain aesthetics or apply clear thematic constraints. In doing so, the AI becomes an extension of the human’s artistic intent. The risk of bias is reduced because the perspective is no longer accidental – it’s aligned with the creator’s vision.
From an ethical perspective, one might draw on Kantian philosophy. In a Kantian frame, humans must be treated as ends in themselves, not as means to some external purpose. While AI is not sentient and thus not an “end,” the human certainly is. If we allow AI to generate outputs unchecked and then blame the AI for flaws, we invert the proper order: we treat the machine as responsible and the human as accessory. This is backward. The AI is a tool – a means – and the human is the end. Therefore, it is the human’s moral duty to use the tool in service of ethical and creative goals, which includes overseeing and correcting biased outputs.
Philosophical questions of justice also arise. If an AI system generates a biased hiring recommendation, a discriminatory image, or a racially skewed dataset, who is accountable? Since AI lacks agency, justice must be done through its human collaborators. Responsibility may be informal – public criticism of a biased artwork – or formal, as in lawsuits involving biased AI tools in employment or credit scoring. In both cases, attributing authorship and inventorship to the human ensures that there is someone who can be held accountable. This link between credit and blame is essential to any just system of creativity and innovation.
A natural question follows: would these arguments shift if AI became vastly more advanced – if it achieved general intelligence and demonstrated self-directed creativity? Possibly. At that point, debates about AI personhood or legal agency might gain traction. But in 2025, no such systems exist. Current generative models, no matter how impressive, are not sentient. They process data and generate outputs based on probability, not intention. Ethical and legal frameworks must reflect this reality: AI remains a tool, not a moral actor.
In that context, bias in AI is not a stance but a reflection. AI holds up a mirror to its training data, which in turn reflects human society. The human creator must take responsibility for filtering that reflection. The AI cannot distinguish harmful stereotypes from valid generalisations – it has no concept of fairness or harm. The human does. Therefore, the centaur creator must act as the moral interpreter of the machine’s raw material.
Pamidighantam (2023) suggests we treat bias not simply as an error to be deleted but as a phenomenon to be understood. When an AI persistently produces biased results, it reveals something about the training data and, by extension, about the society that produced it. For example, if a language model disproportionately associates leadership with masculinity, the model is not being malicious – it demonstrates a pattern embedded in its inputs. A thoughtful creator might use this to create work that critiques or highlights the imbalance. Here, bias becomes a source of insight. The AI’s flaw becomes a springboard for the human’s commentary.
In less abstract terms, the philosophical lesson is this: AI bias is human bias, amplified. The creativity is shared, but the intention and judgment are not. If AI creates something biased, it is still humanity that must take the credit – or the blame. The machine reveals patterns, but it is the human who decides whether to reproduce, correct, or subvert them. In this way, the centaur model underscores human accountability. Bias does not dilute authorship – it clarifies it.
D. Legal and Policy Implications: Balancing Innovation with Accountability
Given that we conclude humans should receive both credit and blame for AI-assisted creations, how should law and policy evolve to reflect this principle while still supporting innovation? This section explores key implications and reforms that bridge the argumentation to actionable recommendations.
A primary implication is that intellectual property law may require recalibration to avoid the “authorship/inventorship gaps” identified earlier. On one hand, valuable AI-assisted outputs should not be excluded from protection, or it may discourage investment in creative and technical innovation. On the other, the legal framework must ensure that humans retain meaningful control. In copyright, this might mean setting clearer guidelines for the quantum and nature of human input required to qualify as an author. The current threshold is ambiguous. In the Théâtre D’opéra Spatial case, merely crafting prompts was deemed insufficient. However, courts or legislatures could clarify that prompt design, curation, and iterative refinement – when reflecting personal judgment – can constitute “sufficient creative control.” Such a standard would not only secure protection but would also incentivize critical human engagement, encouraging creators to evaluate AI outputs rather than publish them wholesale.
In patent law, Yuan Hao’s proposal offers a promising reform: redefine “conception” to include what he terms “constructive conception” (Hao, 2023). Under this test, a human who (1) defines the problem, (2) recognises the significance of the AI-generated solution, and (3) discloses the AI’s contribution could qualify as an inventor. This approach acknowledges that while the AI may perform the technical generation, the human still plays a critical interpretive and supervisory role. Adopting this standard would keep legal inventorship human-centred while recognising the increasingly collaborative nature of centaur invention. It also promotes transparency: disclosing the AI’s role helps future reviewers assess the integrity of the invention – especially important when bias in training data could have affected the result (e.g., a pharmaceutical that works better for certain populations due to skewed data).
Another legal dimension concerns liability. If AI-generated content causes harm – such as copyright infringement, defamation, or biased hiring recommendations – the responsibility should lie with the human user or deploying entity. This aligns with our normative stance that credit and accountability must be paired. Many AI platform terms of service already reflect this by assigning legal risk to users. Going forward, laws may codify the principle that using AI does not shield one from liability. Statutes might affirm that liability flows through AI tools to the human controller, ensuring consistent responsibility across jurisdictions.
Antitrust and competition policy may also come into play. If AI-generated content is denied IP protection due to insufficient human input, it may flood the public domain. While this could expand the commons, it might disadvantage individual creators who rely on IP monetisation. On the other side, if corporate users secure IP for highly automated outputs, they could saturate the market with low-cost content or incremental inventions, crowding out human creators. A middle ground might involve graduated IP protection – perhaps shorter patent terms or narrower scopes for inventions heavily dependent on AI. Although no jurisdiction currently applies such a rule, such frameworks could help balance incentive structures between human effort and automation scale.
At the policy level, promoting ethical AI design becomes paramount. If human creators are to bear legal and moral accountability, they must have access to AI tools that are transparent, controllable, and auditable. Governments or industry groups could establish AI certification standards – such as “bias-resilient” labels – particularly for AI systems used in sensitive creative sectors. While such initiatives may sit outside IP law, they directly influence the quality of creative outputs and the legal exposure of human authors. A publisher might, for example, choose only to use writing tools certified for fairness, reducing the risk that a biased AI-generated narrative leads to reputational or legal harm.
Finally, moral rights doctrines may need updating. Typically, authors have rights to attribution and integrity – being named as creators and objecting to derogatory modifications. With AI assistance, these rights become fuzzier. Some suggest non-binding norms of disclosure: not naming the AI as a co-author, but noting its use (“Created using Adobe Firefly”) as a matter of transparency. This can build trust, educate audiences, and help trace the origin of errors or bias. Disclosure already occurs in some contexts: in the Colorado State Fair art contest, Jason Allen openly stated he used Midjourney, setting a constructive example. Similar practices could become standard across publishing, academic, and creative platforms.
In conclusion, preserving a human-centric model of credit is essential – even as AI takes on more generative tasks. Bias makes this especially urgent: since only humans can assess and mitigate bias, only they can be held responsible for its inclusion or exclusion. Rather than strip humans of credit when things go wrong, the law should encourage deeper engagement. Legal frameworks can support this by adjusting standards of authorship and inventorship, imposing reasonable liability, and incentivising transparency. By aligning rights with responsibilities, and innovation with accountability, we can support the development of AI-assisted creativity while upholding the ethical and legal values that underpin human authorship.
CASE STUDIES
To illustrate these concepts, we examine a few case studies where AI’s role in creation and the issue of bias have tested the allocation of credit and responsibility.
A. Case Study 1: AI-Generated Art and the Question of Authorship
Figure 1: AI-generated artwork “Théâtre D’opéra Spatial” (2022) – Created by Jason M. Allen using the Midjourney generative AI model.

This image won a fine art competition, sparking controversy because the primary visual content was generated by AI, not painted by Allen’s own hand. The U.S. Copyright Office later refused to register the image for copyright, concluding that Allen’s creative input – text prompts (over 600 iterations) and minor digital edits – was insufficient and that the dominant expression was machine-produced, thus falling outside the bounds of protectable human authorship (U.S. Copyright Office, 2023; Wikipedia, 2024).
This case exemplifies the “authorship gap”: an original-looking piece of art ends up without a recognised author under law. Allen’s case raises not only legal but ethical and social questions. Suppose, hypothetically, the image reflected subtle bias – such as portraying only women in passive roles. Who would be accountable for that bias: Allen, as the human user, or the Midjourney system, which drew from biased training data? Legally, Allen was denied authorship over the AI-generated portions, but the algorithm also holds no rights or responsibilities. The result is a rights vacuum: the image is in the public domain, and no one is liable in a strict legal sense.
Yet in public perception, Allen was clearly treated as the creator. When the work drew criticism from traditional artists, it was Allen – not the AI – who was faulted for “claiming” to be the artist. Had the image contained harmful stereotypes or plagiarism, it is Allen who would have faced public backlash. This illustrates a practical reality: legal authorship may be denied, but social attribution persists. The person who submits the work, displays it, or sells it is seen as its author, and audiences assign responsibility accordingly.
Interestingly, Allen was transparent about his process. He disclosed the use of Midjourney to the contest organisers, and the judges affirmed they would have awarded him the prize even knowing AI was involved (Wikipedia, 2024). In that sense, Allen received informal artistic credit – even as legal credit was denied. This tension between formal law and informal recognition is now driving policy discussions. Proposals include new designations like “AI-assisted artwork” or copyright protection for the creative curation or arrangement of AI outputs. For now, though, Allen’s case demonstrates that when humans use AI, they may reap the reputational benefits and still shoulder the ethical burdens, even without legal ownership.
The implications for bias are especially revealing. If AI tools introduce biased content – be it racial, gendered, or plagiaristic – that content becomes part of the work presented to the world. In Allen’s case, there was no evident harm, but other AI-generated works have exhibited problematic patterns. For instance, some image generators have produced overly sexualised or homogeneous depictions. In such cases, the responsibility still lies with the human user. One illustrator faced criticism after using AI-generated content that inadvertently plagiarised prior art. The AI couldn’t be blamed – so the illustrator had to apologise and withdraw the work. This reinforces the broader claim: even if the law does not recognise a human author, society does. And if one asserts creative credit, one must also be prepared to bear the blame, especially when bias is involved.
B. Case Study 2: AI-Generated Inventions and Inventorship (DABUS)
The DABUS cases represent a landmark in the evolving debate over AI and inventorship. DABUS, developed by Stephen Thaler, is an AI system credited with autonomously generating two inventions: a food container design and a flashing light device. Thaler submitted patent applications naming DABUS as the sole inventor, arguing that the AI independently conceived the inventions and that, as the AI’s owner, he should receive the patents as assignee. Patent offices in the United States and United Kingdom, the European Patent Office, and authorities in other jurisdictions rejected the applications, holding that inventors must be natural persons (Polsinelli, 2023; Cardozo Law Review, 2023). In the U.S., the Federal Circuit stated unequivocally: “only natural persons can be ‘inventors’” under current patent law. As a result, no patent could issue.
This case is instructive in showing how the absence of a recognised human inventor leaves a legally unfillable void. Because DABUS was listed as the inventor, no patent could be granted. If bias had been implicated – say, the invention exhibited design preferences due to training data – there would have been no legally accountable inventor to answer for it. While DABUS’s outputs may not reflect social biases, algorithmic bias could manifest in structural or aesthetic patterns. Still, in the absence of human oversight, the law denies protection altogether, leaving both credit and accountability in limbo.
A hypothetical variation brings the issue into sharper focus: if an AI-generated pharmaceutical functions poorly in women due to gender imbalance in training data, but no human inventor is listed, who is accountable? If a patent had been granted to a human recognising and disclosing the invention, regulatory authorities could scrutinise the process, demand data, or impose penalties. Without a patent – and thus no official disclosure – the invention may become a trade secret or simply remain unclaimed, complicating efforts to trace origins or impose responsibility. This undermines both transparency and public health oversight.
These challenges have prompted reform proposals. Yuan Hao’s concept of “constructive conception” offers a viable alternative. He argues that a human should be considered the inventor if they (1) pose the problem, (2) recognise the significance of the AI’s output, and (3) disclose the AI’s role (Hao, 2023). This model preserves the legal requirement of human inventorship while acknowledging the growing role of AI in scientific discovery. Under this rule, Thaler or a human collaborator could have been credited, allowing the patents to proceed, and ensuring someone is responsible for compliance with disclosure and utility standards. It also incentivises transparency: documenting the AI’s contribution would become standard, helping regulators and users assess whether algorithmic bias may have skewed the output.
From a policy perspective, DABUS illustrates how far the law will go to uphold the principle of human agency – denying rights altogether rather than recognising machine-generated creativity. This comes at a cost: reduced incentives for AI-driven innovation and missed opportunities for knowledge dissemination. If the law instead allowed human inventors to be credited for their supervisory and interpretive roles, we could preserve the accountability structure while continuing to reward and regulate inventive output. Moreover, having a human inventor on record is essential if issues arise – someone must be answerable. An AI, no matter how sophisticated, cannot testify, revise an invention, or disclose biases.
Ultimately, DABUS underscores both the rigidity and the rationale of current IP systems. The insistence on human inventorship preserves legal coherence and ethical accountability, but it risks excluding AI-generated breakthroughs from formal protection. Reforming the rules to include constructive human involvement would allow such inventions to be patented without compromising on the need for human responsibility. Doing so aligns with the core insight of this paper: where AI plays a role in creation, a human must still be credited – and blamed – because only a human can meet the demands of law and justice.
C. Case Study 3: Bias in AI-Generated Imagery (Hugging Face Study on Stereotypes)
In 2023, researchers at Hugging Face and Leipzig University conducted a systematic study to investigate bias in popular text-to-image models such as Stable Diffusion and DALL·E 2. Their findings revealed that prompts for high-status professions (like “CEO” or “director”) overwhelmingly generated images of white men – up to 97% of outputs in DALL·E 2’s case (Leswing, 2023). Prompts for roles like “nurse” or “teacher” often yielded female depictions. These outputs did not merely mirror real-world demographics – they amplified them, revealing entrenched biases embedded in training data (Eckert, 2023).
Unlike the previous case studies, this example isn’t about one creative work but about patterns across outputs. It illustrates how a human creator relying on generative AI might unwittingly reproduce stereotype-laden imagery. For instance, a marketing team using an AI model to illustrate different job roles for a company website might enter generic prompts (“an engineer at work,” “a CEO giving a speech”) and accept the top results. The result: a gallery of homogeneous images lacking diversity. Even if the creators did not intend to convey bias, the audience would likely attribute the imagery – and its implications – to the human team or organisation. Saying “we let the AI decide” is unlikely to absolve responsibility.
This study thus acts as a cautionary tale: centaur creators must not disengage from editorial responsibility. Diligent users can guide the model with diversified prompts or actively curate results to ensure representation. As the researchers note, diagnosing model bias is only the first step – conscious design and use are required to mitigate it (Eckert, 2023). Some AI developers have responded: OpenAI’s DALL·E 3 includes a default “diversity filter,” inserting racial and gender variation when the user does not specify characteristics (Brookings Institution, 2024). These fixes, however, are still evolving and not foolproof.
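The methodology behind such audits can be approximated in outline. The sketch below relies on two stand-in functions (generate_image and classify_perceived_gender, which in a real audit would call a text-to-image API and a demographic classifier or human annotators); the hard-coded weighting exists purely to make the loop runnable and is an assumption, not a measurement:

```python
import random
from collections import Counter

def generate_image(prompt: str) -> str:
    # Stand-in for a call to a text-to-image model.
    return f"<image for: {prompt}>"

def classify_perceived_gender(image: str) -> str:
    # Stand-in for a demographic classifier or human annotation; the fixed
    # 97/3 weighting simulates the kind of skew reported for "CEO" prompts.
    return random.choices(["man", "woman"], weights=[97, 3], k=1)[0]

def audit_prompt(prompt: str, n_samples: int = 100) -> Counter:
    """Generate n_samples outputs for one prompt and tally perceived gender."""
    tallies = Counter()
    for _ in range(n_samples):
        tallies[classify_perceived_gender(generate_image(prompt))] += 1
    return tallies

if __name__ == "__main__":
    print(dict(audit_prompt("a CEO giving a speech")))
```

Even a rough audit of this kind would give a centaur team evidence of skew before publication, which is precisely the point at which human curation can still intervene.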
From a credit and accountability perspective, the issue is clear. If a children’s book or public exhibit features only light-skinned characters generated by AI, it is the human author or publisher who is criticised – not the AI. The audience has no access to the prompt history or technical details; they respond to what they see. The AI is invisible. Therefore, in both public perception and legal exposure, the human creator owns the bias. This parallels potential liability scenarios in domains like hiring. If a firm uses an AI tool that produces racially skewed candidate pools, anti-discrimination law may hold the firm accountable – even if the output was generated algorithmically.
The Hugging Face study also highlights the risk of a feedback loop. If biased outputs are widely published and consumed, they can become part of the cultural record – and the training data for future AI models. Thus, uncorrected biases today can be amplified tomorrow. This recursive pattern makes human intervention not just desirable but urgent. Breaking the cycle requires active human curation at the point of creation, not just post hoc reflection.
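The feedback loop itself can be illustrated with a stylised simulation (a toy built on assumptions, not a claim about any specific training pipeline). Raising category frequencies to a power greater than one stands in for decoding strategies that over-favour the statistically dominant category; when each generation's uncorrected outputs are folded back into the corpus, the majority share climbs toward saturation:

```python
import random
from collections import Counter

def sharpened_sample(data: list[str], n_outputs: int, gamma: float = 2.0) -> list[str]:
    """Sample from category frequencies raised to the power gamma; gamma > 1
    mimics decoding that over-favours the statistically dominant category."""
    freqs = Counter(data)
    options = list(freqs)
    weights = [freqs[o] ** gamma for o in options]
    return random.choices(options, weights=weights, k=n_outputs)

# Generation 0: a 70/30 imbalance in the published record.
corpus = ["majority"] * 70 + ["minority"] * 30

for generation in range(1, 6):
    outputs = sharpened_sample(corpus, n_outputs=len(corpus))
    corpus += outputs  # uncorrected outputs become tomorrow's training data
    share = Counter(corpus)["majority"] / len(corpus)
    print(f"generation {generation}: majority share = {share:.2f}")
```

The drift is monotonic under these assumptions: each cycle of publish-then-retrain hardens the skew, which is why intervention at the point of creation matters more than retrospective correction.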
In sum, the Hugging Face research offers broad empirical support for the thesis advanced in this paper: bias in AI is pervasive and predictable, and only human users can take responsibility for detecting and correcting it. The “credit” for biased AI-generated imagery, both moral and social, goes to the humans who decided to use, publish, and endorse it. AI may produce, but humans must curate.
D. Case Study 4: Human Bias Amplified by AI (UCL Experiment)
This final case study, though based on experimental research rather than a real-world creative project, offers compelling evidence of how human-AI collaboration can amplify existing biases. In a 2024 study conducted at University College London, Sharot et al. investigated whether biased AI feedback could shape human judgments. Participants were asked to perform perception and categorisation tasks – such as judging whether faces looked happy or sad – while receiving suggestions from an AI assistant trained on the biased responses of earlier participants (Sharot et al., 2024). The findings were striking: when the AI showed a skew (e.g., leaning toward “sad” judgments), the human participants were more likely to follow that bias – and continue doing so even after the AI interaction ended.
This study provides a realistic analogue for creative collaboration. Suppose a writer co-authors a script with an AI that subtly favours male scientist characters. The writer, exposed to these patterns, may unconsciously lean further into gender bias in future drafts. The human becomes more biased not despite the AI but because of it. This creates a feedback loop: a human adopts and amplifies the AI’s inherited skew, producing content that reinforces stereotypes.
In such scenarios, who deserves the “credit” or the blame? The UCL study shows that while the AI initiated the bias, the human internalised and enacted it. In public view, if the final creative product is riddled with bias, accountability typically falls on the human. Audiences will criticise the author, not the invisible AI collaborator. Thus, the ethical obligation lies with the human to be alert to the AI’s potential distortions and to override or correct them. The experiment highlights a crucial point: centaur creators must not treat the AI as a neutral oracle. They must lead the collaboration, not follow it blindly.
It also reinforces the need for training and awareness. In the experiment, participants were not warned that the AI might be biased. In real-world applications, however, users must assume fallibility. AI systems are statistical tools, not moral agents. Centaur creators should approach AI output with scepticism and critical reflection. Otherwise, the synergy turns toxic: a passive human working with a biased AI can produce worse outcomes than a thoughtful human working alone. This is a perverse and avoidable result. Only when the human half of the centaur leads with judgment and intent do the advantages of collaboration manifest.
Practically, the study suggests clear recommendations: institutions and companies deploying AI in creative or decision-making processes should implement bias-awareness training for users and routine bias audits for the tools. Where bias is detected, human collaborators should be trained to identify and correct it – much like pilots learning to handle the quirks of an aircraft’s autopilot. The point is not to eliminate AI but to use it consciously. The responsibility for final output remains with the human, both in law and perception.
This study reinforces a central claim of this paper: maintaining fairness and originality in AI-assisted work is a human task. The AI may introduce bias, but it is the human collaborator whose changed judgment ultimately shapes the result. Accordingly, the credit for avoiding bias – and the blame for failing to – is rightfully theirs.
Together, the four case studies explored in this section illustrate one consistent conclusion: humans remain the centre of attribution in AI-assisted creativity. Whether it is an artwork generated with Midjourney, an invention sparked by DABUS, a biased portfolio of AI images, or cognitive drift induced by AI collaboration, the legal and moral weight falls on the human side. Law may not yet fully recognise human authorship or inventorship in all these settings, but public perception already assigns both credit and accountability to the human. None of these cases support shifting attribution to the AI. Instead, they affirm the need to refine legal definitions to handle AI contributions, equip creators with tools to combat bias, and encourage transparency around AI involvement.
RECOMMENDATIONS
Having analyzed the interplay of bias, creativity, and credit in human–AI collaborations, this paper now turns to recommendations. These proposals aim to adapt legal frameworks and creative practices to ensure that human creators are appropriately credited and held accountable, while reducing the risks posed by biased outputs. The recommendations address legal and policy reforms as well as practical guidelines for creators and institutions.
A. Update IP Law to Recognize Human Contribution in AI-Assisted Works
IP law should evolve to ensure humans who meaningfully guide AI-generated creativity are recognised as authors or inventors. Current standards often leave such works unprotected, discouraging innovation and complicating accountability.
In copyright, lawmakers should clarify that activities like prompt engineering, selecting outputs, and editing AI content can qualify as human authorship – if they reflect creative judgment. The U.S. Copyright Office has started down this path (U.S. Copyright Office, 2023). Codifying a standard such as “human creative control equals authorship” would encourage active human engagement and ensure someone holds copyright for enforcement and licensing. The standard must, however, avoid rubber-stamping: passive use should not qualify. Claimants should show how they shaped or curated the AI’s output.
In patent law, we endorse Yuan Hao’s “constructive inventorship” proposal (Hao, 2023). A human should be named inventor if they: (a) define the problem for the AI, (b) recognise the value of the AI’s output, and (c) disclose the AI’s role. This approach aligns inventorship with actual human contribution, prevents inventions from falling through legal cracks, and maintains accountability.
Disclosure requirements should also evolve. Just as gene sequences or funding sources must be disclosed in some patent filings, applicants should declare AI involvement. This supports transparency and helps regulators later assess algorithmic bias – for example, if a drug proves less effective in certain populations due to skewed training data.
Together, these reforms would close the IP gap, preserve the principle of human credit, and promote responsible, bias-aware centaur creativity.
B. Implement Accountability Mechanisms for Biased AI Outputs
Entities using AI for creative or content production should establish clear accountability systems to catch and address biased outputs. This includes internal review protocols where human reviewers vet AI-generated content for fairness before release. Legally, companies should be held responsible for AI-generated material just as they would for human-authored content. For example, if an AI system generates discriminatory loan language, the institution should face liability as if a staff member had written it. Such a rule could be codified in consumer protection or anti-discrimination law, removing “the AI did it” as a defense and reinforcing the need for scrutiny.
In creative industries like publishing, media, and advertising, industry associations can adopt codes of conduct for responsible AI use. These could require efforts to ensure diversity in AI-generated representations and the use of sensitivity readers or bias auditors for AI-assisted works. These measures promote public trust and affirm that humans remain in charge of ethical oversight – not merely passive recipients of algorithmic output.
C. Encourage (or Require) Disclosure of AI Involvement in Creative Works
Transparency supports both fair attribution and bias management. Creators should disclose significant AI involvement in their work. Academic journals already require such disclosures, typically in acknowledgments or methods sections when tools like ChatGPT assist in drafting or analysis. This norm could expand to books, news articles, and visual art, where a brief note could indicate if text or images were AI-generated or AI-assisted. Disclosure adds context and signals that human creators took responsibility for the AI’s contribution.
In some domains, disclosure might be legally mandated – for instance, in political ads or news content to combat misinformation. In the creative sphere, disclosure is more about intellectual honesty and public awareness. It helps audiences interpret works more accurately and fosters accountability. As AI use becomes routine, such notices may be as standard as ingredient labels – overlooked when uncontroversial, crucial when something goes wrong.
Disclosure should not diminish the human creator’s authorship or accountability. It is not a shield to deflect criticism. Rather, it affirms that AI was used under human supervision. Legal instruments – like publishing contracts – could clarify this: if AI is used, the human author warrants that they reviewed, edited, and stand by the final product.
D. Strengthen Bias Mitigation in AI Tools and Provide User Controls
AI developers should build bias mitigation features directly into creative tools and offer users control settings to guide outputs. For example, image generators could include a “diversity” mode that ensures demographic variation in people-related prompts – some systems already do this (Brookings Institution, 2024). Language models could offer options to flag or filter potentially biased phrasing before content is finalised.
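As an illustration of the kind of user-facing control described above, the short Python sketch below flags potentially gendered phrasing in a draft for a human editor to review. The watch-list, patterns, and example sentence are hypothetical and far simpler than what a production language-model filter would use; the point is that the final judgment stays with the human.

```python
# A toy sketch of a pre-publication check that flags potentially gendered
# phrasing for human review. Real products would use trained classifiers and
# curated term lists; the patterns and example text here are illustrative only.
import re

# Hypothetical watch-list: phrasings a human reviewer may want to inspect.
WATCH_PATTERNS = {
    "gendered role pairing": re.compile(
        r"\b(male|female)\s+(scientist|nurse|engineer|CEO)\b", re.IGNORECASE
    ),
    "generic 'he' for a professional": re.compile(
        r"\bthe (scientist|doctor|engineer)\b[^.]*\bhe\b", re.IGNORECASE
    ),
}

def flag_for_review(text: str) -> list[tuple[str, str]]:
    """Return (reason, matched span) pairs for a human editor to inspect."""
    findings = []
    for reason, pattern in WATCH_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((reason, match.group(0)))
    return findings

draft = "The scientist explained that he relied on a female nurse for the trial."
for reason, span in flag_for_review(draft):
    print(f"[{reason}] -> {span!r}")
```

Such a filter does not decide anything; it surfaces candidate passages so that the human collaborator, not the tool, makes the editorial call.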
This ties back to attribution: if tools are better calibrated from the outset, human collaborators spend less time correcting bias and more time contributing creatively. The human still earns the credit, but it becomes credit for creative input – not merely bias control. Making AI less biased by default helps ensure the human’s role is additive rather than corrective.
Policymakers can accelerate this by issuing technical standards through bodies like the National Institute of Standards and Technology (NIST) and encouraging procurement policies that favour AI tools with built-in fairness controls. This ensures that when humans use such tools, their outputs are more likely to be inclusive, accurate, and socially responsible – enhancing both the quality and credibility of centaur-generated work.
E. Monitor the Evolution of AI Creativity and Reassess Credit Rules
While current AI systems remain tools shaped by human prompting and oversight, the rapid pace of development suggests that more autonomous forms of AI creativity could emerge. To prepare for this possibility, policymakers should continuously monitor technological progress and maintain flexibility in legal standards. Establishing a standing commission or advisory body on AI and intellectual property could help track how frequently AI contributes to creative and inventive work and assess whether existing credit rules remain adequate. If AI ever reaches a point where attributing authorship to it becomes viable, that would mark a paradigm shift with implications for legal personhood, incentive design, and cultural norms. Until then, human attribution must remain the default – but the framework should be revisited regularly to stay aligned with technological change.
To support this evolution, future research must address several underexplored questions. One area of inquiry is how to quantify the division of creative labor between humans and AI, and whether legal thresholds should be established to resolve borderline authorship or inventorship claims (Hao, 2023). Another involves developing reliable methods to audit bias in AI tools across media types – text, image, music – and exploring whether such audits should be legally mandated (Eckert, 2023). As new doctrines like constructive inventorship or authorship gain traction, research must also consider their impact on innovation incentives – such as whether they might encourage overuse of AI for quick IP gains, and how quality standards can be upheld. Additionally, some scholars have proposed awarding limited recognition to AI developers (via sui generis rights) for highly novel outputs produced by their models. Whether this would strengthen or undermine human-centric IP systems remains contested. Finally, comparative legal research is needed to understand how other jurisdictions – such as the UK’s “computer-generated works” provision, which assigns authorship to the person who made the arrangements necessary for the work’s creation – might inform U.S. and international policy.
These questions call for interdisciplinary engagement between law, computer science, and ethics to ensure that credit, responsibility, and regulation evolve in step with AI’s capabilities. Forward-looking policy should preserve the primacy of human creators, while remaining agile enough to address the complex challenges emerging at the intersection of automation and creativity.
CONCLUSION
As AI assumes a larger role in creative and inventive work, the challenge extends beyond engineering into law, ethics, and human judgment. This paper has argued that while AI may generate content, it lacks the agency, intention, and accountability that authorship or inventorship requires. Credit for AI-assisted outputs – especially when they are biased or flawed – must rest with the human. Across case studies and doctrinal analysis, the throughline is clear: the human collaborator remains the one who frames the problem, selects from possibilities, curates the final form, and ultimately stands behind the result.
Current legal frameworks have not fully caught up. In copyright, there remains ambiguity about what counts as sufficient human input. In patent law, a rigid insistence on human inventorship risks leaving valuable inventions unprotected. Meanwhile, biased outputs from generative systems are already shaping culture and public discourse, often without adequate review or accountability. This paper has proposed a roadmap of reforms: clarifying standards for human authorship and inventorship, requiring disclosure of AI use, mandating accountability for biased outputs, supporting bias mitigation at the design level, and maintaining policy agility through ongoing monitoring and interdisciplinary research.
What ties these reforms together is the principle that legal credit should reflect moral and creative responsibility. The centaur model offers a compelling metaphor: AI may expand the boundaries of what is possible, but it is the human who gives direction and meaning. Law and policy must reinforce that hierarchy, not dilute it. With careful reform, AI can be integrated as a force multiplier for human creativity – without eroding the values of originality, fairness, and accountability that intellectual property seeks to protect.
In the end, the question of authorship in the age of AI is not only about law or liability – it is about meaning. To create is to assert presence in the world, to shape the raw data of experience into something that reflects judgment, intention, and value. AI can imitate, generate, and even surprise, but it cannot care. It cannot doubt, reflect, or be held accountable. Creativity, in its highest form, is a human act not because it excludes tools, but because it includes conscience. As we navigate this new era of hybrid authorship, the enduring task of the law is to trace the invisible line between assistance and agency – to ensure that wherever a spark of judgment is struck, the person who lit it remains visible through the smoke. In every centaur creation, the machine may help carry the brush, but it is still the human hand that paints the world.
BIBLIOGRAPHY
Books
Abbott, R. (2020). The Reasonable Robot: Artificial Intelligence and the Law. Cambridge University Press.
Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Review Press.
Academic Articles and Reports
Bianchi, F., Kalluri, P., Durmus, E., Ladhak, F., Cheng, M., Nozza, D., Hashimoto, T., Jurafsky, D., Zou, J., & Caliskan, A. (2022). Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale. arXiv. https://arxiv.org/abs/2211.03759
Felin, T., & Holweg, M. (2024). Theory Is All You Need: AI, Human Cognition, and Causal Reasoning. INFORMS. https://pubsonline.informs.org/doi/10.1287/stsc.2024.0189
Hao, Y. (2023). The Rise of Centaur Inventors and the Concept of Constructive Conception. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4519145
Pamidighantam, B. A. (2025). AI and Biases – An Inquiry into “Truth”. https://bhriguapamidighantam.com/2025/02/02/ai-and-biases-an-inquiry-into-truth/
Zhou, M., Abhishek, V., Derdenger, T., Kim, J., & Srinivasan, K. (2024). Bias in Generative AI. arXiv. https://arxiv.org/abs/2403.02726
News Articles and Online Publications
Eckert, S. (2023). AI image generators reinforce stereotypes, study finds. Business Insider. https://www.businessinsider.com/study-ai-image-generators-reinforce-stereotypes-2023-6
Leswing, K. (2023). AI tools portray CEOs as white men 97% of the time, research finds. Business Insider. https://www.businessinsider.com/ai-tools-portray-ceos-as-white-men-research-2023-6
Science.org. (2023). Top journals say AI can’t be an author. Science. https://www.science.org/content/article/chatgpt-authorship-debate
Interesting Engineering. (2023). Can ChatGPT be listed as an author? Journals weigh in on AI authorship. Interesting Engineering. https://interestingengineering.com/culture/ai-authorship-academic-publishing
Legal Cases and Government Documents
Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018).
Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022).
U.S. Copyright Office. (2023). Policy Statement: Registration Guidance for Works Containing AI-Generated Material. https://www.copyright.gov/ai
Webpages and Online Resources
Polsinelli. (2023). Federal Circuit Denies DABUS Patent Appeal. Polsinelli. https://www.polsinelli.com/dabus-case-summary
Wikimedia Commons. (2024). Théâtre D’opéra Spatial [digital image]. https://upload.wikimedia.org/wikipedia/commons/b/bf/Th%C3%A9%C3%A2tre_D%E2%80%99op%C3%A9ra_Spatial.jpg
