No AI at GCN Press

As a result of the rapid AI invasion, US book publishers are facing serious legal, financial, and ethical challenges. Legally, there are more than one hundred lawsuits related to AI companies' practice of taking millions of copyrighted books without permission. Financially, the rise of "AI slop" (content regurgitated by chatbots and sold on Amazon), among other factors, makes it harder for readers to find the best professionally written and published books. Ethically, publishers must make thoughtful choices about whether and how to use AI tools.

In a recent article published by the Independent Book Publishers Association, the writer presented the diverse ways that US book publishers are handling these challenges. One industry leader expressed a common concern: "[Humans] are intentionally ceding our position as the most intelligent and creative beings on this planet to something synthetic that a handful of people have engineered."

Thankfully, many professional publishers, like GCN Press, are refusing to use the technology for any creative aspect of writing, editing, or designing books and articles. However, some publishers are beginning to use AI tools for back-office tasks, such as accounting and boilerplate emails and contracts.

GCN Press is a small, independent publisher of books and articles about Christian philosophy, and the theology of work, economics, and technology. We take our small efforts seriously.

First and foremost, we firmly believe that all work should express the character of God. This means that we want to work in the way God has designed us to work, as human beings made in the image of a creative, relational, moral, rational, and loving God. That worldview causes us to reflect seriously on the use, or avoidance, of any technology, including artificial intelligence. Profit and efficiency take a back seat to our moral values.

In this context, we want to make sure that those who read Work Matters and our books know our position on using AI. We realize that there are many types of AI, with each being designed for different purposes. So, what follows mainly pertains to GCN Press’s work in publishing.

Our AI Policy

Stated simply, we believe that humans, not algorithms, should write, edit, and publish books. GCN Press (A) will not publish any books, articles, or audiobooks generated or developed in whole or in part by artificial intelligence; (B) will not use AI for any element of our editorial, graphic design, publicity, or office work; and (C) will not license our books to AI companies for any reason, thereby protecting the intellectual property rights of authors.

Therefore, you can trust that nothing we publish will involve the use of AI tools. We believe in humans more than algorithms.

Our Reasons

Some of our reasons pertain to the unethical business models that undergird generative AI companies, which reflect values that we do not wish to support. Other reasons pertain to not off-loading our God-given capacity for creative work and careful thinking to machines.

First, to train their large language models, most AI companies have taken troves of copyrighted content without permission or payment: the hard work of journalists, authors, and publishers. This is comparable to how the Communists confiscated physical property (e.g., homes, farms, and businesses) during the first half of the twentieth century. When companies blatantly disregard laws that protect intellectual property, they undermine the ability of writers, artists, publishers, business leaders, engineers, and scientists to make a living from their work. There are now more than one hundred lawsuits in the US related to copyright infringement by AI companies. So far, the outcomes seem to agree with us: The AI company Anthropic recently agreed to pay $1.5 billion to settle claims that it had pirated the books of more than three hundred thousand authors and publishers.

Second, as we reported in a recent book review, AI companies are clearly prioritizing profit over any commitment to the proper treatment of their low-level employees, or to addressing the already-evident harmful effects of their products on individuals and communities.

Third, AI companies are consolidating more and more control over information and data. Their "winner-take-all" approach leads to monopolies, which in turn undermine the competition needed for a healthy economy. Moreover, AI company leaders gain increasingly concentrated power over what people will read, watch, hear, and think about.

Fourth, AI companies show little or no concern for pursuing factual truth. This will likely fuel the already serious problem of disinformation and misinformation caused by social media companies, with tragic consequences. For example, OpenAI's video generation system, Sora, is making it difficult to know what is visually true or fake; it has already been used widely on social media as a tool for disinformation. NewsGuard, an independent company that audits the factual accuracy of AI chatbots like ChatGPT, found in April 2025 that "the 11 leading chatbots collectively repeated false claims 28.18 percent of the time" and that these programs emit false information 41.51 percent of the time. These numbers are identical to previous audits, indicating no overall progress in curbing false information. If you had a friend who lied to you 40 percent of the time, would you still trust him?

Fifth, AI programs will likely increase what Ross Douthat, a journalist who is a Christian, calls cultural decadence—the decline or stagnation of creativity and goodness in our culture. Neuroscientist Erik Hoel stated it this way:

We find ourselves in the midst of a vast developmental experiment. [The culture is] becoming so inundated with AI creations that when future AIs are trained, the previous AI output will leak into the training set, leading to a future of copies of copies of copies, as content becomes ever more stereotyped and predictable … Once again we find ourselves enacting a tragedy of the commons: short-term economic self-interest encourages using cheap AI content … which in turn pollutes our culture and even weakens our grasp on reality (The New York Times, March 29, 2024).

We at GCN Press encourage our readers to think carefully and become well-informed about AI technology, specifically through a scriptural lens, before blithely accepting it as the new normal. We have the freedom, and indeed the moral responsibility, to think scripturally about whether and how to use it. We hope our position gives you some food for thought.
