AI is not out to get you: An editor’s perspective

If you use the Internet, you’ve encountered artificial intelligence (AI)—whether you realize it or not. For some, AI is a growing concern; for others, it’s just another topic for small talk at the office coffee machine.

Maybe your boss has suggested using ChatGPT to draft emails and speed up workflow. Maybe you’ve gritted your teeth while dealing with a chatbot when all you wanted was to talk to a human. AI is becoming more embedded in our daily lives, raising general questions: What is it? How is it changing our lives? And what does this mean for human development? Despite concerns, AI isn’t replacing human intelligence—it’s reshaping how we interact with each other and with technology.

It has been here longer than you think

AI didn’t just appear overnight. It’s been shaping technology for decades, with major milestones along the way. In 1950, Alan Turing designed an algorithm for playing chess, but computers at the time were not powerful enough to run it. Early AI models powered spellcheck and predictive text. More recently, sophisticated tools like Grammarly, an advanced spellchecker for refining grammar and clarity, have emerged. Now we have large language models (LLMs) like ChatGPT, which can provide users with cohesive text on virtually any topic in seconds.

I started wondering about LLMs a few years ago while editing a stream of computer science papers. The sheer complexity of the concept fascinated me—it seemed almost impossible. How could we design machines capable of understanding language, responding naturally, and even holding conversations? The idea of a classic AI takeover is right out of Stanley Kubrick’s 2001: A Space Odyssey (1968). We humans have been thinking about this for a long time.

Near the end of 2022, OpenAI introduced ChatGPT. What once felt like science fiction became a reality. ChatGPT isn’t just another chatbot with pre-programmed responses—it can generate coherent text, answer complex questions, and mimic writing styles. The jump from traditional AI models to LLMs seems staggering to me (based on my limited understanding of the background processes). The biggest shift—exciting for some, unsettling for others—is that AI has moved beyond programmed responses. It is a system that can learn and adapt based on context and how we interact with it.

While LLMs are changing how we interact with technology, this isn’t the first significant shift we’ve experienced. Gen Xers can tell you first-hand what life was like before the Internet. Just as early search engines suddenly made knowledge more accessible, AI-powered systems are making language-related tasks—writing, research, communication—faster and (arguably) more efficient. But, as with past technological shifts, new tools come with new challenges.

The ethics of AI

Just like the rise of the Internet, AI raises ethical issues such as misinformation. The need for responsible use is paramount.

I keep reflecting on how AI is reshaping academia and how my colleagues in the classroom are adapting. Many are skeptical. When Google Scholar first emerged, academics were skeptical then, too. Would easy access to research encourage laziness? Would students stop engaging with primary sources? Now, similar concerns arise with AI. If students rely on ChatGPT to generate essays or summarize texts, are they genuinely learning and engaging with the material?

How do different disciplines perceive and use AI?

Some academic journals now include AI guidelines that define its acceptable and unacceptable uses in research. Some allow AI-generated summaries for organizing research, while others restrict its use to proofreading. But where is the line? If AI cannot be credited as an author, does using it constitute plagiarism? And what about intellectual property rights when AI-generated content is trained on human-created work? Because AI-generated text may inadvertently replicate copyrighted material, some publishers and journals have strict policies against AI-assisted writing.

Key research considerations:

  • AI can generate text that sounds scholarly but lacks factual accuracy. There have already been cases of AI-generated citations pointing to non-existent sources, undermining research integrity. It can also reinforce dominant narratives while marginalizing underrepresented perspectives.

  • Since AI is trained on existing data, it can inherit biases and inaccuracies. When researchers rely on AI-generated summaries or literature reviews without critical verification, they risk perpetuating misinformation, reinforcing one-sided perspectives, and overlooking key insights. Even when sources are referenced by AI, they might be incorrect, outdated, or nonexistent ("hallucinated").

  • Academia needs clear AI guidelines. Can AI be used to enhance critical thinking rather than replace it? The real test isn’t whether AI can assist in research—it’s whether academia can harness it without compromising the integrity of knowledge generation and access.

Of course, there are other ethical concerns, such as who controls AI development and how we ensure responsible use. Using AI to generate text without proper attribution may violate ethical guidelines. Furthermore, plugging unpublished research, confidential data, or sensitive information into tools like ChatGPT poses risks. Another area of concern is how published information is used to train AI models, often without consent from the original creators. These questions of ethics and intellectual property rights will inevitably, and necessarily, evolve as AI advances.

AI vs. humans: AI is not out to get you. Here’s why.

As an editor, I am often asked, “Aren’t you worried AI will take your job?” My short answer? No. AI is a tool, not a replacement. Used as a tool, it can help streamline the editing process while ensuring accuracy. Used as a replacement, its impact will be palpable and most likely negative. Transparency is also key for editors: my website states that my editing process relies on human judgment, supported by tools such as Grammarly, to ensure accuracy. Similarly, authors must recognize that authenticity and integrity are essential to effective writing. In other words, writers are expected to write their papers, and editors are expected to edit them.

I remain confident that, in my profession, humans will always be tasked with helping other humans clarify the intention, meaning, and nuances of their writing. Editing isn’t just about fixing grammar. It’s about understanding tone, intent, and nuance, which AI struggles with. Language is deeply human. AI can assist with organizing ideas and revising sentences, but it cannot replicate the discernment and expertise of a skilled editor. Because conversation with AI can feel so fluid, it is easy to assume it understands us; at its core, however, an LLM simply predicts the next word based on patterns. Human thought is based on more than patterns, isn’t it?
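
To make that “predicting the next word” idea concrete, below is a minimal sketch in Python of what pattern-based prediction can look like at its very simplest. The sample text and the predict_next function are invented for illustration; real LLMs rely on neural networks trained on vast corpora rather than simple word counts, but the underlying task is the same: given the context, predict the most likely next word.

    from collections import Counter, defaultdict

    # A toy "next-word predictor": count which word follows each word in a
    # tiny sample text, then predict the most frequent follower. This is a
    # deliberately simplified illustration of pattern-based prediction, not
    # how any real LLM is implemented.

    training_text = (
        "the editor reviews the draft and the editor returns the draft "
        "with comments and the author revises the draft"
    )

    followers = defaultdict(Counter)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        followers[current_word][next_word] += 1

    def predict_next(word: str) -> str:
        """Return the word most often seen after `word` in the sample text."""
        if word not in followers:
            return "<unknown>"
        return followers[word].most_common(1)[0][0]

    print(predict_next("the"))     # "draft" -- it follows "the" most often
    print(predict_next("editor"))  # "reviews" -- a tie, broken by first occurrence

Notice that the model “knows” nothing about editors or drafts. It only tracks which words tend to follow which, and that gap between statistical fluency and genuine understanding is exactly where a human editor is still needed.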

The takeaway:

  • AI might correct a sentence grammatically but miss the intended meaning.

  • AI can mimic writing styles but struggles with nuance.

  • AI could make a formal document too casual or vice versa.

  • AI doesn’t verify information. A human editor ensures accuracy.

Example

  • Suppose a draft contains the phrase “the time of travel.” An AI tool will typically settle on a single reading, such as the timing of the trip. A human editor would instead ask the author to confirm the intended meaning: the author could be referring to when the trip takes place, the duration of the travel itself, the length of time spent away, etc.

Moving forward

I felt inspired to share these preliminary thoughts as I process how I plan to use—or avoid—this new technology in my work.

Am I going to ask ChatGPT to write my next novel? No.

Am I going to use AI as an additional proofreading tool to enhance accuracy in my editing process? I’m undecided.

At the end of the day, AI isn’t out to get you. It lacks intention. But it’s here to stay, and how we adapt will shape the future. What matters most is maintaining our role as critical thinkers, ensuring that AI supports rather than replaces human creativity and knowledge.

Further reading

Britannica. (n.d.). AI and ethics: 5 ethical concerns of AI & how to address them. Retrieved from https://www.britannica.com/money/ai-ethical-issues

Edit Republic. (n.d.). Will AI replace proofreaders and editors? 5 reasons why it won’t. Retrieved from https://editrepublic.com/blog/will-ai-replace-proofreaders-and-editors-5-reasons-why-it-wont/

Harvard Business Review. (n.d.). Harvard Business Review. Retrieved February 1, 2025, from https://hbr.org/

HISTORY. (n.d.). In 1950, Alan Turing created a chess computer program that prefigured A.I. Retrieved from https://www.history.com/news/in-1950-alan-turing-created-a-chess-computer-program-that-prefigured-a-i

Kaebnick, G. E., Magnus, D. C., Kao, A., Hosseini, M., Resnik, D., Dubljević, V., Rentmeester, C., Gordijn, B., & Cherry, M. J. (2023). Editors’ statement on the responsible use of generative AI technologies in scholarly journal publishing. The Hastings Center Report, 53(5), 3–6. https://doi.org/10.1002/hast.1507

MIT Technology Review. (n.d.). MIT Technology Review. Retrieved February 2, 2025, from https://www.technologyreview.com/

OpenAI. (n.d.). Research. Retrieved February 1, 2025, from https://openai.com/research

PerfectIt. (n.d.). Can artificial intelligence ever replace human copy-editors? Retrieved from https://www.perfectit.com/blog/can-artificial-intelligence-ever-replace-human-copy-editors

Stanford Institute for Human-Centered AI. (n.d.). AI Index Report. Stanford University. Retrieved February 2, 2025, from https://aiindex.stanford.edu/
