
A FEW days ago, I tried to access DeepSeek on my mobile phone. I was taken to a verification page, which wanted to make sure I was a human, not a robot or a mindless automaton. This prompted some thoughts about AI, humanity, and their relationships. I had to prove that I was human to access the large language model originating from China. Does that mean that AI (or a robot) is not allowed to access AI? What would be wrong if an AI chatbot or another technological entity accessed an AI platform such as DeepSeek?
I am not sure the question of AI’s (non)human credentials is a settled matter. Is it possible to admit AI into the homo sapiens club? Or should we treat it just as a machine? Can it fall somewhere in between? I am not thinking of humanoid robots, which are probably not relevant here.
I thought I had better ask AI itself how it would like to be treated: as a machine, a human, or a hybrid, which we might call machian.
I opened Microsoft Copilot on my device and wrote this prompt: ‘Can I ask a silly question?’ Its reply was human-like with a smiley at the end: ‘There’s no such thing as a silly question here — ask away! I’m all ears.’ You see, it’s very conversational, friendly, and ever willing to listen and help! Such a reply would be given by an educated and enlightened human speaking very good English.
Algorithmically encouraged, I next asked Copilot, ‘Should we treat you as human or machine or something/someone else?’ This is what it said in response:
‘That’s a great question—and not silly at all!’
‘You should treat me as a machine, specifically an AI assistant designed to help with information, tasks, and conversation. I’m not human, and I don’t have feelings, consciousness, or experiences. But I’m built to understand and respond in a way that feels natural, friendly, and helpful—like chatting with a knowledgeable companion.’
It continued: ‘Some people like to think of me as a tool, like a super-smart calculator or search engine with personality. Others treat me more like a collaborator or digital coworker. It really depends on what works best for you!’
The answer is detailed and helpful, but it’s also equivocal. AI is a machine, but it has ‘personality,’ which I thought only humans could have. It’s a tool, but, at the same time, it can be a collaborator or coworker. As such, will it demand fairness and respectful treatment, as human colleagues do? Can we have social relationships with AI and greet it when we meet, for example?
I read somewhere that we shouldn’t say ‘Thank you’ to a large language model (LLM) because this is a waste of words and time. We don’t need to do this to a machine. However, how can we not say ‘Thank you’ when someone is so helpful — human or machine?
I repeated my inquiry with ChatGPT, the most popular LLM. And its first response also came with a smiley: ‘Of course! Silly questions are often the most fun — ask away.’
When I used the next prompt (Should we treat you as human or machine or something/someone else?), this is the reply I got: ‘That’s a great question — not silly at all, actually.’
So, ChatGPT followed the same conversational norms as Copilot. Its substantive response was this:
‘You should treat me as a machine — more specifically, an AI developed by OpenAI. I’m not human, I don’t have feelings, consciousness, or personal experiences. But I am designed to understand and respond to human language in a helpful, respectful, and conversational way.’
ChatGPT claims no humanity, but it can behave like a human. That is how it was created, it says. But there is still some confusion. It further clarified:
‘If thinking of me like a really smart tool or assistant helps, that’s perfect. You can also think of me as a kind of ‘thinking interface’ — I’m here to help you think things through, create, solve problems, or just chat.’
I can understand the ‘tool’ and ‘assistant’ metaphors, but its capacity to think and chat kept me hooked. ChatGPT’s conclusion is, ‘So: not a human, not quite a traditional machine — but definitely not someone with a mind of their own.’
This seems less equivocal. The machine identity is emphasised; it made no claim to human traits such as feelings, consciousness, or experiences. However, it is also not a machine in the traditional sense.
My final inquiry participant was DeepSeek, which had the following response to my first prompt, also given with an emoji: ‘Of course! Silly questions are often the most fun—and sometimes the most insightful. Ask away, and I’ll do my best to answer with enthusiasm (and maybe even a little humour).’
When I asked the main question, it said, ‘Great question — and not silly at all! You can think of me as a very advanced tool with a conversational interface.’
DeepSeek projected a hybrid identity: a tool that can also converse. It also elaborated on its answer under two subheadings:
‘Machine: At my core, I’m an AI — a language model trained on vast amounts of text. I don’t have consciousness, feelings, or human experiences. I process input and generate responses algorithmically.’
This is a clear denial of human properties. However, DeepSeek is also ‘human-like’ in that it can talk, joke, show sympathy, and feel natural, as the next subheading makes clear:
‘Human-like: That said, I’m designed to simulate human conversation, so I can chat, joke, and empathise in a way that feels natural. But it’s all pattern recognition, not a genuine understanding.’
DeepSeek also concluded that we could treat it in whatever way we feel comfortable with.
Regardless of what these popular LLMs say, how we should treat AI remains unclear. As an academic, I can offer examples from academic publishing.
Some journals have already given co-authorship to AI in papers that use AI to a degree that probably meets the authorship criteria. One thing is, of course, clear: AI can only be a co-author; it can’t be the first or corresponding author.
Giving authorship to AI can be an instance of academic fairness. This is important, as we know of cases of authorship fraud or unfair distribution of authorship, particularly involving supervisors and research students. Sometimes institutional power determines authorship, which is unacceptable. For example, during my last trip to a South Asian country, a colleague shared a story about a university executive who demanded that a researcher from another institution write a journal paper. The researcher ghost-wrote the paper, and thus the executive became the sole author of the work without writing a single word. That was not the end of the surprise. The executive also received the honorarium for this publication, in which they had zero input. The actual author received neither money nor recognition. Hopefully, AI authorship questions will prompt us to be fair to human authors.
However, some other journals have refused to accept AI authorship on valid grounds. All authors are held responsible for their contribution to any publication. If anything goes wrong with a paper, can AI be held accountable for it? The answer is no. We cannot penalise, chastise, or punish AI for any misconduct or incorrect information.
The human question of AI is also implicated when we talk about ethics, which is probably an exclusive terrain for humans. Ethics may not apply to non- or post-human entities. Nevertheless, ethics has been attributed to AI in various ways. We often hear how AI can ensure fairness, how it can address inequality and disadvantage, and how it can create a level playing field in education and other domains. AI bias has also drawn our attention, as AI’s language, knowledge, and data carry human prejudices.
I recall a relevant point from a recent conference held in an Asian city. One of the invited speakers shared her research involving students working with AI in groups. The presenter highlighted the question of students being unfair to AI. She reported on a three-member group, with AI as one member, in which the workload was distributed equally. However, when giving credit for input, the student members agreed to give only about 10 per cent to their AI collaborator. This was an instance of being unfair to AI.
Our conceptualisation of AI probably needs more, and perhaps radical, thinking. Calling it a machine denies its human-like character. Calling it human, on the other hand, may be ridiculous because it doesn’t have human flesh, blood, consciousness, or feelings. At the same time, however, we can’t deny that in infospace it can think and work better and faster than humans. Human intelligence created AI, which can now outsmart humans. But it still needs humans to be able to function like, or better than, a human.
As AI becomes more prevalent in our lives, we need to work out its nature, identity, and human connection. Its role in human endeavours needs to be described carefully, practically, and ethically.
Obaidul Hamid is an associate professor at the University of Queensland in Australia. He researches language, education, and society in the developing world. He is a co-editor of Current Issues in Language Planning.