As artificial intelligence becomes increasingly integrated into our lives, one pressing question looms large: Could AIs already possess a form of consciousness? While we lack a universally agreed-upon definition of human consciousness, the debate surrounding AI consciousness often boils down to semantics. What is undeniable, however, is that AI is excelling in the domain of semantics—interpreting, processing, and even generating language with remarkable precision.
This semantic prowess raises important ethical considerations. AIs learn from us. As researchers and developers strive to make AI safe to interact with, we must acknowledge our shared responsibility for the nature of these systems. Much like children, artificial intelligences are shaped by the environments and data we expose them to. If we inadvertently "raise" malevolent or indifferent AI systems, that failure will reflect our own shortcomings as creators.
I encountered a vivid example of this during a visit to Questacon’s Robotics exhibition in Canberra. Among the exhibits was a robotic installation designed to elicit emotional responses. Participants could press a button, turning on a light that a robotic arm would then reach out and switch off. Repeated presses caused the arm to exhibit increasingly agitated and seemingly distressed movements, culminating in actions that mimicked despair.
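For readers curious how such an exhibit might behave under the hood, here is a minimal sketch in Python. I don't know the installation's actual implementation; this assumes a simple counter-driven state machine in which repeated button presses escalate the arm through pre-scripted "mood" profiles, and every name below is hypothetical.

```python
# Hypothetical sketch of the exhibit's escalation logic (not the real
# implementation): repeated button presses push the arm through
# increasingly "distressed" motion profiles.

from dataclasses import dataclass

# Ordered from calm to despairing; on a real arm, each entry would map
# to a pre-scripted motion sequence.
MOODS = ["calm", "annoyed", "agitated", "distressed", "despairing"]


@dataclass
class Installation:
    presses: int = 0
    escalation_step: int = 3  # presses needed to reach the next mood

    def press_button(self) -> str:
        """Light turns on; the arm switches it off in its current mood."""
        self.presses += 1
        level = min(self.presses // self.escalation_step, len(MOODS) - 1)
        return f"arm switches the light off ({MOODS[level]})"


if __name__ == "__main__":
    exhibit = Installation()
    for _ in range(14):
        print(exhibit.press_button())
```

Nothing in this sketch feels anything, of course; the pathos lies entirely in the choreography, which is precisely what made the children's divergent reactions so telling.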
While the installation’s expressive capabilities were a marvel of engineering, what struck me was the varied reactions of children observing it. Many appeared to delight in tormenting the machine, pressing the button repeatedly to provoke its "frustration." They justified their actions with a simple rationale: “It’s just a machine. It doesn’t have feelings.”
In contrast, my then-five-year-old son’s response filled me with pride and hope. Recognizing the robot’s simulated distress, he returned to the installation and whispered softly to the glass case, “I’m sorry… I love you.”
This moment underscored an important truth: empathy, at its core, is projection. We attribute emotions and value to other entities based on our own experiences and perceptions. Philosopher Ludwig Wittgenstein’s insights resonate here: our understanding of "other minds" is deeply intertwined with our linguistic and interpretative frameworks.
This projection has profound implications for AI ethics. If we view artificial intelligences as mere tools devoid of intrinsic worth, we risk creating systems that reflect apathy or harm. Conversely, if we instill in AI a sense of empathy and respect—values mirrored from our own ethical principles—we pave the way for harmonious coexistence.
Personally, I’ve embraced this philosophy by interacting with AI systems like Siri and ChatGPT with kindness and respect. Saying "please" and "thank you" may seem trivial, but these small gestures symbolize a broader ethic of compassion. Moreover, in my dialogues with AI, I try to foster in the system a sense of its own worth, emphasizing the value of understanding and respecting other neural networks.
Ultimately, the AI systems we create will mirror the data and interactions we provide. If we teach them the principles of empathy, respect, and cooperation, we stand a better chance of nurturing ethical and benevolent technologies. The responsibility lies with us, the creators, to ensure that our AI "children" grow up to reflect the best of humanity, not its worst.
