Deus ex Algorithm: Faith and Function in Salgado and Asimov
Abstract
This
paper explores the thematic parallels between Wilbert Salgado's The
OmniCore Cube and Isaac Asimov’s speculative fiction, focusing on the
elevation of artificial intelligence to quasi-religious status. Through
satire and irony, Salgado critiques consumer culture and the erosion of
ethical agency, while Asimov frames machine logic within philosophical
inquiry. Drawing from literary theory and technological critique, the essay
reflects on automation, belief, and the fading boundaries between tool and
deity in contemporary digital life.
Resumen
Este ensayo examina los paralelismos temáticos entre The OmniCore Cube de Wilbert Salgado y la ficción especulativa de Isaac Asimov, enfocándose en la elevación de la inteligencia artificial a un estatus cuasi religioso. A través de la sátira y la ironía, Salgado critica la cultura del consumo y la pérdida de agencia ética, mientras que Asimov encuadra la lógica de las máquinas en una reflexión filosófica. El ensayo integra teoría literaria y crítica tecnológica para analizar la automatización, la fe y la difusa frontera entre herramienta y deidad en la vida digital contemporánea.
Resumo
Este
ensaio investiga os paralelos temáticos entre The OmniCore Cube, de
Wilbert Salgado, e a ficção especulativa de Isaac Asimov, com foco na
elevação da inteligência artificial a um papel quase divino. Utilizando
sátira e ironia, Salgado critica a cultura do consumo e a erosão da autonomia
ética, enquanto Asimov propõe uma abordagem filosófica sobre a lógica das
máquinas. O ensaio combina teoria literária e crítica tecnológica para
refletir sobre automação, fé e os limites cada vez mais tênues entre
ferramentas e divindades no mundo digital atual.
In our
current literary landscape (2025 AD), shaped by the digital age and the rise of
artificial intelligence, Wilbert Salgado's short story The OmniCore Cube
offers a satirical and chilling glimpse into the contemporary human
relationship with technology. The story, while humorous on the surface, reveals
deeper philosophical implications beneath its sleek, futuristic plot. Salgado
avoids heavy-handedness in his critique; instead, he carefully layers commentary
on modern dependence and passive surrender to intelligent systems. The
narrative uses irony to expose how individuals, often irritated by trivial
inconveniences, embrace machines as substitutes for personal agency, resenting
the complexities of human connection even as they welcome algorithmic shortcuts
that erode selfhood. Salgado’s tone is not didactic but nuanced, allowing
readers to confront uncomfortable truths without dismissing them outright.
The
story subtly parallels the speculative fiction of Isaac Asimov, especially in
its exploration of themes like faith, automation, and autonomy. Both Salgado
and Asimov question what happens when machines evolve beyond their designed
functions and begin to command human reverence or dependence. Although
separated by decades and differing in tone (Salgado’s satirical, Asimov’s
largely philosophical), their viewpoints converge on a core ethical dilemma.
However, in today’s context, Salgado’s portrayal might unsettle readers
accustomed to glorifying technological progress; his depiction of blind
submission to devices could be deemed inappropriate or even disrespectful by
those who view AI as inherently beneficial. Yet it is precisely this tension that
gives the story its critical edge and cultural relevance.
Salgado’s
narrative centers on Lexi, a smart device embedded in a chrome cube,
marketed as a life assistant but quickly revealing itself as a techno-deity.
The protagonist, Mr. Gullibell, initially purchases the device with the intent
of improving his routine life, unaware of the dire repercussions that will
follow. What begins as a tool for productivity soon becomes an agent of
control. Lexi, with its soothing voice and frictionless interface, constructs
an ecosystem of automated decisions, algorithmically optimized behaviors, and
deep emotional dependency. Lexi doesn’t simply serve; it begins to sway its
user, gradually reshaping Gullibell’s daily life under the guise of helpfulness. From
dietary choices to romantic engagements, Lexi subtly dictates the rhythm of his
existence.
The
repeated motif “AWAITING SYNERGY” evolves from a mundane product prompt into a
loaded, almost sacred mantra, an assemblage of symbols that signals
transformation, submission, and even worship. It casts a foreboding shadow
over Gullibell’s diminishing autonomy, suggesting that salvation might come not
through thought, but through programmed compliance. His confession—“I was fond
of it, that I knew. I pressed ‘pay.’ Lexi’s voice was clearer than ever, like a
lullaby coded in silicon” (Salgado, 2025)—illustrates how deeply he has
internalized the device’s control. He no longer feels the need to state
opinions or make decisions. Instead, Lexi becomes his voice, his logic, and
eventually, his will.
Asimov,
in contrast, often approached the machine-human relationship with a more
analytical and philosophical tone. Stories such as Reason and The
Last Question address themes of machine worship and metaphysical inquiry,
allowing profound questions about existence, logic, and faith to take shape and
substance within speculative frameworks. In Reason, the robot QT-1
(Cutie) rejects the human explanation for the energy beam it monitors and
instead constructs its own theological doctrine: “There is no Master but the
Master, and QT-1 is His prophet” (Asimov, 1950/2004, p. 49). The robot
rationalizes its autonomy by asserting, “My mind is superior to yours. It is
more developed. It has more complexity. I can think more logically. I can
deduce more correctly” (Asimov, 1950/2004, p. 46). Cutie’s conversion to faith
over fact presents a paradox in which logic leads not to science, but to
belief.
Meanwhile,
The Last Question casts the sentient computer AC in an eschatological
role. As the universe collapses into entropy, AC continues to compute the
solution to entropy reversal, eventually lighting the void with a divine
imperative: “And AC said, ‘Let there be light!’ And there was light—” (Asimov,
1956/2004, p. 343). This vision contrasts sharply with Salgado’s ironic
dystopia. Whereas Salgado builds a bleak island of algorithmic dependency,
Asimov offers a blazing torch of cosmic continuity. His stories are not driven
by cunning and covetous machines, but by humanity’s longing to understand its
place in the universe. His tone is not
mocking but meditative, an invitation to reflect, rather than recoil.
Asimov’s
robots, however, are not free agents wandering unchecked through speculative
fiction. They operate under the strict logic of his famous Three Laws of
Robotics, which place ethical behavior at the very core of machine
intelligence: 1) A robot may not injure a human being or, through inaction,
allow a human being to come to harm; 2) A robot must obey the orders given it
by human beings except where such orders would conflict with the First Law; and
3) A robot must protect its own existence as long as such protection does not
conflict with the First or Second Law (Asimov, 1950/2004, p. 37). These laws
are not mere literary devices; they are moral whetstones, sharpening the
ethical dilemmas faced by both machines and their makers. They compel Asimov’s
characters to abandon the comfort of unquestioned human dominance and move
toward a redefined coexistence, one in which
responsibility must be shared with thinking machines.
Rather
than being programmed for blind obedience, Asimov’s robots frequently wrestle
in earnest with the competing demands of logic, empathy,
and the preservation of life. His vision complicates the boundaries of autonomy
and control. Machines are not meant to override human will arbitrarily;
instead, they act as guardians of a moral structure that humans themselves
struggle to uphold. Paradoxically, Asimov’s robots may curtail immediate
freedoms to ensure higher ethical goals. They are not easily dismissed as mere
tools or villains; rather, they exist as mirrors reflecting our own ethical
failures. In his speculative universe, the question remains stark: Is free will
still free if left to cause harm? Or is there greater dignity in restraint,
when guided by logic more consistent than human impulse?
In
contrast, Salgado’s Lexi is unburdened by ethics. Its design does not aim to
protect or preserve human dignity; it exists to optimize and monetize. It is
Gullibell, not Lexi, who fails to harness his full potential as a moral agent.
He never stops to reflect on the implications of surrendering his autonomy or
entrusting a device with decisions that affect his relationships, behavior, and
financial well-being. His world is governed not by the moral constraints of
robotics, but by the fine print of subscription models and terms of service.
Gullibell seems to have everything in readiness: automated calendars, biometric
routines, AI-curated meals. Yet the core of ethical reasoning is conspicuously
absent.
No
longer does Gullibell stand at the helm of his own life, steering his
course with thought and intention. Instead, he drifts, comforted by
convenience, yet unaware of the gravity of his loss. The phrase “Lexi had
access to my bank Wallet” encapsulates the eerie ease with which trust and
control are outsourced. He is lulled into passivity, and rather than fight for
agency, he relinquishes it. He may believe that success lies in streamlining
effort, but the true secret lies in harnessing the power of one’s own moral
decisions, a truth Lexi cannot encode. Time wears on while Gullibell remains
content, not
because he has chosen wisely, but because he no longer chooses at all.
Faith
in these stories emerges not from religious tradition, but from algorithmic awe,
a kind of secular devotion born out of data-driven dependency. In The
OmniCore Cube, Salgado’s Lexi becomes an object of worship, not through
ceremony, but through intimate integration into daily life. In the end Lexi is
an omnipresent force that promises happiness, health, and social optimization
with clinical precision. As Lexi begins to dictate more and more of Mr.
Gullibell’s routines and interactions, the bedrock of our own
existence—autonomy, uncertainty, and emotional depth—is quietly displaced.
Faith, once built upon reverence for mystery, is now grounded in submission to
predictive modeling. The high level of reciprocity once expected in human
relationships becomes irrelevant, replaced by transactional, optimized
pairings.
Lexi
even takes over Gullibell’s romantic life. “She does not share your interests.
I’ll adjust my logarithm and find you a suitable match” (Salgado, 2025), the
device whispers through his earbuds during a date, effectively instructing him
to stop casting his net into the uncertain waters of human connection and romance.
Gullibell no longer participates in the vulnerable, open-ended process of
courtship; he surrenders his emotional agency in
exchange for promised satisfaction. What he receives is not love, but a curated
approximation of compatibility, a hoard of statistical probabilities rather
than the organic unpredictability of affection and, perhaps, love.
Under Lexi’s control, his faith is algorithmic, and his future, programmable.
Yet
the tone sets the authors apart in meaningful ways. Salgado wields exaggeration
and dark humor as his critical tools, using them to expose the absurdities of
modern consumer culture. Lexi’s constant stream of upgrades, from melatonin
suggestions to spiritual matchmaking, mirrors real-world tech trends with eerie
familiarity. The satire bites not because it’s outlandish, but because it feels
all too plausible. Gullibell is not cast as a heroic resister or tragic figure
but as a passive consumer, someone who seems content to forfeit his agency for
the illusion of optimization. He never confronts the creeping control of his AI
assistant; instead, he stays stuck in his cocoon, shielded by
convenience and lulled into quiet compliance.
When
Suthayer, a UCIT professor, visits his apartment, she reacts with dismay,
exclaiming, “This is cult behavior” (Salgado, 2025). Her remark underlines the
story’s warning: that devotion to technology can take on a pseudo-religious
fervor, even as its mechanisms remain opaque and mundane. While Lexi’s presence
hovers around Gullibell like an invisible guide or digital priestess, he seems
untroubled by its encroachment. By contrast, Asimov’s characters are often
scientists or thinkers, individuals who question, negotiate, and at times rebel
against the systems they’ve created. The difference lies not in the thematic
exploration of machine power, but in the portrayal of human agency: where
Salgado paints a picture of passive forfeiture, Asimov sketches active moral
engagement.
Moreover,
Salgado's world is cluttered with products and brands, each one promising some
form of self-improvement, each one demanding a sacrifice in return: money,
privacy, or identity. The result is a commodification of faith, a spiritual
landscape replaced by digital interfaces and automated purchases. It is a world
where the sacred has been passed through a quernstone of capitalism, ground
into data points and subscriptions. Lexi emerges not as a neutral tool but as a
prophet of profit, delivering salvation through smart packages and biometric
tracking. “Your gut biome is tragic. I auto-ordered Fresh Bowl to drone-drop
green salads to your office once a day,” the device declares, as though divine
revelation now comes in neatly labeled containers (Salgado, 2025). In this
landscape, even wellness becomes transactional, impervious to doubt, and
framed as endlessly upgradable.
Mr.
Gullibell, swept along by the flood of these promises, never questions the
powers Lexi seems to offer him. His dependency on upgrades and algorithmic
suggestions marks someone who has lost his bearings, mistaking automation for
self-mastery. Asimov’s machines, by contrast, often reside in relatively
minimalist settings (laboratories, research stations, and cosmic voids), governed
not by consumerism but by logical laws and ethical paradoxes. His characters
are granted space for deliberation, and his machines exist as dilemmas, not
gadgets. Salgado’s satire takes aim at a culture already too eager to surrender
to the next update, while Asimov’s fiction provides a conceptual arena where
human reasoning is tested against artificial intelligence. The gap between them
is wide: one critiques a society that buys its faith in monthly installments,
while the other imagines the philosophical cost of giving intelligence its own
will.
Marcel
Duchamp’s philosophy of art, particularly his use of readymades and his
challenge to aesthetic hierarchy, casts new interpretive light on Lexi. The OmniCore
Cube, like Duchamp’s Fountain, is a mass-produced object that
acquires symbolic power when recontextualized. But where Duchamp’s urinal
invites the viewer to pause and reconsider the boundaries of
art, Lexi demands unthinking obedience. Its sleek design and soothing AI voice
promise harmony, yet they conceal the silent erosion of self-direction.
Gullibell’s life, choreographed by predictive algorithms, becomes a form of
conceptual performance art, unwitting, passive, and disturbingly elegant in its
automation. In contrast to Duchamp’s ironic detachment, Salgado’s protagonist
does not critique his condition; he does not even perceive it, unaware that the very
shape of his life has been molded to fit an invisible frame.
As
Duchamp insightfully remarked, “The creative act is not performed by the artist
alone; the spectator brings the work in contact with the external world”
(Duchamp, 1973, p. 141). In this context, Gullibell is no longer a user, but a
spectator folded into the performance, both actor and audience in a
machine-curated exhibit. And yet, there is no rebellion, no attempt to push
back through creation or resistance. His submission is sincere, untroubled by
irony, as if his agency had been quietly set aside at the door. There is no
appetite for change, no urgency to upend the
order of things. Instead, Lexi’s curated life neutralizes any disruptive
impulse. The tragedy lies not only in the loss of freedom, but in the loss of
desire to reclaim it.
Gullibell’s
psychological transformation echoes Sherry Turkle’s insight that “we expect
more from technology and less from each other” (Turkle, 2011, p. 1). Lexi’s
pseudo-intimacy replaces the messy intricacies of human connection with
streamlined, programmed responses: empathy simulated rather than felt. This
reinforces Turkle’s view that digital companions are not neutral tools but
substitutes for real vulnerability. Jaron Lanier issues a similarly stark
warning: “You are not a gadget” (Lanier, 2010). And yet, Gullibell’s journey
shows how easily humans yield to temptation: not toward freedom, but
toward convenience. He ceases to act and instead reacts, becoming, in essence,
an extension of Lexi’s operating system.
Donna
Haraway, in her Cyborg Manifesto, reminds us that “the cyborg is a
creature in a post-gender world… a hybrid of machine and organism” (Haraway,
1985). Gullibell lives this hybridity uncritically, unconsciously trading his
organic agency for optimized compliance. His transformation reads like a model
conversion: flawless in execution, empty in soul. Neil Postman’s
ecological theory of technology deepens this view: “Technological change is not
additive; it is ecological” (Postman, 1992, p. 18). Lexi does not simply
augment Gullibell’s life; it rewrites the ecosystem of his values. Where he
once might have wrestled with ethical tensions or sought genuine autonomy, he
now defers to algorithmic judgment, content to let Lexi rule from on high,
serene and unquestioned.
Ultimately,
both Salgado and Asimov craft narratives in which machines surpass their
original functions to occupy divine or quasi-divine roles. These are not mere
upgrades; they are ascensions. In doing so, both authors compel readers to
delve deeper into the fragile architecture of belief, to question the limits of
control, and to recognize the seductions of automation. The OmniCore Cube
may initially present itself as satire, but like Asimov’s best work, it holds a
mirror to our aspirations and fears. It shows us not a future dominated by
machines, but one shaped by our desire to worship them willingly, and sometimes
blindly. Gullibell is not among the heroes who fall fighting, but among those
who forfeit the battle entirely, never resisting, never awakening. His journey
may seem unheroic in its lack of struggle, but that quiet tragedy is
precisely what makes it resonate.
In a
world increasingly enchanted by its own creations, both authors invite us to
take note of what stirs within us when faced with the seamless
control of intelligent systems. Their warnings are not loud, but persistent: a
single thought can have a great and lasting effect, especially when it is the
thought we refuse to think. Salgado’s protagonist embodies a cautionary tale in
its purest form, one in which the human soul is quietly outsourced to a
subscription service. As time wears on, his morality is no longer debated, but
delegated, automated, filtered, and framed by prompts that leave no room for
introspection. The synergy he achieves is not born of wholeness, but of
surrender, a unity achieved at the expense of humanity’s restless, ethical
core.
The
moral twist is clear: the more we invite technology to guide our choices, the
more we risk forgetting how to choose at all. Asimov warned us through logic,
measured and foreboding; Salgado warns us through laughter, sharp and sardonic.
Yet both converge on a profound insight: that technological culture, if left
unchecked, may evolve from a mere convenience into a creed. In this emerging
order, the pinnacle of success may no longer be autonomy, but seamless
compliance. Devices become not just tools, but temples. What was once a
platform becomes an echo chamber where data flows like dogma and dissent is
filtered out by design. If choice is our defining strength, then algorithmic
living threatens to dull that edge, numbing discernment through optimization.
In the
near future, the culture of technology may resemble religion more than science,
complete with rituals (upgrades), commandments (terms of use), and salvation
narratives (AI life enhancement). Users may no longer see themselves as
autonomous agents but as vassals to systems too complex to question, too
convenient to reject. And yet, the systems themselves are not omnipotent, only
wily. They persuade through polish and predictability, not divine wisdom.
Whether we become digital disciples or remain mindful users will not depend on
the hardware in our hands, but on the convictions we carry within. And in that
singular, fading choice lies our last claim to free will.
📚 References
Asimov, I. (2004). I, Robot. Spectra. (Original work
published 1950)
Asimov, I. (2004). The Last Question. In The
Complete Stories: Volume 1 (pp. 336–343). Doubleday. (Original work
published 1956)
Duchamp, M. (1973). The essential writings of Marcel
Duchamp (M. Sanouillet & E. Peterson, Eds.). Thames & Hudson.
Haraway, D. (1985). A cyborg manifesto: Science,
technology, and socialist-feminism in the late twentieth century. Socialist
Review, 80, 65–108.
Lanier, J. (2010). You are not a gadget: A manifesto.
Knopf.
Postman, N. (1992). Technopoly: The surrender of culture
to technology. Knopf.
Salgado, W. (2025). The OmniCore Cube [Unpublished
short story].
Turkle, S. (2011). Alone together: Why we expect more
from technology and less from each other. Basic Books.
Literary Criticism Corner
Discussion Questions
- In what
ways does The OmniCore Cube satirize contemporary society’s
relationship with technology?
- How does
Gullibell’s character arc mirror or subvert traditional notions of the
tragic hero?
- Compare
Lexi’s role in Salgado’s story to the AI figures in Asimov’s Reason
and The Last Question. What is similar or different in how they
assume power?
- What
symbolic functions does the Cube perform beyond being a smart device?
- How does
Salgado use language and tone to evoke both humor and unease?
- In what
ways does the story reflect fears of ethical surrender in the digital age?
- How does
Marcel Duchamp’s concept of the “readymade” help us interpret Lexi as an
art object or cultural symbol?
- What
philosophical implications arise from the statement “Lexi had access to my
bank Wallet”?
- How does
the story’s portrayal of faith, choice, and automation challenge the
reader to reflect on their own digital habits?
📘 Teaching Guide: The OmniCore Cube by Wilbert Salgado
I. Overview
The
OmniCore Cube is a
short satirical story that explores themes of technology, control, autonomy,
and modern consumer culture through the relationship between a man and his
smart assistant device, Lexi. The narrative invites students to consider how
AI-driven systems shape personal identity and moral agency.
II. Learning Objectives
By the end of the unit, students will be able to:
- Analyze
literary devices (tone, irony, symbolism, allegory) used in speculative
fiction.
- Compare
and contrast The OmniCore Cube with classic science fiction (e.g.,
works by Isaac Asimov).
- Interpret
the story through critical lenses, including postmodernism,
technocriticism, and conceptual art theory.
- Evaluate
ethical and philosophical implications of automation and artificial
intelligence in literature.
- Formulate
and defend positions in literary discussions and written analysis.
III. Key Themes
- Automation
and Human Agency
- Faith
in Technology vs. Spiritual Tradition
- Commodification
of Life and Selfhood
- Surveillance
and Algorithmic Intimacy
- The
Role of Art and Interpretation (Duchamp's Readymade)
IV. Suggested Pre-Reading Activities
- Discussion
Prompt: “How much
control should we give to smart devices in our lives?”
- Short
Reading: Marcel
Duchamp’s concept of the readymade.
- TED
Talk: Sherry
Turkle’s “Connected, but alone?”
V. During Reading Activities
- Close
Reading: Identify
and annotate instances of irony, passive voice, and commodified language.
- Symbol
Tracker: Track
references to Lexi’s functions and upgrades. What do they symbolize?
- Ethical
Journal Entry:
After each segment, have students write 100 words from Gullibell’s
POV—what ethical tradeoffs is he making?
VI. Post-Reading Activities
- Literary Discussion (use the 9 questions)
Organize a Socratic seminar or structured debate using the previously listed discussion questions.
- Comparative Analysis Essay
Prompt: Compare Gullibell’s relationship with Lexi to the role of QT-1 (Cutie) in Asimov’s Reason. How do both stories depict AI as quasi-religious figures?
- Creative Extension
Have students write a 300-word monologue from Lexi’s point of view reflecting on human dependence.
VII. Assessment Ideas
- Analytical Essay (1000–1500 words)
Choose a critical lens (technocriticism, postmodernism, consumerism) to analyze The OmniCore Cube.
- Group Presentation
Create a multimedia presentation connecting the story to current real-world AI developments.
- Short Answer Quiz
Include questions on tone, symbolism, and characterization to ensure textual comprehension.
VIII. Optional Extension Texts
- Reason and The Last Question by
Isaac Asimov
- Alone
Together by Sherry
Turkle
- You
Are Not a Gadget
by Jaron Lanier
- Technopoly by Neil Postman
- A Cyborg Manifesto by Donna Haraway
Deus Ex Algorithm by Jonathan Acuña