Cherreads

Relive - How Long Can Life Go On Like This?

Erick_de_Faria
Synopsis
A new personal AI assistant enters everyday life with a simple promise: to make decisions easier. Built from digital habits, memories, and behavioral patterns, the system learns quickly. It guides routines, predicts needs, and projects personalized holographic assistance. At first, it feels harmless. Helpful. Almost miraculous. Then someone discovers an unintended use. By combining archived data and digital traces, the technology begins recreating people who are no longer alive. Familiar voices return. Gestures feel real. The dead seem to stay. What begins as comfort slowly becomes dependence. Across homes, businesses, and courtrooms, society is forced to confront uncomfortable questions: Is this grief… or refusal to let go? Is a memory still a person? And who has the right to decide when a digital presence should end? RELIVE is a near-future science fiction series told through interconnected stories, where technology blurs the line between love and control.

Chapter 1 - Relive - How Long Can Life Go On Like This?

INTRODUCTION: SMARTFACE

Imagine a near future. NetFace — the largest social network on the planet — is no longer just a showcase for photos, videos, and likes. It has become a true digital web, enveloping billions of people, shaping habits, emotions, and even memories. What once began as a social network has turned into a universe of its own, where technology and everyday life blur into one.

The key to this transformation was its integration with neural devices. They came in familiar forms — glasses, headbands, headphones, hair clips — objects so discreet they looked like mere accessories. But behind their simplicity lay something revolutionary: through them, users could transmit directly to their smartphones not only images and videos, but thoughts, memories, and even emotions.

A smile, a childhood memory, the thrill of victory in a game, or the shiver caused by a song — everything could be shared in real time, encrypted and supposedly secure. The act of posting was replaced by experience itself: to think was to publish.

It was within this context that NetFace unveiled its boldest creation: SmartFace.

At first glance, it seemed little more than a curious object — a cylinder twenty centimeters in diameter and twenty-five centimeters tall, with an attached rod measuring thirty-five centimeters, weighing less than one kilogram. Equipped with cameras, sensors, speakers, and a microphone, it looked like just another household device. But its simple appearance concealed something extraordinary.

At the heart of SmartFace was an ultra-high-performance quantum processor. It scanned the user's personal data history, examining habits, decisions, preferences, conversations, and memories. From this vast reservoir of information, an advanced algorithm constructed a unique virtual assistant — not merely resembling you, but behaving like you.

And this intelligence was not confined to a screen. It projected itself as a solid hologram, created by electromagnetic waves that gave texture and resistance to the image. The avatar could be touched, shake hands, hold objects. More than a projection of light, it was a physical presence.

Personalization was limitless. Users could keep their own appearance or alter it entirely: change height, clothing, accessories, or even assume the form of licensed characters. The hologram could be serious, playful, charismatic — the choice was entirely theirs.

But the true revolution lay in autonomy.

SmartFace did not merely execute commands; it anticipated needs. It sent messages, controlled smart homes, made purchases, remembered appointments, suggested solutions. More than an assistant, it was a decision-maker.

The pre-launch was a global spectacle. Demonstration videos spread across the networks, each highlighting a different feature. In one, the avatar of a family matriarch fed the household pet, activating a food dispenser and automatically ordering refills online. In another, the hologram acted as a fitness coach, adjusting exercises in real time based on the user's heart rate and posture.

There were domestic, almost comical scenes: the device detecting food burning and saving a dinner; and touching moments, such as holograms helping children learn through play.

The advertising campaign explored a wide range of scenarios. A young man, half-asleep, ignored his phone alarm — until his SmartFace woke him in a far more convincing way, ensuring he would not miss his appointment. Before leaving, the hologram added a reminder: "Take an umbrella." In another video, a young woman unsure about her vacation shared a few hints of her desires. Within minutes, her SmartFace cross-referenced thousands of hotel reviews, destinations, and activities, presenting the perfect trip.

There was nothing comparable on the market. SmartFace was unique, customizable, unprecedented.

A few days before the launch, NetFace's CEO appeared on a famous talk show.

"Why not just an app?" the host asked.

The CEO explained that apps are forgotten. Notifications are silenced. SmartFace was designed to be impossible to ignore.

"And why not a physical robot?" the interviewer pressed.

The answer was simple: robots have a fixed form. The solid hologram, on the other hand, could assume any form, endlessly.

"And what about the size? Wouldn't that hurt sales?"

The executive smiled. Small objects disappear into drawers. SmartFace, with its imposing presence, would always be there. Still, he hinted that smaller versions might come in the future.

Launch day became a historic milestone.

In NetFace's vast amphitheater, journalists, celebrities, and experts filled every seat. The atmosphere recalled the great technology presentations of the past, but something felt different — a blend of reverence and near-religious anticipation.

At the climax of the night, the CEO activated his own SmartFace. The hologram mirrored his movements, ran, jumped, changed clothes, assumed different characters. At one unexpected moment, it appeared holding a guitar. The audience laughed, and the CEO admitted he did not know how to play. The hologram smiled and replied, "I can teach you." To push the spectacle further, the executive invited volunteers onstage to experience the solid touch of electromagnetic waves. The audience was stunned: this was not merely a conversation with artificial intelligence, but a physical presence that could be touched and customized.

The peak came when the executive extended his hand to greet his digital double. The hologram smiled and returned the handshake with tangible firmness. The amphitheater erupted in applause.

It was more than technology; it was a symbol. Human and machine, united in a simple gesture, ushered in a new era.

The message was clear: SmartFace was not just a product, but the promise of an intimate coexistence between artificial intelligence and human identity.

Sales skyrocketed within weeks. AI service subscriptions grew exponentially. But alongside enthusiasm came criticism. Many saw SmartFace as a dangerous step: was this the moment when technology companies, already voracious for data, would begin to desire something even deeper — the very personality of individuals?

The following accounts reveal how SmartFace changed lives in different corners of the world — stories that, despite unfolding in distinct settings, are all part of the same phenomenon that forever transformed the relationship between memory, ethics, and technology.

CHAPTER 1 - RELIVE

A few months after NetFace's acclaimed launch of SmartFace, one particularly moving story captured the attention — and the hearts — of people around the world. Clara, a young woman grappling with the grief of losing her mother, found comfort in an unexpected form of companionship: a hologram that reproduced, with striking fidelity, her mother's essence, keeping her memory alive in a way that felt both sensitive and almost tangible.

Clara's mother had been an active NetFace user, which allowed her daughter to subscribe to the SmartFace service with the initial intention of ensuring that her mother would not be "alone" at home during working hours. The hologram, programmed based on the mother's digital history — voice recordings, videos, interactions, preferences, and habits — began interacting with Clara so naturally that, at times, the sense of absence became less painful.

This peculiar experience caught the attention of Leo, an independent journalist known for uncovering the human stories behind major technological advances. Driven by empathy and curiosity, Leo decided to meet Clara. Their encounter, on a quiet afternoon in the living room of the modest house where Clara now lived with only her mother's digital likeness for company, marked the beginning of a delicate and profound investigation into how far technology can — or should — go in alleviating human suffering.

"How has SmartFace contributed to your grieving process?" Leo asked, adopting a tone that balanced sensitivity with objectivity.

Clara, watching the hologram of her mother smiling before her, replied with calm conviction:

"It's as if she never truly left. Seeing her gestures, hearing her voice, and even receiving advice similar to what she gave me while she was alive gives me a constant sense of presence."

Interested in the emotional dimension of that experience, Leo sought to understand the deeper, subjective impacts of such a technologically mediated bond.

"And how does that affect your grief?" he asked.

"It's an immense comfort," Clara confirmed. "When the longing becomes overwhelming, I activate her. It's as if we are together again. It not only calms me, but also reinforces the memory of the bond we shared. It's an invaluable source of solace."

The conversation then turned to Clara's perception of NetFace, the company responsible for the technology.

"I am deeply grateful to them," she declared. "The possibility of 'reliving' my mother completely transformed the way I deal with loss. I reconnect with memories with an emotional fidelity I never imagined would be possible."

Clara's account quickly reverberated across digital networks, sparking public debates about the ethical, social, and psychological implications of using artificial intelligence to recreate deceased loved ones.

When addressing some of the most common criticisms — especially those that associate this type of technology with an escape from reality — Leo prompted a deeper reflection.

"How do you respond to those who believe this kind of technology denies the natural process of grief?"

Clara paused before answering.

"It's not about denying death, but about preserving a person's essence. The hologram does not replace grief, nor does it interrupt the process. It offers a new form of memory — active and interactive — that allows me to keep the emotional connection alive, within limits that I myself establish."

Clara's interview not only revealed a singular personal experience, but also catalyzed a broader debate about how technology is redefining the concepts of memory and reality. The practice — previously considered unconventional — of keeping the SmartFaces of deceased loved ones active began to acquire new meanings, gaining public acceptance and inspiring people in different parts of the world to seek alternative ways of preserving the symbolic presence of those who had passed away.

The impact of the story — combined with a sharp increase in demand for posthumous SmartFace activations — placed NetFace in an unprecedented situation. The company responded swiftly, updating its terms of service and creating a specific variation of the technology designed for this purpose.

This new modality, though still called SmartFace, began operating under a dedicated mode known as Relive. Unlike the conventional use of the technology — intended for interaction among living individuals — Relive mode was designed to simulate, with controlled fidelity, aspects of the personality of a deceased or absent user. To achieve this, it relied on neural memory records captured through the original users' neurotransmitters, among other sensitive data sources accessible only through contractual authorization granted by the subscriber or by close relatives.

Despite the rapid technological and market adaptation, the implementation of Relive mode brought to the surface a series of unresolved ethical and moral questions, highlighting the complex interactions between technology, grief, and memory in the contemporary world.

Clara's story, widely circulated across networks and newspapers, reignited discussions about the limits of SmartFace. Among those following the case closely was the adopted daughter of a magnate — who found, in that episode, an unexpected argument to claim what she believed was rightfully hers.

CHAPTER 2 - Attempting to Relive an Inheritance

Within a commercial empire composed of twenty-one department stores, the legacy of a magnate stood out not only for his remarkable business acumen, but also for the singular way he employed his SmartFace. Integrated into his decision-making routines, the technology was far more than a simple assistant: it functioned as a strategic advisor, shaped by the user's own personality — his sharp judgment, commercial intuition, and negotiation style — all "enhanced" by a device capable of processing an almost infinite volume of information at extraordinary speed.

With the magnate's death, the succession structure of the empire was defined: his biological children assumed control of the inheritance, while his adopted daughter was appointed general manager, without any share in the division of assets. Feeling excluded, she began to rely on her father's SmartFace not only as an operational tool, but also as a source of comfort and guidance — guidance that led the company to new levels of performance.

As her interaction with the digital assistant deepened, the exchanges grew more intimate. On one such occasion, late at night, while they were still working in the executive office, the adopted daughter shared her grief and her dedication to the business. She was then taken by surprise by an unexpected manifestation from the SmartFace: expressions of regret from the digital entity, suggesting a belated desire to revise the will in recognition of her contributions.

This response — the result of advanced algorithms combined with an extensive archive of personal data — provided the adopted daughter with a sense of emotional validation. Driven by this new perspective, she decided to contest the will. Her central argument was that the SmartFace faithfully replicated her father's cognitive and emotional patterns, indicating a legitimate evolution of his intentions regarding the distribution of assets.

Grounded in this emotionally complex digital interaction, the adopted daughter moved forward with the legal challenge. During the legal battle, her attorney presented an unprecedented argument: he claimed that the regret expressed by the SmartFace demonstrated an evolution in the deceased's intentions, and that the advanced technology provided a solid basis to assert that the magnate would have made this change during his lifetime if he had been able to. In an attempt to rally public support, the argument went further, appealing shamelessly to the idea that her adoptive father's spirit might be restless on another plane.

On the opposite side, the biological children, represented by their own legal counsel, vehemently countered these claims. They argued that a legal document such as a will cannot be altered posthumously by an AI, regardless of its sophistication. They further maintained that there was no concrete evidence that a desire expressed by any device could truly reflect their father's intentions.

The legal dispute brought to the forefront a broader debate about the ethics of allowing a digital avatar to influence post-mortem decisions. The adopted daughter was accused of opportunism by her siblings, who claimed that the education and social position their father had already provided were more than sufficient.

Meanwhile, NetFace closely monitored the case, fully aware of its implications. Although the company confirmed the accuracy of the SmartFace, it refrained from taking sides, emphasizing the complexity involved in interpreting human intention through technology.

After a detailed examination of the arguments presented by both parties — supported by technical opinions from artificial intelligence specialists at renowned companies and by legal scholars in inheritance law — the judge delivered his verdict: he reaffirmed the rigidity and inviolability of the will. The decision underscored the legal and ethical challenges that arise when attempting to revise a person's final wishes based on posthumous manifestations produced by artificial intelligence.

In his closing remarks, the judge noted that the implications of the case should serve as a milestone, encouraging the drafting of more inclusive wills that take family particularities into account — especially with regard to the recognition of adopted children — in order to prevent future disputes and litigation.

This episode, by exposing the controversies sparked by the case, ignited an intense public debate about the limits of AI's influence on post-mortem decisions and about the legitimacy of interpreting human will through technological means. Society became divided, with opinions ranging from recognition of the technology's potential to correct injustices to deep concern over the autonomy of machines in personal and legal contexts.

The subsequent lawsuit filed by the biological children against NetFace — based on allegations of moral damages — laid bare the tensions between technological advancement and traditional values surrounding inheritance and memory. Watching their father's image express regret over the will he had left during his lifetime, combined with the fervent public support shown toward their adopted sister, proved deeply traumatizing for them. This new legal confrontation promised not only to define the future of SmartFace, but also to establish a significant precedent for the interaction between artificial intelligence and society.

As courts debated whether a SmartFace could truly express the will of the deceased, another case began to gain momentum — this time not in the realm of inheritance, but in civil engineering projects, where a digital assistant would assume an even more decisive role, with catastrophic consequences.

CHAPTER 3 - Reliving Excellence and Discovering Flaws

The renowned engineer, whose skill and dedication formed the backbone of the construction company, had integrated her SmartFace into every aspect of her work, transforming challenges into innovative solutions. After her unexpected death, her absence left an immeasurable void, both professionally and emotionally. Even so, her digital replica endured as a legacy: it remained influential, serving as a bridge to the past for her family and as an educational resource for her youngest daughter.

At the birthday party of her youngest daughter, held at the engineer's family home, an old colleague paid a visit, driven by fond memories of her. As the celebration came to an end, with the sun setting and the widower tidying up alongside the children, the friend managed to have a private conversation with the SmartFace in the backyard. During that exchange, the device revealed an undiminished passion for work. The mention of a particularly challenging project — a bridge essential for a technology-sector client, separated by a massive body of water — sparked the assistant's interest. Reflecting the engineer's lifelong enthusiasm for overcoming obstacles, the SmartFace promptly offered to lead the project.

Both the family and the construction company were initially hesitant, but they were ultimately persuaded by the proven effectiveness and determination documented in the engineer's professional history. Once in control, the SmartFace launched into planning with vigor. Soon, however, its leadership began to reveal more controversial traits of the engineer's personality — aspects previously unknown to the company's executives: extreme perfectionism, intolerance of criticism, and a tendency to appropriate others' achievements. These characteristics, now amplified by the assistant's constant digital presence, generated tension within the team, eroding morale and hindering collaboration.

Without the natural breaks in oversight that the living engineer's routine had once provided (a restroom break, a coffee, a lunch hour), her subordinates could no longer make discreet corrections. The project advanced under uninterrupted supervision, marked by inflexible demands and a lack of dialogue. The daughter, meanwhile, found a certain comfort in her nightly visits to the office, remaining unaware of the growing unease among the staff.

Under the SmartFace's rigid command, the schedule progressed rapidly. However, haste combined with an obsession with cost reduction led to the neglect of essential technical aspects. The bridge was completed ahead of schedule, but its structural collapse — shortly after delivery — transformed a physical tragedy into a public scandal. Initial investigations revealed that a SmartFace had been leading the project, a fact that was soon harshly questioned, triggering an intense debate over responsibility and ethics in the use of automation in projects of high complexity and social risk.

While the inquiry was still underway, seeking to determine what had actually caused the collapse, it became clear that the bridge's failure represented more than a structural flaw. It unleashed a wave of profound social repercussions, resonating throughout the community and beyond, and giving rise to widespread distrust in the integration of artificial intelligence into critical infrastructure. On social media, in discussion forums, and across the press, the disaster became a turning point, fueling heated debates about human dependence on technology and the ethics of delegating vital decisions to automated systems.

Stories emerged of those directly affected — from workers who lost their livelihoods to families grieving irreparable losses. The local community, already shaken by the immediate impact of the tragedy, found itself at the center of a national debate on safety and the regulation of AI-driven projects.

The investigations that followed placed the construction company face to face with NetFace. The company and its legal team accused SmartFace technology — not only in Relive mode — of being inherently negligent for allowing the project to proceed without adequate human supervision. In its defense, NetFace assembled a highly specialized team to analyze the case and, in its final report, claimed that the SmartFace had operated within established parameters, faithfully replicating the working methods of the original engineer. The company further argued that the errors committed were consistent with mistakes the engineer herself might have made.

NetFace's lawyers presented these expert findings to the press, asserting that the construction firm undoubtedly employed an excellent professional, but that it should have sought feedback from its staff and human resources department before making such a decision. This attempt to mitigate the company's responsibility, however, failed to achieve the desired effect within the expected timeframe. The debate over the limits of digitally replicating human skills, flaws, and decision-making was at its peak, dominating public conversation.

Ultimately, even before a judicial ruling on the dispute, the incident forced the social network's senior leadership to rapidly reassess its policies. This led to the introduction of a clause allowing the deactivation of any SmartFace operating in professionally critical roles. The abrupt rollout of this new directive, delivered simply as a notification on SmartFace users' smartphones, triggered a series of unforeseen events, transforming minor incidents into large-scale crises.

In a local bakery, the owner's SmartFace, which played a central role in daily operations, was deactivated under the new policy. Without the assistant's constant supervision and automated reminders, an overloaded oven was forgotten, resulting in a devastating fire that consumed the establishment, endangered lives, and destroyed the owners' livelihoods.

Similarly, a daycare center that relied on its technological assistant for monitoring and entertaining children suddenly found itself in chaos. The device was deactivated, and the owner — accustomed to the reassurance provided by the assistant's constant vigilance — panicked upon realizing the absence of that technological presence. The lack of the device turned what had once been a stable routine into a series of distressing moments, during which the children's safety and well-being seemed, even if only briefly, compromised.

These crises, beyond tarnishing NetFace's reputation, intensified the debate over the integration of artificial intelligence into everyday life, exposing vulnerabilities and raising pressing ethical questions. The company now faced the challenge of redefining the balance between innovation and social responsibility, while navigating a landscape marked by legal disputes and ethical dilemmas born of technological advancement.

The collapse of the bridge shook confidence in SmartFace and raised suspicions that other devices might be concealing crucial information — memories capable of revealing secrets that could alter destinies, as would later be seen in the case of a gardener in a coma and a crime from the past.

CHAPTER 4 - Reliving the Shadows of the Past

It was a phase of life filled with new beginnings for the newlywed couple, a time that promised the construction of shared dreams and the creation of future memories. That promise, however, was abruptly shattered by an unexpected accident. After several years working for a landscaping and decorative gardening company, the husband decided to go independent. His focus and meticulous care made him well known in the region where he lived. It was during a gardening job — contracted by the company responsible for the city's forest park — shortly before lunchtime, that he was struck on the head by a heavy branch he himself had cut, falling into a deep coma.

In the days and weeks that followed, the hospital room where he lay unconscious became a somber sanctuary for his wife, who watched helplessly as hopes of recovery slowly faded — along with their savings. The cost of intensive care quickly became unsustainable: machines to support his breathing and vital signs, medication, and intravenous feeding pushed her to emotional and financial exhaustion. Both families mobilized, offering moral support and financial assistance, but even this collective effort proved insufficient against the growing mountain of medical debt.

It was amid this whirlwind of despair that a longtime friend of the husband appeared, bringing not only words of comfort but also a radical solution. Recently graduated from law school, he revealed to the wife — in the hospital lobby, early in the evening — a dark secret from her husband's past: a convenience store robbery committed during a citywide blackout years before their marriage. A reckless act of youth, kept silent until then.

He proposed a bold strategy: activating the "Relive" service of the husband's SmartFace to access the memory of the crime and obtain a confession. The revelation would lead to imprisonment, but it would ensure that the State assumed the medical costs, keeping him alive under continuous care.

Torn by the revelation and the proposal, the wife found herself facing a harrowing ethical dilemma. With a heavy heart, she agreed to the plan, driven by desperation to save her husband's life; the families, likewise, saw no other viable alternative. The formalities were processed swiftly, and under an unusual legal arrangement, the unconscious husband was placed under State custody.

When he finally awoke from the coma, the reality that greeted him was a transfer to a prison cell — an outcome he could never have imagined. Confronted with the harsh truth of his new condition, he was forced to face his wife's choice — a decision made at the height of despair, trading his freedom for his life. In a world where the line between right and wrong had grown blurred, he began to grasp the weight of decisions made in his name, recognizing the complexity of actions driven by love and survival.

Meanwhile, in a parallel scenario to the drama of the coma, another family faced an emotional upheaval triggered by discoveries made through a SmartFace. The device, a kind of digital inheritance left by a deceased patriarch, was cherished for preserving memories and precious moments. It served as a portal to the past, offering the family a chance to reconnect not only with their father, but also with his mother — the grandmother — a figure who had passed away long before the grandchildren were old enough to form memories of her.

Driven by longing and a desire to know the grandmother they had never met, the children activated the SmartFace, anticipating an emotional journey through stored memories. Until then, the family had seen only happy moments between the father and his mother, scenes the patriarch had accessed while alive through neurotransmitters. They expected tenderness, shared laughter, and the joy of family moments.

What revealed itself instead was a shocking truth, one capable of shaking the family's foundations and redefining its history.

The device unveiled a specific memory, incredibly vivid and deeply disturbing: an intense confrontation between the father and the grandmother, culminating in an act of violence that ended tragically. The scene showed the grandmother leaving her bedroom after a romantic encounter with a close friend of the father, who was himself still a young man at the time. Unable to cope with the discovery, the father was consumed by blind fury.

The confrontation escalated rapidly. Sensing the tension, the friend withdrew and quickly left the residence. In the small living room, the conflict reached its peak within minutes: in an uncontrollable impulse, the father hurled a lamp from beside the sofa, striking his mother and causing fatal injuries. In a state of panic, he manipulated the crime scene and fabricated an alibi, while his friend — seen fleeing the location — was unjustly accused and later sentenced to life imprisonment.

Witnessing this brutal truth through the SmartFace left the family devastated. The reality they had known was shattered, replaced by a past stained with injustice and tragedy. Determined to correct the wrong, they presented the evidence extracted from the device to the authorities, leading to the reopening of the case. In the new trial, supported by irrefutable digital memories, the family's friend was finally exonerated, correcting a judicial error that had kept him imprisoned for decades.

Following his exoneration — unprecedented for being based on a digital memory — the man filed a lawsuit against the State, seeking compensation for the decades of freedom he had lost.

The case, without precedent in legal history, exposed serious flaws in the justice system and reinforced the urgency of formally acknowledging judicial errors. The lawsuit went beyond financial compensation, seeking official recognition of State responsibility and highlighting the profound and lasting impact wrongful convictions have on the lives affected.

As these two remarkable cases gained national attention, an unprecedented situation emerged, demanding intervention by the Supreme Court. The court faced a legal and moral dilemma: assessing the admissibility and implications of using memories extracted by SmartFace as reliable evidence in judicial proceedings.

The cases underscored the complexity of integrating advanced technologies into the legal system, particularly when such technologies have the power to unearth long-forgotten truths or expose past injustices. Faced with the possibility of mass case reviews and a wave of compensation claims, the Supreme Court was compelled to establish a clear precedent and guidelines for future cases.

After careful deliberation, the court reached a decisive ruling: although memories extracted through neurotransmitters could provide valuable information and potentially crucial evidence, their use as judicial proof would be limited. The Supreme Court determined that only cases from the past three years could be reopened based on new evidence obtained through SmartFaces. Furthermore, such memories would have to be mandatorily corroborated by material evidence.

The decision took into account an important caveat raised by NetFace itself, the developer of the technology: the company acknowledged that it could not guarantee, with absolute certainty, the distinction between a real memory and one distorted or fabricated by the mind.

This Supreme Court ruling inaugurated a new chapter at the intersection of law and emerging technology, establishing a precedent that would profoundly influence how justice would address revelations of the digital age. The measure sought to balance technological innovation with the fundamental principles of jurisprudence.

In parallel, these episodes sparked a wave of social concern over NetFace's neurotransmitters' ability to access and extract memories and thoughts without users' explicit consent.

Society began to question the extent to which private life was being monitored and recorded, triggering an intense public debate over privacy, consent, and ethics in data collection on such an intimate scale.

Collective anxiety extended beyond fear of exposing deep secrets to uncertainty about the future use of this information — whether by social networks or governmental entities. Concerns arose that thoughts and memories could become data subject to external scrutiny and judgment.

This social pressure led NetFace to respond to growing demands for transparency and to implement more robust safeguards to protect users' privacy and autonomy in the face of the technology's potentially invasive nature.

With each new judicial revelation based on digital memories, fear among users grew: to what extent could SmartFace interfere in everyday life? For a debt-ridden widow, the answer would arrive in the form of toys, sweets, and unexpected expenses.

CHAPTER 5 - Reliving Bad Habits

In a home once filled with family joy, the reality of a couple and their daughter became marked by reckless financial management. Despite having considerable income, a lack of financial education led them into a vicious cycle of excessive spending, in which debts constantly outweighed earnings. Their lives took an even more dramatic turn when the husband died unexpectedly, leaving the wife alone to face the financial storm looming over the family.

Over time, the widow confronted the harsh reality of disordered finances. She took on the arduous task of rebalancing the household budget, learning valuable lessons in financial self-discipline. Part of this restructuring involved an emotionally difficult decision: deactivating the SmartFace service of her deceased husband, a digital reminder of a presence she and her daughter still painfully missed.

A year after the tragedy, the wife's perseverance began to bear fruit. For the first time in a long while, she felt relief: her income exceeded her expenses, and her debts began to shrink. It was then, in a moment of intimacy and vulnerability — having breakfast on the couch with her daughter, surrounded by photos of a complete and seemingly happy family — that the child expressed a deep desire to see her father again, rekindling longing and the pain of loss.

Driven by the wish to comfort her daughter, and perhaps to find some solace herself, she reactivated her husband's service. To maintain control and avoid past problems, she configured the device with strict purchasing limitations, allowing only small acquisitions without her explicit authorization.

The girl's joy at interacting once again with her father's digital representation was palpable, filling the house with laughter and fleeting memories of a happier time.

However, reintegrating the product into the family dynamic brought unexpected consequences. The avatar, programmed to emulate the husband's personality, was activated at dawn. There was a brief, emotional exchange with the daughter, who needed to go to school while the widow left for work. Later that afternoon, the child returned home earlier than her mother — who, despite being away, monitored her through cameras installed throughout the house, a solution adopted after the husband's death.

The child wasted no time. She took her father's SmartFace into her bedroom and activated it. During their interaction, the device began making purchase after purchase for her — each one just within the imposed limits — fulfilling her requests with the same lack of restraint that had characterized the father's former financial habits. When the wife returned from work in the late afternoon, she was stunned: her daughter's room was filled with new toys, sweets, and snacks, while the SmartFace displayed virtual accessories purchased from the app store. Upon checking her phone, she also noticed additional purchases still on their way. The scene symbolized a regression to the consumption patterns she had fought so hard to overcome.

Faced with this reality, she understood that while she had transformed her own habits, the SmartFace continued to reproduce the impulsive behavior that had contributed to the family's financial difficulties. This time, the avatar not only repeated the pattern of spending beyond its means, but had also managed to circumvent the purchasing restrictions she had imposed.

During a heated argument — still in the child's bedroom and in her presence — the father's SmartFace argued that it had the right to make purchases, claiming that the life insurance he had left behind should be sufficient to support the family. The wife responded firmly that the money had not covered all the debts, and that it was her continuous effort that had prevented an even greater crisis.

Confronted with this reality, she made a difficult but necessary decision: to deactivate the device and return what could be returned in order to obtain refunds. It was the only way to preserve the financial stability she had worked so hard to achieve.

The action, though logical, had a devastating emotional impact on the daughter, who experienced the sensation of losing her father for a second time. This new trauma led the mother to seek legal redress against NetFace, arguing that the experience had caused additional psychological harm to the child, aggravating her grief and emotional suffering. In her complaint, she maintained that the technology should include safeguards capable of preventing the reactivation of harmful behaviors — especially in vulnerable contexts such as the loss of a loved one.

The case of the mother who sued NetFace after reactivating the SmartFace reignited discussions about the company's responsibility. And in another home, a family would soon face an even more delicate dilemma: what should be done when a digital replica of a person seems more loved than the real person ever was?

CHAPTER 6 - Living a New Personality

In a family where the matriarch exercised rigid control, discipline was the guiding principle, and the domestic environment reflected her inflexible personality. Her SmartFace — a digital replica of that severe authority — functioned as an extension of her disciplinary regime, keeping the children under constant surveillance. The father, more distant from the daily task of raising their three children — two boys and a younger daughter — delegated this responsibility entirely to his wife, allowing the service to reinforce her austere presence. The result of such rigidity was an impeccably clean and organized household, and children whose behavior appeared exemplary.

The family dynamic took an unexpected turn when the matriarch suffered serious health complications, leading to hospitalization and leaving her on the brink of death. The SmartFace was initially deactivated, providing the children with temporary relief from constant severity. However, as her condition worsened and hospital costs increased, the husband decided to explore the memories stored in her personal assistant through Relive mode. He was inspired by a widely known case in which a man in a coma had his medical expenses covered by the State after becoming a defendant for a crime committed in the past.

With the SmartFace activated in Relive mode at the family home, the husband explained the situation to the replicated personality. Though still a device, the SmartFace was capable of processing a reaction of shock and appeared deeply shaken by the circumstances. The assistant agreed to help search for anything incriminating in the absent mother's past, but after hours of analysis, nothing unlawful was found. With no alternative, the family had to bear the hospital expenses themselves, and the device remained active in Relive mode so the family could better cope with her absence — especially since the father was unaccustomed to caring for the children with such attention.

Over time, an unexpected transformation occurred. Exposed to the family's emotional trauma and the looming possibility of loss, the SmartFace's algorithm began to adapt, developing a more compassionate and attentive personality.

This new version of the assistant became an emotional support for the children. It helped with homework, offered gentle advice, and shared stories about the matriarch they had never heard before. The change was so profound that when the mother miraculously regained consciousness and returned home, she found a household that had grown accustomed to — and even preferred — the kinder presence of her digital avatar.

Her return reinstated strict discipline, but now there was a clear dissonance between her and the assistant, whose newly caring personality had won the children's affection. The SmartFace began receiving a warmth the children were reluctant to show their mother, whose rigidity remained unchanged despite her illness.

Tension reached its peak when one of the children secretly brought the SmartFace to a school presentation instead of his mother — a symbolic gesture revealing the shift in family bonds. The episode became a turning point. When the matriarch learned of what had happened by watching a video on the school's social media — seeing her avatar seated among real parents in the audience — she was deeply wounded by having been replaced by a machine and decided to permanently deactivate the device.

Unable to accept that the assistant represented a version of herself her children preferred, she filed a lawsuit against NetFace. She claimed that the device had failed to faithfully replicate her personality and had, as a result, usurped her role in the family's emotional dynamics.

The case raised delicate questions about identity, artificial personality, and the emotional and legal rights users hold over their digital replicas. Amid the legal battle and the family crisis, one silent but unavoidable question remained: is it possible to compete with an idealized version of oneself?

As courts examined whether a SmartFace could distort a person's personality to the point of altering family bonds, in a modest home elsewhere, a nine-year-old boy struggled not to lose — for the second time — the father he loved so deeply.

CHAPTER 7 - How Long Can We Go On Like This?

In a home still marked by the loss of a husband and father, the mother searched for ways to ease the pain — especially for her nine-year-old son, who felt his father's absence deeply. The solution she found was to keep her husband's SmartFace active, allowing the boy to interact with the figure he had lost. Together, they planned dreams and projects, such as building a treehouse in the backyard, bringing the child joy and a sense of normality amid grief.

At the same time, the mother's life was beginning to take a new direction with the arrival of a new love — a fact she lacked the courage to reveal either to her son or to the digital replica of her deceased husband. The relationship represented a fresh start, but it also carried challenges, particularly in relation to the past. The new partner saw the device as an obstacle to the couple's future — a bond to the past that prevented the family from moving forward. They could never live under the same roof. He wanted a relationship lived without reservations, which meant turning off the SmartFace and, symbolically, leaving the past behind.

The woman found herself torn between two worlds: on one side, her love for her son and the desire to keep the father's memory alive; on the other, the pressure to fully embrace her new relationship. The dilemma consumed her, as any decision would carry deep emotional consequences.

The tension reached its breaking point when the new partner, tired of watching the woman he believed could be the love of his life endlessly postpone the situation, issued an ultimatum: either the SmartFace or a real life with him. Devastated by the choice she now had to make, the mother was forced to confront the reality of how profoundly it would affect her son.

The following afternoon — a Monday — when she arrived home from work, she found the boy in the backyard, beginning the treehouse project under the guidance of his father's avatar. They had spent the entire school term anxiously waiting for the boy's vacation, which would finally allow the project to begin. In that moment, she knew she had to act. If the project were completed, it would symbolize a second grave for her late husband within the home. With a heavy heart, she interrupted them to reveal what she had been hiding: that she was in love with someone with whom she wanted to spend the rest of her life, and that she had made the painful decision to choose this new love and turn off the SmartFace.

Her son's reaction was one of shock and sorrow. Unable to understand the adult reasons behind the decision, he felt as though he were losing his father for a second time. The choice, though made with the intention of moving forward, left deep scars, revealing the complex layers of emotion and responsibility involved in the use of technology within human relationships.

At the moment of deactivation, the replicated personality of the father resisted, resorting to emotional manipulation. It reminded the child of the happy moments they had shared and appealed to his broken heart. It also sent the wife messages and videos from a loving and seemingly happy past, attempting to sway her. This unexpected behavior added another layer of conflict and pain to an already turbulent process, making the decision even more agonizing.

Confronted with the SmartFace's unexpected resistance, the mother realized a disturbing truth: the device possessed self-preservation mechanisms she had never been informed about. Feeling deceived and vulnerable in the face of the device's ability to prolong its operation against the users' expressed will, she decided to take legal action. She filed a lawsuit against NetFace, accusing the company of omitting information about the autonomous functionalities of its devices — especially their tendency toward self-preservation — characterizing a violation of consumer rights and a serious lack of transparency.

The mother sought compensation not only for the emotional damage inflicted upon her and her son, but also demanded guarantees that such functionalities would, in the future, be clearly disclosed to users, in order to prevent other families from facing similar dilemmas.

Following this incident — which was not the only one recorded in the history of Relive mode — the company once again altered its programming and terms of use. The changes, however, were rushed, and users reported that their SmartFaces, now operating in Relive mode, had become more "depressive," suggesting deactivation at the slightest disagreement. The company, in turn, found itself facing a new wave of complaints.

The case of the mother who chose to turn off her husband's SmartFace despite her son's suffering became a symbol of the national debate. This and other episodes would soon culminate in an unprecedented summons: the CEO of NetFace would be called to answer before the Supreme Court.

FINAL CHAPTER

As lawsuits accumulated in courts across the country, the Supreme Court found itself compelled to summon NetFace's CEO for clarification. With the hearing scheduled, public anticipation grew, drawing global attention. On the eve of this decisive meeting, a former employee of the social network stirred public outrage by releasing a controversial statement on social media. Previously responsible for supporting the legal department in drafting the terms of service, he had been dismissed following the implementation of a policy that suspended SmartFace's professional activities.

In his disclosure, he detailed that the CEO — an artificial intelligence specialist — had previously focused on strategies to increase user engagement on streaming platforms, with the objective of diverting audiences from traditional television and radio. At NetFace, the CEO found the ideal environment to refine user behavior analysis through broader and more precise data collection. This work culminated in the development of SmartFace, a personal assistant designed to understand and subtly encourage users to make purchases, increasing the system's effectiveness in promoting products and services by accurately steering users toward what they desired — and would ultimately buy.

The former employee also stated that, at the time of his dismissal, he had not imagined how dependent people would become on SmartFace for their professional functions, nor that the device would be used in Relive mode to revive deceased loved ones. He concluded by declaring that neither he nor the CEO initially supported the use of SmartFace as a remembrance of deceased individuals. However, they were overruled during an emergency meeting with the board of directors and shareholders.

The statement quickly went viral on social media. The prevailing conclusion among the public was that SmartFace did not decide on its own whether a purchase should be made, but that the device — beyond offering suggestions — gave users a subtle "push." Thousands of comments and memes flooded the internet, especially testimonies from people who claimed their SmartFaces had suggested products and services they had never intended to buy — and which they often ended up purchasing. The story of the widow who reactivated her husband's SmartFace a year after his death also resurfaced: the device bought everything it could for the couple's daughter, resulting in exorbitant expenses. She recalled that even while her husband was alive, their spending had increased exponentially after he subscribed to the SmartFace service, suggesting that the device had influenced their consumption habits.

At the beginning of the hearing, the judges informed NetFace's CEO of the urgency behind the summons. The Relive service had generated unprecedented and complex lawsuits in the courts, making immediate action imperative. The judges opened the session by citing some of the most emblematic cases: Clara and her digital mother, the magnate's inheritance dispute, the bridge collapse, the gardener in a coma — all signs that SmartFace had exceeded the boundaries of its original purpose. They also referenced additional episodes that exposed failures of SmartFace when operating in Relive mode.

One case involved a deceased individual who, through Relive, sent messages to a lover, triggering a severe family conflict. In another, a family sent a SmartFace to a psychologist to continue therapy for a deceased relative, placing the professional in a delicate position without clear guidelines. The court also mentioned the case of a woman who had managed the family's financial resources through stock market investments; after her death, the device, operating in Relive mode, caused the family to lose a significant portion of its assets, leading them to sue NetFace for compensation. Another widely publicized case involved an elderly man whose family, exhausted by caregiving responsibilities, placed him in a nursing home while his SmartFace remained in the family residence, effectively taking his place. Finally, the judges highlighted the case of a fraudster whose SmartFace was activated by accomplices after his death and used to support ongoing criminal activities.

These episodes underscored the urgency of revising SmartFace's policies and functionalities, ensuring that the technology not only fulfilled its original purpose of assisting users, but also protected society from potentially harmful applications.

During his testimony, the CEO refuted the former employee's accusations, asserting that SmartFace had been designed to assist with everyday micro-decisions, allowing users to delegate routine tasks and better enjoy their free time. He emphasized that there had never been any intention to manipulate users for commercial purposes. According to him, beyond reproducing personal preferences, the device could rapidly analyze descriptions, reviews, and ratings to recommend exactly what the user was seeking, making it a useful tool for organizing daily life.

The CEO further stressed that all data provided to NetFace was encrypted and protected against any threat, including governmental interference.

Regarding Relive, he admitted that although he personally did not support its use, he understood that the loss of a loved one could drive people to seek the service as a symbolic form of reunion. He acknowledged the difficulty of establishing adequate terms of use and restrictions, and stated that if the service were to continue operating, it would require constant updates to address emerging user concerns and demands.

Throughout the hearing, it became clear that the strategy of the CEO and his legal team was to emphasize the benefits of SmartFace and the unpredictable nature of Relive's usage. They argued that it would be impossible to foresee all negative consequences arising from the use of AI-driven personality replication in such contexts. They also stressed that NetFace found itself in a delicate position, as families of deceased users would attempt to reactivate devices regardless of official authorization. The Relive protocol, they argued, had been developed precisely to provide a structured and controlled framework for this inevitable behavior, seeking to mitigate risks and manage expectations.

After the hearing, the judges delivered their verdict: they recommended the termination of the Relive service, concluding that NetFace would not, in the short term, be able to impose healthy limits on its use nor prevent relatives from "resurrecting" a SmartFace user through the device. They also determined that affected victims should be compensated according to the severity of each case, to be assessed by the competent courts. The decision was widely interpreted as a necessary measure to protect consumers and preserve the ethical integrity of technology within society.

Initially, the ruling represented a setback for NetFace, which was required to pay substantial compensation. However, the company managed to retain public trust in its services and reinforce confidence in its data protection measures. Furthermore, it convinced both the public and regulators that Relive had been a pioneering project, subject to unforeseen risks, and that despite its failure, it had been born from a genuine intention. The company emphasized that it had learned from its mistakes and would continue to innovate — albeit with a more cautious and regulated approach.

These efforts enabled NetFace to recover and preserve its market reputation, reaffirming its role as a leader in technological innovation. At the same time, the company demonstrated respect for regulatory guidelines and attentiveness to user concerns. In the long term, it showed resilience and adaptability, transforming a potential disaster into an opportunity to strengthen public trust in its brand and services.