The Human Cost Of Our AI-Driven Future

    Behind AI’s rapid advance and our sanitized feeds, an invisible global workforce endures unimaginable trauma.

    Velvet Spectrum for Noema Magazine

    A blurred screen flashes before our eyes, accompanied by a deceptively innocuous “sensitive content” message with a crossed-out eye emoji. The warning’s soft design and playful icon belie the gravity of what lies beneath. With a casual flick of our fingers, we scroll past, our feeds refreshing with cat videos and vacation photos. But in the shadows of our digital utopia, a different reality unfolds.

    In cramped, poorly lit warehouses around the world, an army of invisible workers hunches over flickering screens. Their eyes strain, fingers hovering over keyboards, as they confront humanity’s darkest impulses — some darker than their wildest nightmares. They cannot look away. They cannot scroll past. For these workers, there is no trigger warning.

    Tech giants trumpet the power of AI in content moderation, painting pictures of omniscient algorithms keeping our digital spaces safe. They suggest a utopian vision of machines tirelessly sifting through digital detritus, protecting us from the worst of the web.

    But this is a comforting lie.

    The reality is far more human and far more troubling. This narrative serves multiple purposes: it assuages user concerns about online safety, justifies the enormous profits these companies reap and deflects responsibility — after all, how can you blame an algorithm?

    However, current AI systems are nowhere near capable of understanding the nuances of human communication, let alone making complex ethical judgments about content. Sarcasm, cultural context and subtle forms of hate speech often slip through the cracks of even the most sophisticated algorithms.

    And while automated content moderation can, to a degree, handle more mainstream languages, content in low-resourced languages typically requires recruiting moderators from the countries where those languages are spoken.

    Behind almost every AI decision, a human is tasked with making the final call and bearing the burden of judgment — not some silicon-based savior. AI is often a crude first filter. Take Amazon’s supposedly automated stores: The Information reported that, instead of advanced AI systems, Amazon relied on around 1,000 workers, primarily based in India, to manually track customers and record their purchases.

    Amazon told AP and other outlets that it did hire workers to watch videos and validate people’s purchases, but denied employing 1,000 of them and rejected the implication that workers monitored shoppers live. Similarly, Facebook’s “AI-powered” M assistant was more human than software. And so, the illusion of AI capability is often maintained at the cost of hidden human labor.

    “We were the janitors of the internet,” Botlhokwa Ranta, 29, a former content moderator from South Africa now living in Nairobi, Kenya, told me two years after her Sama contract was terminated. Speaking from her home, her voice heavy, she continued: “We cleaned up the mess so everyone else can enjoy a sanitized online world.”

    And so, while we sleep, many toil. While we share, these workers shield. While we connect, they confront the disconnect between our curated online experience and the reality of raw, unfiltered human nature.

    The glossy veneer of the tech industry conceals a raw, human reality that spans the globe. From the outskirts of Nairobi to the crowded apartments of Manila, from Syrian refugee communities in Lebanon to the immigrant communities in Germany and the call centers of Casablanca, a vast network of unseen workers power our digital world. The stories of these workers are often a tapestry of trauma, exploitation and resilience, ones that reveal the true cost of our AI-driven future.

    We may marvel at the chatbots and automated systems that Sam Altman and his ilk extol, but this belies the urgent question below the surface: Will our godlike AI systems serve as merely a smokescreen, concealing a harrowing human reality?

    In our relentless pursuit of technological advancement, we must ask: What price are we willing to pay for our digital convenience? And in this race towards an automated future, are we leaving our humanity in the dust?

    Abrha’s Story

    In February 2021, Abrha’s world shattered as his town in Tigray came under fire from both Ethiopian and Eritrean forces in the Tigray war, the deadliest modern-day conflict, and one that a report by the U.S.-based New Lines Institute has rightly called a genocide.

    With just a small backpack and whatever cash he could grab, Abrha, then 26, fled to Nairobi, Kenya, leaving behind a thriving business, family and friends who couldn’t escape. As Tigray suffered under a more than two-year internet shutdown imposed by Ethiopia’s government, he spent months in agonizing uncertainty about his family’s fate.

    “Will our godlike AI systems serve as merely a smokescreen, concealing a harrowing human reality?”

    Then, in a cruel twist of irony, Abrha was recruited by the Kenyan branch of Sama — a San Francisco-based company that presents itself as an ethical AI training data provider — to moderate content originating mostly from the very conflict he had just fled. The company needed people fluent in Tigrinya and Amharic, the languages of that conflict.

    Five days a week, eight hours a day, Abrha sat in the Sama warehouse in Nairobi, moderating content from the very conflict he had escaped — sometimes even a bombing in his hometown. Each day brought a deluge of hate speech directed at Tigrayans, and dread that the next dead body might be his father’s, the next rape victim his sister.

    An ethical dilemma also weighed heavily on him: How could he remain neutral in a conflict where he and his people were the victims? How could he label retaliatory content generated by his people as hate speech? The pressure became unbearable.

    Though Abrha once abhorred smoking, he became a chain smoker who always had a cigarette in hand as he navigated this digital minefield of trauma — each puff a futile attempt to soothe the pain of his people’s suffering.

    The horror of his work reached a devastating peak when Abrha came across his cousin’s body while moderating content. It was a brutal reminder of the very real and personal stakes of the conflict he was being forced to witness daily through a computer screen.

    After he and other content moderators had their contracts terminated by Sama, Abrha found himself in a dire situation. Unable to secure another job in Nairobi, he was left to grapple with his trauma alone, without the support or resources he desperately needed. The weight of his experiences as a content moderator, coupled with the lingering effects of fleeing conflict, took a heavy toll on his mental health and financial stability.

    Despite the situation in Tigray remaining precarious in the aftermath of the war, Abrha felt he had no choice but to return to his homeland. He made the difficult journey back a few months ago, hoping to rebuild his life from the ashes of conflict and exploitation. His story serves as a stark reminder of the long-lasting impact of content moderation work and the vulnerability of those who perform it, often far from home and support systems.

    Kings’ Nightmarish Reality

    Growing up in Kibera, one of the world’s largest slums, Kings, 34, who asked that Noema use only his first name so he could discuss personal health matters freely, dreamed of a better life for his young family. Like many young people raised in the Nairobi slum, he was unemployed.

    When Sama came calling, Kings saw it as his chance to break into the tech world. Starting as a data annotator, labeling and categorizing data to train AI systems, he was thrilled despite the small pay. When the company offered to promote him to content moderator with a slight pay increase, he jumped at the opportunity, unaware of the implications of the decision.

    Kings soon found himself confronting content that haunted him day and night. The worst was what they coded as CSAM, or child sexual abuse material. Day after day, he sifted through texts, pictures and videos vividly depicting the violation of children. “I saw videos of children’s vaginas tearing from the abuse,” he recounted, his voice hollow. “Each time I closed my eyes at home, that’s all I could see.”

    The trauma infected every aspect of Kings’ life. At the age of 32, he had trouble being intimate with his wife; images of abused children plagued his mind. The company’s mental health support was grossly inadequate, Kings said. Counselors were seemingly ill-equipped to handle the depth of his trauma.

    Eventually, the strain became too much. Kings’ wife, unable to cope with the sexual deprivation and the changes in his behavior, left him. By the time Kings left Sama, he was a shell of his former self — broken both mentally and financially — his dreams of a better life shattered by a job he thought would be his salvation.

    Losing Faith In Humanity

    Ranta’s story begins in the small South African township of Diepkloof, where life moves in predictable cycles. A mother at 21, she was 27 years old when we spoke, and she reflected on the harsh reality faced by many young women in her community: six out of ten girls become pregnant by 21, entering a world where job prospects are already scarce and single motherhood makes them even more elusive.

    “Behind almost every AI decision, a human is tasked with making the final call and bearing the burden of judgment — not some silicon-based savior.”

    When Sama came recruiting, promising a better life for her and her child, Ranta saw it as her ticket to a brighter future. She applied and soon found herself in Nairobi, far from everything familiar. The promises quickly unraveled upon her arrival. Support for reuniting with her child, whom she had left behind in South Africa, never materialized as promised.

    When she inquired about this, company representatives told her that they could no longer cover the full cost as initially promised, and offered only partial support, to be deducted from her pay. Attempts to get official comment from Sama were unsuccessful, with unofficial sources citing the ongoing legal proceedings with workers as the reason.

    When Ranta’s sister died, she said her boss gave her a few days off but wouldn’t let her switch to less traumatic content streams when she returned to moderating content — even though there was an opening. It was as if they expected her and other workers to operate like machines, capable of switching off one program and booting up another at will.

    Things came to a head during a complicated pregnancy. She wasn’t allowed to stay on bedrest as ordered by her doctor, and then just four months after giving birth to her second daughter, the infant was hospitalized.

    She then learned that the company had stopped making health insurance contributions shortly after she started working, despite continued deductions from her paycheck. Now she was saddled with bills she couldn’t afford to pay. 

    Ranta’s role involved moderating content related to female sexual abuse, xenophobia, hate speech, racism and domestic violence, mostly from her native South Africa and Nigeria. While she appreciated the importance of her job, she lamented the lack of adequate psychological counseling, training and support.

     Ranta found herself losing faith in humanity. “I saw things that I never thought possible,” she told me. “How can human beings claim to be the intelligent species after what I’ve seen?”

    Sama’s CEO has expressed regret over signing the content moderation contract with Meta. A Meta spokesperson said they require all partner companies to provide “24/7 on-site support with trained practitioners, an on-call service, and access to private healthcare from the first day of employment.”

    The representative also said it offered “technical solutions to limit exposure to graphic material as much as possible.” However, the experiences shared by workers like Abrha, Kings and Ranta paint a starkly different picture, suggesting a significant gap between Meta’s stated policies and the lived realities of content moderators.

    Global Perspectives: Similar Struggles Across Borders

    The experiences of Abrha, Kings and Ranta are not isolated incidents. In Kenya alone, I spoke to more than 20 workers who shared similar stories. Across the globe, in countries like Germany, Venezuela, Colombia, Syria and Lebanon, data workers we spoke to as part of our Data Workers’ Inquiry project told us they faced similar challenges.

    In Germany, despite all its programs to help new arrivals, immigrants with uncertain status still end up in roles like Abrha’s, reviewing content from their home countries. These workers’ precarious visa situations added a layer of vulnerability. Many told us that despite facing exploitation, they felt unable to speak out publicly. Because their employment is tied to their visas, the risk of being fired and deported looms.

    In Venezuela and Colombia, economic instability drives many to seek work in the data industry. While not always directly involved in content moderation, data annotators there often work with challenging datasets that can take a toll on their mental well-being.

    Reality often doesn’t match what was advertised. Even if data workers in Syria and Syrian refugees in Lebanon aren’t moderating content, their work often intersects with digital remnants of the conflict they’ve experienced or fled, adding a layer of emotional strain to their already demanding jobs.

    The widespread use of Non-Disclosure Agreements (NDAs) is yet another layer in the uneven power dynamic involving such vulnerable individuals. These agreements, required as part of workers’ employment contracts, silence workers and keep their struggles hidden from public view.

    The implied threat of these NDAs often extends beyond the period of employment, casting a long shadow over the workers’ lives even after they leave their jobs. Many workers who spoke to us insisted on anonymity out of fear of legal repercussions.

    These workers, in places like Bogotá, Berlin, Caracas and Damascus, reported feeling abandoned by the companies profiting off their labor. The so-called “wellness programs” offered by Sama were often ill-equipped to address the deep-seated trauma these workers were experiencing, employees told me.

    “We were the janitors of the internet. We cleaned up the mess so everyone else can enjoy a sanitized online world.”

    — Botlhokwa Ranta

    Their stories make clear that behind the sleek facade of our digital world lies a hidden workforce that bears immense emotional burdens, so we don’t have to. Their experiences raise urgent questions about the ethical implications of data work and the human cost of maintaining our digital infrastructure. The global nature of this issue underscores a troubling truth: The exploitation of data workers is not a bug, it’s a systemic feature of the industry.

    It’s a global web of struggle, spun by tech giants and maintained by the silence of those trapped within it, as documented by Mophat Okinyi and Richard Mathenge, former content moderators and now co-researchers in our Data Workers’ Inquiry project. The two have seen these patterns repeat across a slew of different companies in multiple countries. Their experiences, both as workers and now as advocates, underscore the global nature of this exploitation.

    The Trauma Behind the Screen

    Before I traveled to Kenya, I thought I understood the challenges data workers face through my conversations with some online. However, upon arrival, I was confronted with stories of individual and institutional depravity that left me with secondary trauma and nightmares for weeks. But for the data workers themselves, their trauma manifests in two primary ways: direct trauma from the job itself and systemic issues that compound the trauma.

    1. Direct Trauma 

    Every day, content moderators are forced to confront the darkest corners of humanity. They wade through a toxic swamp of violence, hate speech, sexual abuse and graphic imagery. 

    This constant exposure to disturbing content takes a toll. “It goes beyond what makes people human,” Kings told me. “It’s like being forced to drink poison every day, knowing it’s killing you, but you can’t stop because it’s your job.” The images and videos linger after work, haunting their dreams and infiltrating their personal lives.

    Many moderators report symptoms of post-traumatic stress and vicarious trauma: nightmares, flashbacks and severe anxiety are common. Some develop a deep-seated mistrust of the world around them, forever changed by the constant exposure to human cruelty. As one worker told me, “I came into this job believing in the basic goodness of people. Now, I’m not sure I believe in anything anymore. If people can do this, then what’s there to believe?”

    When the shift ends, trauma follows these workers home. For Kings and Okinyi, like so many others, their relationships crumbled under the weight of what they saw but could not speak of. Children grow up with emotionally distant parents, partners become estranged, and the worker is left isolated in their pain.

    Many moderators report a fundamental shift in their worldview. They become hypervigilant, seeing potential threats everywhere. Okinyi mentioned how one of his former colleagues had to move from the city to the less crowded countryside due to paranoia over potential outbursts of violence. In a zine she created for the Data Workers’ Inquiry about Sama’s female content moderators, one of Ranta’s interviewees spoke of how the job made her constantly question her worth and her ability to mother her children.

    2. Systemic Issues

    Beyond the immediate trauma of the content itself, moderators face a barrage of systemic issues that exacerbate their suffering:

    • Job Insecurity: Many moderators, especially those in precarious living situations like refugees or economic migrants, live in constant fear of losing their jobs. This fear often prevents them from speaking out about their working conditions or seeking help. Companies often exploit this vulnerability.
    • Lack of Mental Health Support: While companies tout their wellness programs, the reality falls far short. As Kings experienced, the counseling provided is often inadequate, with therapists ill-equipped to handle the unique trauma of content moderation. Sessions are often brief and fail to address more underlying, deep-seated trauma.
    • Unrealistic Performance Metrics: Moderators often must review hundreds of pieces of content per hour. This relentless pace leaves no time to process the disturbing material they’ve seen, forcing them to bottle up their emotions. The focus on quantity over quality not only affects the accuracy of moderation but also exacerbates the psychological toll of the work. As Abrha told me: “Imagine being expected to watch a video of someone being killed, and then immediately move on to the next post. There’s no time to breathe, let alone process what we’ve seen.”
    • Constant Surveillance: As if the content itself weren’t stressful enough, moderators are constantly monitored. Practically every decision and every second of their shift is scrutinized, adding another layer of pressure to an already overwhelming job. This surveillance extends to bathroom breaks, idle time between tasks and even facial expressions while reviewing content: supervisors watch workers through computer tracking software, cameras and, in some cases, physical observation, studying their faces to gauge reactions and ensure they maintain a level of detachment or “professionalism” while reviewing disturbing content. As a result, workers told me they felt they couldn’t even react naturally to what they were viewing. Workers were given an hour of break time daily for all their extraneous needs — eating, stretching, the bathroom — and any additional time spent on those or other non-work activities would be scrutinized and added to their shifts. Abrha also mentioned that workers had to put their phones in lockers, further isolating them and limiting their ability to communicate with the outside world during their shifts.

    “The exploitation of data workers is not a bug, it’s a systemic feature of the industry.”

    And the ripples extend beyond the family: Friends drift away, unable to relate to the moderator’s new, darker perspective on life; social interactions become strained, as workers struggle to engage in “normal” conversations after spending their days immersed in the worst of human behavior.

    In essence, the trauma of content moderation reshapes entire family dynamics and social networks, creating a cycle of isolation and suffering that extends far beyond the individual.

    Traumatizing Humans To Create “Intelligent” Systems

    Perhaps the cruelest irony is that we’re traumatizing people to create the illusion of machine intelligence. The trauma inflicted on human moderators is justified by the promise of future AI systems that will not require human intervention. Yet their development requires ever more human labor, and often the sacrifice of workers’ mental health.

    Moreover, the focus on AI development often diverts resources and attention from improving conditions for human workers. Companies invest billions in machine learning algorithms while neglecting the basic mental health needs of their human moderators.

    The AI illusion distances users from the reality of content moderation, much like factory farming distances us from the treatment of egg-laying chickens. This collective willful ignorance allows exploitation to continue unchecked. The AI narrative is a smokescreen that obscures a deeply unethical labor practice that trades human well-being for a facade of technological progress.

    Digital Workers Of The World Rise!

    In the face of exploitation and trauma, data workers have not been passive. Across the globe, workers have attempted to unionize, but their efforts have often been hindered by various actors. In Kenya, workers formed the African Content Moderators Union, an ambitious effort to unite workers from different African countries.

    Mathenge, who is also part of the union’s leadership, told me he believes he was dismissed from his role as a team lead due to his union activities. This retaliation sent a chilling message to other workers who were considering organizing.

    The struggle for workers’ rights recently gained significant legal traction. On Sept. 20, a Kenyan court ruled that Meta could be sued there over the dismissal of dozens of content moderators by its contractor, Sama. The court upheld earlier rulings that Meta could face trial over the dismissals and could be sued in Kenya over alleged poor working conditions.

    The latest ruling has potentially far-reaching implications for how the tech giant works with its content moderators globally. It also marks a significant step forward in the ongoing battle for fair treatment and recognition of data workers’ rights.

    The obstacles extend beyond the company level. Organizations employ union-busting tactics, often firing workers who agitate for unionization, Mathenge said. In my conversations with workers, journalists and civil society officials in the Kenyan digital labor space, I heard whispers of senior government officials demanding bribes to formally register the union, adding another layer of complexity to the unionization process.

    Perhaps most bizarrely, according to an official from the youth-led civic organization Siasa Place, when workers in Kenya attempted to form their own union, they were instead told to join the postal and telecommunication union, a suggestion that ignores the vast differences between these industries and the unique challenges faced by today’s data workers.

    Despite these setbacks, workers have continued to find innovative ways to organize and advocate for their rights. Okinyi, together with Mathenge and Kings, formed the Techworker Community Africa, a non-governmental organization focused on lobbying against harmful tech practices like labor exploitation.

    Other organizations have also stepped up to help the workers: Siasa Place and digital rights lawyers like Mercy Mutemi have petitioned the Kenyan parliament to investigate working conditions at AI firms.

    A Path To Ethical AI & Fair Labor Practices

    Industry-wide Mental Health Protocols

    We need a comprehensive, industry-wide approach to mental health support. Based on my research and conversations with workers, I propose a multi-faceted approach not offered by existing support systems.

    Many existing company programs are superficial “wellness programs” that fail to address the deep-seated trauma experienced by data workers. These may include occasional group sessions or access to general counseling services, but they are typically insufficient and not tailored to the specific traumas of data work.

    My proposed approach includes mandatory, regular counseling sessions with therapists trained specifically in trauma related to data work. Additionally, companies should implement regular mental health check-ins, provide access to 24/7 crisis support, and offer long-term therapy services, which are largely absent in current setups.

    Crucially, these services must be culturally competent, recognizing the diverse backgrounds of data workers globally. This is a significant departure from the current one-size-fits-all approach that often fails to consider the cultural contexts of workers in places like Nairobi, Manila or Bogotá. The proposed system would offer support in workers’ native languages and be sensitive to cultural nuances surrounding mental health — aspects sorely lacking in many existing programs.

    “Companies invest billions in machine learning algorithms while neglecting the basic mental health needs of their human moderators.”

    Moreover, unlike the current system where mental health support often ends with employment, this new approach would extend support beyond the tenure of the job, acknowledging the long-lasting impacts of this work. This comprehensive, long-term and culturally sensitive approach represents a fundamental shift from the current tokenistic and often ineffective mental health support offered to data workers.

    “Trauma Cap” Implementation

    Just as we have radiation exposure limits for nuclear workers, we need trauma exposure limits for data workers. This “trauma cap” would set strict limits on the amount and type of disturbing content a worker can be exposed to within a given timeframe.

    Implementation could involve rotating workers between high-impact and low-impact content, mandatory breaks after exposure to particularly traumatic material, limits on consecutive days working with disturbing content and the allocation of annual “trauma leave” for mental health recovery.

    We need a system that tracks not just the quantity of content reviewed but also its emotional impact. For example, a video of extreme violence should count more toward a worker’s cap than a spam post.
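    To make the idea concrete, a weighted exposure cap could be sketched in a few lines of code. This is a minimal illustration, not a clinical instrument: the content categories, severity weights and per-shift cap below are all hypothetical placeholders, and any real system would need weights grounded in mental health research.

```python
from dataclasses import dataclass, field

# Hypothetical severity weights — illustrative values only,
# not derived from any clinical standard.
SEVERITY_WEIGHTS = {
    "spam": 0.1,
    "hate_speech": 1.0,
    "graphic_violence": 5.0,
}

DAILY_CAP = 25.0  # illustrative per-shift limit on weighted exposure


@dataclass
class ExposureTracker:
    """Tracks one moderator's weighted trauma exposure within a shift."""
    total: float = 0.0
    log: list = field(default_factory=list)

    def can_review(self, category: str) -> bool:
        """True if one more item of this category stays within the cap."""
        return self.total + SEVERITY_WEIGHTS[category] <= DAILY_CAP

    def record(self, category: str) -> None:
        """Log a reviewed item and add its weight to the running total."""
        weight = SEVERITY_WEIGHTS[category]
        self.total += weight
        self.log.append((category, weight))


tracker = ExposureTracker()
tracker.record("spam")              # barely moves the meter
tracker.record("graphic_violence")  # counts far more toward the cap
print(tracker.total)                # 5.1
print(tracker.can_review("graphic_violence"))  # True: 5.1 + 5.0 <= 25.0
```

    The point of the sketch is the design choice: once `can_review` returns false for high-impact categories, a scheduler could route the worker to low-impact queues or trigger a mandatory break, implementing the rotation described above.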

    Independent Oversight Body

    Self-regulation by tech companies has proven insufficient; it’s essentially entrusting a jackal with the chicken coop. We need an independent body with the power to audit, enforce standards and impose penalties when necessary.

    This oversight body should consist of ethicists, former data workers, mental health professionals and human rights experts. It should have the authority to conduct unannounced inspections of data work facilities, set and enforce industry-wide standards for working conditions and mental health support, and provide a safe channel for workers to report violations without fear of retaliation. Crucially, any oversight body must include the voices of current and former data workers who truly understand the challenges of such work.

    The Role Of Consumers & The Public In Demanding Change

    While industry reforms and regulatory oversight are crucial, the power of public pressure cannot be overstated. As consumers of digital content and participants in online spaces, we all have a role to play in demanding more ethical practices. This begins with informed consumption: educating ourselves about the human cost behind content moderation.

    Before sharing content, especially potentially disturbing material, we should consider the moderator who might have to review it. This awareness might influence our decisions about what we post or share. We must demand transparency from tech companies about their content moderation practices.

    We can use companies’ own platforms to hold them accountable by publicly asking questions about worker conditions and mental health support. We should support companies that prioritize ethical labor practices and consider boycotting those that don’t.

    Moreover, as AI tools become increasingly prevalent in our digital landscape, we must also educate ourselves about the hidden costs behind these seemingly miraculous technologies. Tools like ChatGPT and DALL-E are the product of immense human labor and ethical compromises.

    These AI systems are built on the backs of countless invisible individuals: content moderators exposed to traumatic material, data labelers working long hours for low wages and artists whose creative works have been exploited without consent or compensation. In addition to the staggering human cost, the environmental toll of these technologies is alarming and often overlooked.

    From the massive energy consumption of data centers to the mountains of electronic waste generated, the ecological footprint of AI is a critical issue that demands our immediate attention and action. By understanding these realities, we can make more informed choices about the AI tools we use and advocate for fair compensation and recognition of the human labor that makes them possible.

    Political action is equally important. We need to advocate for legislation that protects data workers, urge our political representatives to regulate the tech industry, and support political candidates who prioritize digital ethics and fair labor practices.

    It’s crucial to use our own platforms to spread awareness about the realities of data work, sharing the stories of people like Abrha, Kings and Ranta and encouraging discussion of the ethical implications of our digital consumption.

    We can follow and support organizations like the African Content Moderators Union and NGOs focused on digital labor rights and amplify the voices of data workers speaking out about their experiences to help bring about meaningful change.

    Most people have no idea what goes on behind their sanitized social media feeds and the AI tools they use daily. If they knew, I believe they would demand change. Public support is necessary to ensure the voices of data workers are heard.

    By implementing these solutions and harnessing the power of public demand, we can work toward a future where the digital world we enjoy doesn’t come at the cost of human dignity and mental health. It’s a challenging path, but one we must traverse if we are to create a truly ethical digital ecosystem.

    By Adio Dinika - From NoemaMag
