The Human Cost Of Our AI-Driven Future

    Behind AI’s rapid advance and our sanitized feeds, an invisible global workforce endures unimaginable trauma.

    Velvet Spectrum for Noema Magazine

    A blurred screen flashes before our eyes, accompanied by a deceptively innocuous “sensitive content” message with a crossed-out eye emoji. The warning’s soft design and playful icon belie the gravity of what lies beneath. With a casual flick of our fingers, we scroll past, our feeds refreshing with cat videos and vacation photos. But in the shadows of our digital utopia, a different reality unfolds.

    In cramped, poorly lit warehouses around the world, an army of invisible workers hunches over flickering screens. Their eyes strain, fingers hovering over keyboards, as they confront humanity’s darkest impulses — some darker than their wildest nightmares. They cannot look away. They cannot scroll past. For these workers, there is no trigger warning.

    Tech giants trumpet the power of AI in content moderation, painting pictures of omniscient algorithms keeping our digital spaces safe. They suggest a utopian vision of machines tirelessly sifting through digital detritus, protecting us from the worst of the web.

    But this is a comforting lie.

    The reality is far more human and far more troubling. This narrative serves multiple purposes: it assuages user concerns about online safety, justifies the enormous profits these companies reap and deflects responsibility — after all, how can you blame an algorithm?

    However, current AI systems are nowhere near capable of understanding the nuances of human communication, let alone making complex ethical judgments about content. Sarcasm, cultural context and subtle forms of hate speech often slip through the cracks of even the most sophisticated algorithms.

    And while automated content moderation can, to a degree, be implemented for more mainstream languages, content in low-resource languages typically requires recruiting moderators from the countries where those languages are spoken.

    Behind almost every AI decision, a human is tasked with making the final call and bearing the burden of judgment — not some silicon-based savior. AI is often a crude first filter. Take Amazon’s supposedly automated stores: The Information reported that, instead of advanced AI systems, Amazon relied on around 1,000 workers, primarily based in India, to manually track customers and record their purchases.

    Amazon told the AP and others that it did hire workers to watch videos and validate shoppers’ purchases, but denied that it had hired 1,000 people or that workers monitored shoppers live. Similarly, Facebook’s “AI-powered” M assistant is more human than software. And so, the illusion of AI capability is often maintained at the cost of hidden human labor.

    “We were the janitors of the internet,” Botlhokwa Ranta, 29, a former content moderator from South Africa now living in Nairobi, Kenya, told me two years after her Sama contract was terminated. Speaking from her home, her voice was heavy as she continued. “We cleaned up the mess so everyone else can enjoy a sanitized online world.”

    And so, while we sleep, many toil. While we share, these workers shield. While we connect, they confront the disconnect between our curated online experience and the reality of raw, unfiltered human nature.

    The glossy veneer of the tech industry conceals a raw, human reality that spans the globe. From the outskirts of Nairobi to the crowded apartments of Manila, from Syrian refugee communities in Lebanon to immigrant communities in Germany and the call centers of Casablanca, a vast network of unseen workers powers our digital world. These workers’ stories are a tapestry of trauma, exploitation and resilience, one that reveals the true cost of our AI-driven future.

    We may marvel at the chatbots and automated systems that Sam Altman and his ilk extol, but this marvel belies the urgent question below the surface: Will our godlike AI systems serve as merely a smokescreen, concealing a harrowing human reality?

    In our relentless pursuit of technological advancement, we must ask: What price are we willing to pay for our digital convenience? And in this race towards an automated future, are we leaving our humanity in the dust?

    Abrha’s Story

    In February 2021, Abrha’s world shattered as his town in Tigray came under fire from both Ethiopian and Eritrean forces in the Tigray war, one of the deadliest conflicts of modern times, which a report by the U.S.-based New Lines Institute characterized as a genocide.

    With just a small backpack and whatever cash he could grab, Abrha, then 26, fled to Nairobi, Kenya, leaving behind a thriving business, family and friends who couldn’t escape. As Tigray suffered under a more than two-year internet shutdown imposed by Ethiopia’s government, he spent months in agonizing uncertainty about his family’s fate.

    “Will our godlike AI systems serve as merely a smokescreen, concealing a harrowing human reality?”

    Then, in a cruel twist of irony, Abrha was recruited by the Kenyan branch of Sama — a San Francisco-based company that presents itself as an ethical AI training data provider — because it needed people fluent in Tigrinya and Amharic, the languages of the conflict he had just fled, to moderate content originating largely from that same conflict.

    Five days a week, eight hours a day, Abrha sat in the Sama warehouse in Nairobi, moderating content from the very conflict he had escaped — sometimes even a bombing in his hometown. Each day brought a deluge of hate speech directed at Tigrayans, and dread that the next dead body might be his father, the next rape victim his sister.

    An ethical dilemma also weighed heavily on him: How could he remain neutral in a conflict where he and his people were the victims? How could he label retaliatory content generated by his people as hate speech? The pressure became unbearable.

    Though Abrha once abhorred smoking, he became a chain smoker who always had a cigarette in hand as he navigated this digital minefield of trauma — each puff a futile attempt to soothe the pain of his people’s suffering.

    The horror of his work reached a devastating peak when Abrha came across his cousin’s body while moderating content. It was a brutal reminder of the very real and personal stakes of the conflict he was being forced to witness daily through a computer screen.

    After he and other content moderators had their contracts terminated by Sama, Abrha found himself in a dire situation. Unable to secure another job in Nairobi, he was left to grapple with his trauma alone, without the support or resources he desperately needed. The weight of his experiences as a content moderator, coupled with the lingering effects of fleeing conflict, took a heavy toll on his mental health and financial stability.

    Despite the situation in Tigray remaining precarious in the aftermath of the war, Abrha felt he had no choice but to return to his homeland. He made the difficult journey back a few months ago, hoping to rebuild his life from the ashes of conflict and exploitation. His story serves as a stark reminder of the long-lasting impact of content moderation work and the vulnerability of those who perform it, often far from home and support systems.

    Kings’ Nightmarish Reality

    Growing up in Kibera, one of the world’s largest slums, Kings, 34, who asked that Noema use only his first name so he could speak freely about personal health matters, dreamed of a better life for his young family. Like many young people raised in the Nairobi slum, he was unemployed.

    When Sama came calling, Kings saw it as his chance to break into the tech world. Starting as a data annotator, labeling and categorizing data to train AI systems, he was thrilled despite the small pay. When the company offered to promote him to content moderator with a slight pay increase, he jumped at the opportunity, unaware of the implications of the decision.

    Kings soon found himself confronting content that haunted him day and night. The worst was what they coded as CSAM, or child sexual abuse material. Day after day, he sifted through texts, pictures and videos vividly depicting the violation of children. “I saw videos of children’s vaginas tearing from the abuse,” he recounted, his voice hollow. “Each time I closed my eyes at home, that’s all I could see.”

    The trauma infected every aspect of Kings’ life. At the age of 32, he had trouble being intimate with his wife; images of abused children plagued his mind. The company’s mental health support was grossly inadequate, Kings said. Counselors were seemingly ill-equipped to handle the depth of his trauma.

    Eventually, the strain became too much. Kings’ wife, unable to cope with the sexual deprivation and the changes in his behavior, left him. By the time Kings left Sama, he was a shell of his former self — broken both mentally and financially — his dreams of a better life shattered by a job he thought would be his salvation.

    Losing Faith In Humanity

    Ranta’s story begins in the small South African township of Diepkloof, where life moves in predictable cycles. A mother at 21, she was 27 years old when we spoke, and she reflected on the harsh reality faced by many young women in her community: six out of ten girls become pregnant by 21, entering a world where job prospects are already scarce and single motherhood makes them even more elusive.

    “Behind almost every AI decision, a human is tasked with making the final call and bearing the burden of judgment — not some silicon-based savior.”

    When Sama came recruiting, promising a better life for her and her child, Ranta saw it as her ticket to a brighter future. She applied and soon found herself in Nairobi, far from everything familiar. The promises quickly unraveled upon her arrival. Support for reuniting with her child, whom she had left behind in South Africa, never materialized as promised.

    When she inquired, company representatives told her that they could no longer cover the full cost as initially promised and offered only partial support, to be deducted from her pay. Attempts to get official comment from Sama were unsuccessful; unofficial sources cited the company’s ongoing legal proceedings with workers as the reason.

    When Ranta’s sister died, she said her boss gave her a few days off but wouldn’t let her switch to less traumatic content streams when she returned to moderating content — even though there was an opening. It was as if they expected her and other workers to operate like machines, capable of switching off one program and booting up another at will.

    Things came to a head during a complicated pregnancy. She wasn’t allowed to stay on bedrest as ordered by her doctor, and then just four months after giving birth to her second daughter, the infant was hospitalized.

    She then learned that the company had stopped making health insurance contributions shortly after she started working, despite continued deductions from her paycheck. Now she was saddled with bills she couldn’t afford to pay. 

    Ranta’s role involved moderating content related to female sexual abuse, xenophobia, hate speech, racism and domestic violence, mostly from her native South Africa and Nigeria. While she appreciated the importance of her job, she lamented the lack of adequate psychological counseling, training and support.

     Ranta found herself losing faith in humanity. “I saw things that I never thought possible,” she told me. “How can human beings claim to be the intelligent species after what I’ve seen?”

    Sama’s CEO has expressed regret over signing the content moderation contract with Meta. A Meta spokesperson said they require all partner companies to provide “24/7 on-site support with trained practitioners, an on-call service, and access to private healthcare from the first day of employment.”

    The representative also said it offered “technical solutions to limit exposure to graphic material as much as possible.” However, the experiences shared by workers like Abrha, Kings and Ranta paint a starkly different picture, suggesting a significant gap between Meta’s stated policies and the lived realities of content moderators.

    Global Perspectives: Similar Struggles Across Borders

    The experiences of Abrha, Kings and Ranta are not isolated incidents. In Kenya alone, I spoke to more than 20 workers who shared similar stories. Across the globe, in countries like Germany, Venezuela, Colombia, Syria and Lebanon, data workers we spoke to as part of our Data Workers Inquiry project told us they faced similar challenges.

    In Germany, despite all its programs to help new arrivals, immigrants with uncertain status still end up in roles like Abrha’s, reviewing content from their home countries. These workers’ precarious visa situations added a layer of vulnerability. Many told us that despite facing exploitation, they felt unable to speak out publicly. Because their employment is tied to their visas, the risk of being fired and deported looms.

    In Venezuela and Colombia, economic instability drives many to seek work in the data industry. While not always directly involved in content moderation, data annotators there often work with challenging datasets that can take a toll on their mental well-being.

    Reality often doesn’t match what was advertised. Even if data workers in Syria and Syrian refugees in Lebanon aren’t moderating content, their work often intersects with digital remnants of the conflict they’ve experienced or fled, adding a layer of emotional strain to their already demanding jobs.

    The widespread use of Non-Disclosure Agreements (NDAs) is yet another layer in the uneven power dynamic involving such vulnerable individuals. These agreements, required as part of workers’ employment contracts, silence workers and keep their struggles hidden from public view.

    The implied threat of these NDAs often extends beyond the period of employment, casting a long shadow over the workers’ lives even after they leave their jobs. Many workers who spoke to us insisted on anonymity out of fear of legal repercussions.

    These workers, in places like Bogotá, Berlin, Caracas and Damascus, reported feeling abandoned by the companies profiting off their labor. The so-called “wellness programs” offered by Sama were often ill-equipped to address the deep-seated trauma these workers were experiencing, employees told me.

    “We were the janitors of the internet. We cleaned up the mess so everyone else can enjoy a sanitized online world.”

    — Botlhokwa Ranta

    Their stories make clear that behind the sleek facade of our digital world lies a hidden workforce that bears immense emotional burdens, so we don’t have to. Their experiences raise urgent questions about the ethical implications of data work and the human cost of maintaining our digital infrastructure. The global nature of this issue underscores a troubling truth: The exploitation of data workers is not a bug, it’s a systemic feature of the industry.

    It’s a global web of struggle, spun by tech giants and maintained by the silence of those trapped within it, as documented by Mophat Okinyi and Richard Mathenge, former content moderators and now co-researchers in our Data Workers Inquiry project. The two have seen these patterns repeat across a slew of different companies in multiple countries. Their experiences, both as workers and now as advocates, underscore the global nature of this exploitation.

    The Trauma Behind the Screen

    Before I traveled to Kenya, I thought I understood the challenges data workers face from my conversations with some of them online. However, upon arrival, I was confronted with stories of individual and institutional depravity that left me with secondary trauma and nightmares for weeks. For the data workers themselves, trauma manifests in two primary ways: direct trauma from the job itself, and systemic issues that compound it.

    1. Direct Trauma 

    Every day, content moderators are forced to confront the darkest corners of humanity. They wade through a toxic swamp of violence, hate speech, sexual abuse and graphic imagery. 

    This constant exposure to disturbing content takes a toll. “It goes beyond what makes people human,” Kings told me. “It’s like being forced to drink poison every day, knowing it’s killing you, but you can’t stop because it’s your job.” The images and videos linger after work, haunting their dreams and infiltrating their personal lives.

    Many moderators report symptoms of post-traumatic stress and vicarious trauma: nightmares, flashbacks and severe anxiety are common. Some develop a deep-seated mistrust of the world around them, forever changed by the constant exposure to human cruelty. As one worker told me, “I came into this job believing in the basic goodness of people. Now, I’m not sure I believe in anything anymore. If people can do this, then what’s there to believe?”

    When the shift ends, trauma follows these workers home. For Kings and Okinyi, like so many others, their relationships crumbled under the weight of what they saw but could not speak of. Children grow up with emotionally distant parents, partners become estranged, and the worker is left isolated in their pain.

    Many moderators report a fundamental shift in their worldview. They become hypervigilant, seeing potential threats everywhere. Okinyi mentioned how one of his former colleagues had to move from the city to the less crowded countryside due to paranoia over potential outbursts of violence. In a zine she created for the Data Workers Inquiry about Sama’s female content moderators, one of Ranta’s interviewees spoke of how the job made her constantly question her worth and ability to mother her children. 

    2. Systemic Issues

    Beyond the immediate trauma of the content itself, moderators face a barrage of systemic issues that exacerbate their suffering:

    • Job Insecurity: Many moderators, especially those in precarious living situations like refugees or economic migrants, live in constant fear of losing their jobs. This fear often prevents them from speaking out about their working conditions or seeking help. Companies often exploit this vulnerability.
    • Lack of Mental Health Support: While companies tout their wellness programs, the reality falls far short. As Kings experienced, the counseling provided is often inadequate, with therapists ill-equipped to handle the unique trauma of content moderation. Sessions are often brief and fail to address deeper, underlying trauma.
    • Unrealistic Performance Metrics: Moderators often must review hundreds of pieces of content per hour. This relentless pace leaves no time to process the disturbing material they’ve seen, forcing them to bottle up their emotions. The focus on quantity over quality not only affects the accuracy of moderation but also exacerbates the psychological toll of the work. As Abrha told me: “Imagine being expected to watch a video of someone being killed, and then immediately move on to the next post. There’s no time to breathe, let alone process what we’ve seen.”
    • Constant Surveillance: As if the content itself weren’t stressful enough, moderators are monitored constantly. Practically every decision and every second of a shift is scrutinized, adding another layer of pressure to an already overwhelming job. Supervisors track workers through computer monitoring software, cameras and, in some cases, physical observation — a surveillance that extends to bathroom breaks, idle time between tasks and even facial expressions, which are watched to ensure workers maintain a level of detachment or “professionalism” while reviewing disturbing content. As a result, workers told me they felt they couldn’t react naturally to what they were viewing. Workers were given a single hour of break time daily for all their extraneous needs — eating, stretching, the bathroom — and any additional time spent on those or other non-work activities was scrutinized and added to their shifts. Abrha also mentioned that workers had to put their phones in lockers, further isolating them and limiting their ability to communicate with the outside world during their shifts.

    “The exploitation of data workers is not a bug, it’s a systemic feature of the industry.”

    And the ripples extend beyond the family: Friends drift away, unable to relate to the moderator’s new, darker perspective on life; social interactions become strained, as workers struggle to engage in “normal” conversations after spending their days immersed in the worst of human behavior.

    In essence, the trauma of content moderation reshapes entire family dynamics and social networks, creating a cycle of isolation and suffering that extends far beyond the individual.

    Traumatizing Humans To Create “Intelligent” Systems

    Perhaps the cruelest irony is that we’re traumatizing people to create the illusion of machine intelligence. The trauma inflicted on human moderators is justified by the promise of future AI systems that will not require human intervention. Yet building those systems requires ever more human labor, and often the sacrifice of workers’ mental health.

    Moreover, the focus on AI development often diverts resources and attention from improving conditions for human workers. Companies invest billions in machine learning algorithms while neglecting the basic mental health needs of their human moderators.

    The AI illusion distances users from the reality of content moderation, much like factory farming distances us from the treatment of egg-laying chickens. This collective willful ignorance allows exploitation to continue unchecked. The AI narrative is a smokescreen that obscures a deeply unethical labor practice that trades human well-being for a facade of technological progress.

    Digital Workers Of The World Rise!

    In the face of exploitation and trauma, data workers have not been passive. Across the globe, workers have attempted to unionize, but their efforts have often been hindered by various actors. In Kenya, workers formed the African Content Moderators Union, an ambitious effort to unite workers from different African countries.

    Mathenge, who is also part of the union’s leadership, told me he believes he was dismissed from his role as a team lead due to his union activities. This retaliation sent a chilling message to other workers who were considering organizing.

    The struggle for workers’ rights recently gained significant legal traction. On Sept. 20, a Kenyan court ruled that Meta could be sued there over the dismissal of dozens of content moderators by its contractor, Sama. The court upheld earlier rulings that Meta could face trial over the dismissals and could be sued in Kenya over alleged poor working conditions.

    The latest ruling has potentially far-reaching implications for how the tech giant works with its content moderators globally. It also marks a significant step forward in the ongoing battle for fair treatment and recognition of data workers’ rights.

    The obstacles extend beyond the company level. Organizations employ union-busting tactics, often firing workers who agitate for unionization, Mathenge said. In conversations with workers, journalists and civil society officials in the Kenyan digital labor space, I heard whispers of senior government officials demanding bribes to formally register the union, adding another layer of complexity to the unionization process.

    Perhaps most bizarrely, according to an official from the youth-led civic organization Siasa Place, when workers in Kenya attempted to form their own union, they were instead told to join the postal and telecommunication union, a suggestion that ignores the vast differences between these industries and the unique challenges faced by today’s data workers.

    Despite these setbacks, workers have continued to find innovative ways to organize and advocate for their rights. Okinyi, together with Mathenge and Kings, formed the Techworker Community Africa, a non-governmental organization focused on lobbying against harmful tech practices like labor exploitation.

    Other organizations, like Siasa Place, have also stepped up to help the workers, and digital rights lawyers like Mercy Mutemi have petitioned the Kenyan parliament to investigate working conditions at AI firms.

    A Path To Ethical AI & Fair Labor Practices

    Industry-wide Mental Health Protocols

    We need a comprehensive, industry-wide approach to mental health support. Based on my research and conversations with workers, I propose a multi-faceted approach not offered by existing support systems.

    Many existing company programs are superficial “wellness programs” that fail to address the deep-seated trauma experienced by data workers. They may include occasional group sessions or access to general counseling services, but these are typically insufficient and not tailored to the nature of the work.

    My proposed approach includes mandatory, regular counseling sessions with therapists trained specifically in trauma related to data work. Additionally, companies should implement regular mental health check-ins, provide access to 24/7 crisis support, and offer long-term therapy services, which are largely absent in current setups.

    Crucially, these services must be culturally competent, recognizing the diverse backgrounds of data workers globally. This is a significant departure from the current one-size-fits-all approach that often fails to consider the cultural contexts of workers in places like Nairobi, Manila or Bogotá. The proposed system would offer support in workers’ native languages and be sensitive to cultural nuances surrounding mental health — aspects sorely lacking in many existing programs.

    “Companies invest billions in machine learning algorithms while neglecting the basic mental health needs of their human moderators.”

    Moreover, unlike the current system where mental health support often ends with employment, this new approach would extend support beyond the tenure of the job, acknowledging the long-lasting impacts of this work. This comprehensive, long-term and culturally-sensitive approach represents a fundamental shift from the current tokenistic and often ineffective mental health support offered to data workers.

    “Trauma Cap” Implementation

    Just as we have radiation exposure limits for nuclear workers, we need trauma exposure limits for data workers. This “trauma cap” would set strict limits on the amount and type of disturbing content a worker can be exposed to within a given timeframe.

    Implementation could involve rotating workers between high-impact and low-impact content, mandatory breaks after exposure to particularly traumatic material, limits on consecutive days working with disturbing content and the allocation of annual “trauma leave” for mental health recovery.

    We need a system that tracks not just the quantity of content reviewed but also its emotional impact. A video of extreme violence, for example, should count more toward a worker’s cap than a spam post.
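    The weighted-cap idea could be sketched in a few lines of code. The categories, weights and cap value below are purely hypothetical illustrations, not figures from any real moderation system:

```python
from dataclasses import dataclass

# Hypothetical severity weights: harsher content consumes more of a
# worker's daily "trauma budget" than routine material does.
SEVERITY_WEIGHTS = {
    "spam": 0.1,
    "hate_speech": 1.0,
    "graphic_violence": 5.0,
}

@dataclass
class TraumaCapTracker:
    """Tracks one worker's weighted exposure against a daily cap (sketch)."""
    daily_cap: float = 50.0
    exposure: float = 0.0

    def can_review(self, category: str) -> bool:
        # The next item is allowed only if it still fits under the cap.
        return self.exposure + SEVERITY_WEIGHTS[category] <= self.daily_cap

    def record(self, category: str) -> None:
        self.exposure += SEVERITY_WEIGHTS[category]

tracker = TraumaCapTracker(daily_cap=10.0)
tracker.record("graphic_violence")  # consumes 5.0 of the 10.0 cap
tracker.record("spam")              # consumes only 0.1
```

    Under a scheme like this, a queue system could route workers toward low-impact items once `can_review` returns false for severe categories, implementing the rotation between high-impact and low-impact content described above.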

    Independent Oversight Body

    Self-regulation by tech companies has proven insufficient; it’s essentially entrusting a jackal with the chicken coop. We need an independent body with the power to audit, enforce standards and impose penalties when necessary.

    This oversight body should consist of ethicists, former data workers, mental health professionals and human rights experts. It should have the authority to conduct unannounced inspections of data work facilities, set and enforce industry-wide standards for working conditions and mental health support, and provide a safe channel for workers to report violations without fear of retaliation. Crucially, any oversight body must include the voices of current and former data workers who truly understand the challenges of such work.

    The Role Of Consumers & The Public In Demanding Change

    While industry reforms and regulatory oversight are crucial, the power of public pressure cannot be overstated. As consumers of digital content and participants in online spaces, we all have a role to play in demanding more ethical practices. This involves informed consumption, educating ourselves about the human cost behind content moderation.

    Before sharing content, especially potentially disturbing material, we should consider the moderator who might have to review it. This awareness might influence our decisions about what we post or share. We must demand transparency from tech companies about their content moderation practices.

    We can use companies’ own platforms to hold them accountable by publicly asking questions about worker conditions and mental health support. We should support companies that prioritize ethical labor practices and consider boycotting those that don’t.

    Moreover, as AI tools become increasingly prevalent in our digital landscape, we must also educate ourselves about the hidden costs behind these seemingly miraculous technologies. Tools like ChatGPT and DALL-E are the product of immense human labor and ethical compromises.

    These AI systems are built on the backs of countless invisible individuals: content moderators exposed to traumatic material, data labelers working long hours for low wages and artists whose creative works have been exploited without consent or compensation. In addition to the staggering human cost, the environmental toll of these technologies is alarming and often overlooked.

    From the massive energy consumption of data centers to the mountains of electronic waste generated, the ecological footprint of AI is a critical issue that demands our immediate attention and action. By understanding these realities, we can make more informed choices about the AI tools we use and advocate for fair compensation and recognition of the human labor that makes them possible.

    Political action is equally important. We need to advocate for legislation that protects data workers, urge our political representatives to regulate the tech industry, and support political candidates who prioritize digital ethics and fair labor practices.

    It’s crucial to use our own platforms to spread awareness about the realities of data work, to share the stories of people like Abrha, Kings and Ranta, and to encourage discussions about the ethical implications of our digital consumption.

    We can follow and support organizations like the African Content Moderators Union and NGOs focused on digital labor rights and amplify the voices of data workers speaking out about their experiences to help bring about meaningful change.

    Most people have no idea what goes on behind their sanitized social media feeds and the AI tools they use daily. If they knew, I believe they would demand change. Public support is necessary to ensure the voices of data workers are heard.

    By implementing these solutions and harnessing the power of public demand, we can work toward a future where the digital world we enjoy doesn’t come at the cost of human dignity and mental health. It’s a challenging path, but one we must traverse if we are to create a truly ethical digital ecosystem.

    By Adio Dinika - From NoemaMag
