The Human Cost Of Our AI-Driven Future

    Behind AI’s rapid advance and our sanitized feeds, an invisible global workforce endures unimaginable trauma.

    Velvet Spectrum for Noema Magazine

    A blurred screen flashes before our eyes, accompanied by a deceptively innocuous “sensitive content” message with a crossed-out eye emoji. The warning’s soft design and playful icon belie the gravity of what lies beneath. With a casual flick of our fingers, we scroll past, our feeds refreshing with cat videos and vacation photos. But in the shadows of our digital utopia, a different reality unfolds.

    In cramped, poorly lit warehouses around the world, an army of invisible workers hunches over flickering screens. Their eyes strain, fingers hovering over keyboards, as they confront humanity’s darkest impulses — some darker than their wildest nightmares. They cannot look away. They cannot scroll past. For these workers, there is no trigger warning.

    Tech giants trumpet the power of AI in content moderation, painting pictures of omniscient algorithms keeping our digital spaces safe. They suggest a utopian vision of machines tirelessly sifting through digital detritus, protecting us from the worst of the web.

    But this is a comforting lie.

    The reality is far more human and far more troubling. This narrative serves multiple purposes: it assuages user concerns about online safety, justifies the enormous profits these companies reap and deflects responsibility — after all, how can you blame an algorithm?

    However, current AI systems are nowhere near capable of understanding the nuances of human communication, let alone making complex ethical judgments about content. Sarcasm, cultural context and subtle forms of hate speech often slip through the cracks of even the most sophisticated algorithms.

    And while automated content moderation can, to a degree, be implemented for more mainstream languages, content in low-resource languages typically requires recruiting moderators from the countries where those languages are spoken.

    Behind almost every AI decision, a human is tasked with making the final call and bearing the burden of judgment — not some silicon-based savior. AI is often a crude first filter. Take Amazon’s supposedly automated stores: The Information reported that, instead of advanced AI systems, Amazon relied on around 1,000 workers, primarily based in India, to manually track customers and record their purchases.

    Amazon told the AP and others that it did hire workers to watch videos validating people’s purchases, but denied employing 1,000 of them or that workers monitored shoppers live. Similarly, Facebook’s “AI-powered” M assistant was more human than software. And so, the illusion of AI capability is often maintained at the cost of hidden human labor.

    “We were the janitors of the internet,” Botlhokwa Ranta, 29, a former content moderator from South Africa now living in Nairobi, Kenya, told me two years after her Sama contract was terminated. Speaking from her home, her voice was heavy as she continued. “We cleaned up the mess so everyone else can enjoy a sanitized online world.”

    And so, while we sleep, many toil. While we share, these workers shield. While we connect, they confront the disconnect between our curated online experience and the reality of raw, unfiltered human nature.

    The glossy veneer of the tech industry conceals a raw, human reality that spans the globe. From the outskirts of Nairobi to the crowded apartments of Manila, from Syrian refugee communities in Lebanon to the immigrant communities in Germany and the call centers of Casablanca, a vast network of unseen workers powers our digital world. The stories of these workers are a tapestry of trauma, exploitation and resilience, one that reveals the true cost of our AI-driven future.

    We may marvel at the chatbots and automated systems that Sam Altman and his ilk extol, but that marveling belies an urgent question below the surface: Will our godlike AI systems serve as merely a smokescreen, concealing a harrowing human reality?

    In our relentless pursuit of technological advancement, we must ask: What price are we willing to pay for our digital convenience? And in this race towards an automated future, are we leaving our humanity in the dust?

    Abrha’s Story

    In February 2021, Abrha’s world shattered as his town in Tigray came under fire from both Ethiopian and Eritrean forces in the Tigray war, one of the deadliest conflicts of this century, which a report by the U.S.-based New Lines Institute concluded amounted to genocide.

    With just a small backpack and whatever cash he could grab, Abrha, then 26, fled to Nairobi, Kenya, leaving behind a thriving business, family and friends who couldn’t escape. As Tigray suffered under a more than two-year internet shutdown imposed by Ethiopia’s government, he spent months in agonizing uncertainty about his family’s fate.

    “Will our godlike AI systems serve as merely a smokescreen, concealing a harrowing human reality?”

    Then, in a cruel twist of irony, Abrha was recruited by the Kenyan branch of Sama, a San Francisco-based company that presents itself as an ethical AI training data provider. The company needed people fluent in Tigrinya and Amharic, the languages of the conflict he had just fled, to moderate content originating largely from that same conflict.

    Five days a week, eight hours a day, Abrha sat in the Sama warehouse in Nairobi, moderating content from the very conflict he had escaped — sometimes even footage of a bombing in his hometown. Each day brought a deluge of hate speech directed at Tigrayans, and the dread that the next dead body might be his father, the next rape victim his sister.

    An ethical dilemma also weighed heavily on him: How could he remain neutral in a conflict where he and his people were the victims? How could he label retaliatory content generated by his people as hate speech? The pressure became unbearable.

    Though Abrha once abhorred smoking, he became a chain smoker who always had a cigarette in hand as he navigated this digital minefield of trauma — each puff a futile attempt to soothe the pain of his people’s suffering.

    The horror of his work reached a devastating peak when Abrha came across his cousin’s body while moderating content. It was a brutal reminder of the very real and personal stakes of the conflict he was being forced to witness daily through a computer screen.

    After he and other content moderators had their contracts terminated by Sama, Abrha found himself in a dire situation. Unable to secure another job in Nairobi, he was left to grapple with his trauma alone, without the support or resources he desperately needed. The weight of his experiences as a content moderator, coupled with the lingering effects of fleeing conflict, took a heavy toll on his mental health and financial stability.

    Despite the situation in Tigray remaining precarious in the aftermath of the war, Abrha felt he had no choice but to return to his homeland. He made the difficult journey back a few months ago, hoping to rebuild his life from the ashes of conflict and exploitation. His story serves as a stark reminder of the long-lasting impact of content moderation work and the vulnerability of those who perform it, often far from home and support systems.

    Kings’ Nightmarish Reality

    Growing up in Kibera, one of the world’s largest slums, Kings, 34, who insisted Noema use only his first name so he could freely discuss personal health matters, dreamed of a better life for his young family. Like many young people raised in the Nairobi slum, he was unemployed.

    When Sama came calling, Kings saw it as his chance to break into the tech world. Starting as a data annotator, labeling and categorizing data to train AI systems, he was thrilled despite the small pay. When the company offered to promote him to content moderator with a slight pay increase, he jumped at the opportunity, unaware of what the decision would entail.

    Kings soon found himself confronting content that haunted him day and night. The worst was what they coded as CSAM, or child sexual abuse material. Day after day, he sifted through texts, pictures and videos vividly depicting the violation of children. “I saw videos of children’s vaginas tearing from the abuse,” he recounted, his voice hollow. “Each time I closed my eyes at home, that’s all I could see.”

    The trauma infected every aspect of Kings’ life. At the age of 32, he had trouble being intimate with his wife; images of abused children plagued his mind. The company’s mental health support was grossly inadequate, Kings said. Counselors were seemingly ill-equipped to handle the depth of his trauma.

    Eventually, the strain became too much. Kings’ wife, unable to cope with the sexual deprivation and the changes in his behavior, left him. By the time Kings left Sama, he was a shell of his former self — broken both mentally and financially — his dreams of a better life shattered by a job he thought would be his salvation.

    Losing Faith In Humanity

    Ranta’s story begins in the small South African township of Diepkloof, where life moves in predictable cycles. A mother at 21, she was 27 years old when we spoke, and she reflected on the harsh reality faced by many young women in her community: six out of ten girls become pregnant by 21, entering a world where job prospects are already scarce and single motherhood makes them even more elusive.

    “Behind almost every AI decision, a human is tasked with making the final call and bearing the burden of judgment — not some silicon-based savior.”

    When Sama came recruiting, promising a better life for her and her child, Ranta saw it as her ticket to a brighter future. She applied and soon found herself in Nairobi, far from everything familiar. The promises quickly unraveled upon her arrival. Support for reuniting with her child, whom she had left behind in South Africa, never materialized as promised.

    When she inquired, company representatives told her that they could no longer cover the full cost as initially promised and offered only partial support, to be deducted from her pay. Attempts to get official comment from Sama were unsuccessful; unofficial sources cited the company’s ongoing legal proceedings with workers as the reason.

    When Ranta’s sister died, she said her boss gave her a few days off but wouldn’t let her switch to less traumatic content streams when she returned to moderating content — even though there was an opening. It was as if they expected her and other workers to operate like machines, capable of switching off one program and booting up another at will.

    Things came to a head during a complicated pregnancy. She wasn’t allowed to stay on bedrest as ordered by her doctor, and then just four months after giving birth to her second daughter, the infant was hospitalized.

    She then learned that the company had stopped making health insurance contributions shortly after she started working, despite continued deductions from her paycheck. Now she was saddled with bills she couldn’t afford to pay. 

    Ranta’s role involved moderating content related to female sexual abuse, xenophobia, hate speech, racism and domestic violence, mostly from her native South Africa and Nigeria. While she appreciated the importance of her job, she lamented the lack of adequate psychological counseling, training and support.

     Ranta found herself losing faith in humanity. “I saw things that I never thought possible,” she told me. “How can human beings claim to be the intelligent species after what I’ve seen?”

    Sama’s CEO has expressed regret over signing the content moderation contract with Meta. A Meta spokesperson said they require all partner companies to provide “24/7 on-site support with trained practitioners, an on-call service, and access to private healthcare from the first day of employment.”

    The representative also said it offered “technical solutions to limit exposure to graphic material as much as possible.” However, the experiences shared by workers like Abrha, Kings and Ranta paint a starkly different picture, suggesting a significant gap between Meta’s stated policies and the lived realities of content moderators.

    Global Perspectives: Similar Struggles Across Borders

    The experiences of Abrha, Kings and Ranta are not isolated incidents. In Kenya alone, I spoke to more than 20 workers who shared similar stories. Across the globe, in countries like Germany, Venezuela, Colombia, Syria and Lebanon, data workers we spoke to as part of our Data Workers Inquiry project told us they faced similar challenges.

    In Germany, despite all its programs to help new arrivals, immigrants with uncertain status still end up in roles like Abrha’s, reviewing content from their home countries. These workers’ precarious visa situations add a layer of vulnerability. Many told us that despite facing exploitation, they felt unable to speak out publicly. Because their employment is tied to their visas, the risk of being fired and deported looms.

    In Venezuela and Colombia, economic instability drives many to seek work in the data industry. While not always directly involved in content moderation, many data annotators often work with challenging datasets that can negatively impact their mental well-being. 

    Reality often doesn’t match what was advertised. Even if data workers in Syria and Syrian refugees in Lebanon aren’t moderating content, their work often intersects with digital remnants of the conflict they’ve experienced or fled, adding a layer of emotional strain to their already demanding jobs.

    The widespread use of non-disclosure agreements (NDAs) adds yet another layer to the uneven power dynamic involving such vulnerable individuals. These agreements, required as part of workers’ employment contracts, silence workers and keep their struggles hidden from public view.

    The implied threat of these NDAs often extends beyond the period of employment, casting a long shadow over the workers’ lives even after they leave their jobs. Many workers who spoke to us insisted on anonymity out of fear of legal repercussions.

    These workers, in places like Bogotá, Berlin, Caracas and Damascus, reported feeling abandoned by the companies profiting off their labor. The so-called “wellness programs” offered by Sama were often ill-equipped to address the deep-seated trauma these workers were experiencing, employees told me.

    “We were the janitors of the internet. We cleaned up the mess so everyone else can enjoy a sanitized online world.”

    — Botlhokwa Ranta

    Their stories make clear that behind the sleek facade of our digital world lies a hidden workforce that bears immense emotional burdens, so we don’t have to. Their experiences raise urgent questions about the ethical implications of data work and the human cost of maintaining our digital infrastructure. The global nature of this issue underscores a troubling truth: The exploitation of data workers is not a bug, it’s a systemic feature of the industry.

    It’s a global web of struggle, spun by tech giants and maintained by the silence of those trapped within it, as documented by Mophat Okinyi and Richard Mathenge, former content moderators and now co-researchers in our Data Workers Inquiry project. The two have seen these patterns repeat across a slew of different companies in multiple countries. Their experiences, both as workers and now as advocates, underscore the global nature of this exploitation.

    The Trauma Behind the Screen

    Before I traveled to Kenya, I thought I understood the challenges data workers face through my online conversations with some of them. However, upon arrival, I was confronted with stories of individual and institutional depravity that left me with secondary trauma and nightmares for weeks. But for the data workers themselves, trauma manifests in two primary ways: direct trauma from the job itself and systemic issues that compound it.

    1. Direct Trauma 

    Every day, content moderators are forced to confront the darkest corners of humanity. They wade through a toxic swamp of violence, hate speech, sexual abuse and graphic imagery. 

    This constant exposure to disturbing content takes a toll. “It goes beyond what makes people human,” Kings told me. “It’s like being forced to drink poison every day, knowing it’s killing you, but you can’t stop because it’s your job.” The images and videos linger after work, haunting their dreams and infiltrating their personal lives.

    Many moderators report symptoms of post-traumatic stress and vicarious trauma: nightmares, flashbacks and severe anxiety are common. Some develop a deep-seated mistrust of the world around them, forever changed by the constant exposure to human cruelty. As one worker told me, “I came into this job believing in the basic goodness of people. Now, I’m not sure I believe in anything anymore. If people can do this, then what’s there to believe?”

    When the shift ends, trauma follows these workers home. For Kings and Okinyi, like so many others, their relationships crumbled under the weight of what they saw but could not speak of. Children grow up with emotionally distant parents, partners become estranged, and the worker is left isolated in their pain.

    Many moderators report a fundamental shift in their worldview. They become hypervigilant, seeing potential threats everywhere. Okinyi mentioned how one of his former colleagues had to move from the city to the less crowded countryside due to paranoia over potential outbursts of violence. In a zine Ranta created for the Data Workers Inquiry about Sama’s female content moderators, one of her interviewees spoke of how the job made her constantly question her worth and her ability to mother her children.

    2. Systemic Issues

    Beyond the immediate trauma of the content itself, moderators face a barrage of systemic issues that exacerbate their suffering:

    • Job Insecurity: Many moderators, especially those in precarious living situations like refugees or economic migrants, live in constant fear of losing their jobs. This fear often prevents them from speaking out about their working conditions or seeking help. Companies often exploit this vulnerability.
    • Lack of Mental Health Support: While companies tout their wellness programs, the reality falls far short. As Kings experienced, the counseling provided is often inadequate, with therapists ill-equipped to handle the unique trauma of content moderation. Sessions are often brief and fail to address deep-seated trauma.
    • Unrealistic Performance Metrics: Moderators often must review hundreds of pieces of content per hour. This relentless pace leaves no time to process the disturbing material they’ve seen, forcing them to bottle up their emotions. The focus on quantity over quality not only affects the accuracy of moderation but also exacerbates the psychological toll of the work. As Abrha told me: “Imagine being expected to watch a video of someone being killed, and then immediately move on to the next post. There’s no time to breathe, let alone process what we’ve seen.”
    • Constant Surveillance: As if the content itself weren’t stressful enough, moderators are constantly monitored. Practically every decision and every second of their shift is scrutinized, adding another layer of pressure to an already overwhelming job. The surveillance extends to bathroom breaks, idle time between tasks and even facial expressions while reviewing content. Supervisors monitor workers through computer tracking software, cameras and, in some cases, physical observation, watching their faces to gauge reactions and ensure they maintain a level of detachment or “professionalism” while reviewing disturbing content. As a result, workers told me, they felt they couldn’t even react naturally to what they were viewing. Workers were given an hour of break time daily for eating, stretching and bathroom visits; any additional time spent on those or other non-work activities was scrutinized and added to their shifts. Abrha also mentioned that workers had to put their phones in lockers, further isolating them and limiting their ability to communicate with the outside world during their shifts.

    “The exploitation of data workers is not a bug, it’s a systemic feature of the industry.”

    And the ripples extend beyond the family: Friends drift away, unable to relate to the moderator’s new, darker perspective on life; social interactions become strained, as workers struggle to engage in “normal” conversations after spending their days immersed in the worst of human behavior.

    In essence, the trauma of content moderation reshapes entire family dynamics and social networks, creating a cycle of isolation and suffering that extends far beyond the individual.

    Traumatizing Humans To Create “Intelligent” Systems

    Perhaps the cruelest irony is that we’re traumatizing people to create the illusion of machine intelligence. The trauma inflicted on human moderators is justified by the promise of future AI systems that will not require human intervention. Yet their development demands ever more human labor and, often, the sacrifice of workers’ mental health.

    Moreover, the focus on AI development often diverts resources and attention from improving conditions for human workers. Companies invest billions in machine learning algorithms while neglecting the basic mental health needs of their human moderators.

    The AI illusion distances users from the reality of content moderation, much like factory farming distances us from the treatment of egg-laying chickens. This collective willful ignorance allows exploitation to continue unchecked. The AI narrative is a smokescreen that obscures a deeply unethical labor practice that trades human well-being for a facade of technological progress.

    Digital Workers Of The World Rise!

    In the face of exploitation and trauma, data workers have not been passive. Across the globe, workers have attempted to unionize, but their efforts have often been hindered by various actors. In Kenya, workers formed the African Content Moderators Union, an ambitious effort to unite workers from different African countries.

    Mathenge, who is also part of the union’s leadership, told me he believes he was dismissed from his role as a team lead due to his union activities. This retaliation sent a chilling message to other workers who were considering organizing.

    The struggle for workers’ rights recently gained significant legal traction. On Sept. 20, a Kenyan court ruled that Meta could be sued in Kenya over the dismissal of dozens of content moderators by its contractor, Sama. The court upheld earlier rulings that Meta could face trial over these dismissals and could be sued there over alleged poor working conditions.

    The latest ruling has potentially far-reaching implications for how the tech giant works with its content moderators globally. It also marks a significant step forward in the ongoing battle for fair treatment and recognition of data workers’ rights.

    The obstacles extend beyond the company level. Organizations employ union-busting tactics, often firing workers who agitate for unionization, Mathenge said. In conversations with workers, journalists and civil society officials in the Kenyan digital labor space, I heard whispers that senior government officials had demanded bribes to formally register the union, adding another layer of complexity to the unionization process.

    Perhaps most bizarrely, according to an official from the youth-led civic organization Siasa Place, when workers in Kenya attempted to form their own union, they were instead told to join the postal and telecommunication union, a suggestion that ignores the vast differences between these industries and the unique challenges faced by today’s data workers.

    Despite these setbacks, workers have continued to find innovative ways to organize and advocate for their rights. Okinyi, together with Mathenge and Kings, formed the Techworker Community Africa, a non-governmental organization focused on lobbying against harmful tech practices like labor exploitation.

    Other organizations, like Siasa Place, have also stepped up to help the workers, and digital rights lawyers like Mercy Mutemi have petitioned the Kenyan parliament to investigate the working conditions at AI firms.

    A Path To Ethical AI & Fair Labor Practices

    Industry-wide Mental Health Protocols

    We need a comprehensive, industry-wide approach to mental health support. Based on my research and conversations with workers, I propose a multi-faceted approach not offered by existing support systems.

    Many existing company programs are often superficial “wellness programs” that fail to address the deep-seated trauma experienced by data workers. These may include occasional group sessions or access to general counseling services, but they are typically insufficient and not tailored to the specific nature of the work.

    My proposed approach includes mandatory, regular counseling sessions with therapists trained specifically in trauma related to data work. Additionally, companies should implement regular mental health check-ins, provide access to 24/7 crisis support, and offer long-term therapy services, which are largely absent in current setups.

    Crucially, these services must be culturally competent, recognizing the diverse backgrounds of data workers globally. This is a significant departure from the current one-size-fits-all approach that often fails to consider the cultural contexts of workers in places like Nairobi, Manila or Bogotá. The proposed system would offer support in workers’ native languages and be sensitive to cultural nuances surrounding mental health — aspects sorely lacking in many existing programs.

    “Companies invest billions in machine learning algorithms while neglecting the basic mental health needs of their human moderators.”

    Moreover, unlike the current system where mental health support often ends with employment, this new approach would extend support beyond the tenure of the job, acknowledging the long-lasting impacts of this work. This comprehensive, long-term and culturally-sensitive approach represents a fundamental shift from the current tokenistic and often ineffective mental health support offered to data workers.

    “Trauma Cap” Implementation

    Just as we have radiation exposure limits for nuclear workers, we need trauma exposure limits for data workers. This “trauma cap” would set strict limits on the amount and type of disturbing content a worker can be exposed to within a given timeframe.

    Implementation could involve rotating workers between high-impact and low-impact content, mandatory breaks after exposure to particularly traumatic material, limits on consecutive days working with disturbing content and the allocation of annual “trauma leave” for mental health recovery.

    We need a system that tracks not just the quantity of content reviewed but also its emotional impact. For example, a video of extreme violence should count more toward a worker’s cap than a spam post.
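    The weighted tracking described above could take many forms; as a minimal sketch, the severity categories, weights and the daily cap below are illustrative assumptions, not an established standard:

```python
from dataclasses import dataclass

# Hypothetical severity weights: the categories and values are
# illustrative assumptions, not an industry standard.
SEVERITY_WEIGHTS = {
    "spam": 1,
    "hate_speech": 5,
    "graphic_violence": 20,
}

@dataclass
class TraumaCapTracker:
    """Tracks a moderator's weighted exposure against a daily cap."""
    daily_cap: int = 100  # illustrative threshold
    exposure: int = 0

    def review(self, category: str) -> bool:
        """Record one reviewed item. Returns False when the cap would be
        exceeded, signaling rotation to a low-impact queue."""
        weight = SEVERITY_WEIGHTS.get(category, 1)
        if self.exposure + weight > self.daily_cap:
            return False
        self.exposure += weight
        return True

tracker = TraumaCapTracker(daily_cap=25)
print(tracker.review("graphic_violence"))  # True: exposure now 20
print(tracker.review("graphic_violence"))  # False: 40 would exceed the cap
print(tracker.review("spam"))              # True: exposure now 21
```

    The key design point is that the cap is measured in weighted exposure rather than item count, so one violent video consumes as much of a worker’s daily allowance as twenty spam posts.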

    Independent Oversight Body

    Self-regulation by tech companies has proven insufficient; it’s essentially entrusting a jackal with the chicken coop. We need an independent body with the power to audit, enforce standards and impose penalties when necessary.

    This oversight body should consist of ethicists, former data workers, mental health professionals and human rights experts. It should have the authority to conduct unannounced inspections of data work facilities, set and enforce industry-wide standards for working conditions and mental health support, and provide a safe channel for workers to report violations without fear of retaliation. Crucially, any oversight body must include the voices of current and former data workers who truly understand the challenges of such work.

    The Role Of Consumers & The Public In Demanding Change

    While industry reforms and regulatory oversight are crucial, the power of public pressure cannot be overstated. As consumers of digital content and participants in online spaces, we all have a role to play in demanding more ethical practices. This involves informed consumption, educating ourselves about the human cost behind content moderation.

    Before sharing content, especially potentially disturbing material, we should consider the moderator who might have to review it. This awareness might influence our decisions about what we post or share. We must demand transparency from tech companies about their content moderation practices.

    We can use companies’ own platforms to hold them accountable by publicly asking questions about worker conditions and mental health support. We should support companies that prioritize ethical labor practices and consider boycotting those that don’t.

    Moreover, as AI tools become increasingly prevalent in our digital landscape, we must also educate ourselves about the hidden costs behind these seemingly miraculous technologies. Tools like ChatGPT and DALL-E are the product of immense human labor and ethical compromises.

    These AI systems are built on the backs of countless invisible individuals: content moderators exposed to traumatic material, data labelers working long hours for low wages and artists whose creative works have been exploited without consent or compensation. In addition to the staggering human cost, the environmental toll of these technologies is alarming and often overlooked.

    From the massive energy consumption of data centers to the mountains of electronic waste generated, the ecological footprint of AI is a critical issue that demands our immediate attention and action. By understanding these realities, we can make more informed choices about the AI tools we use and advocate for fair compensation and recognition of the human labor that makes them possible.

    Political action is equally important. We need to advocate for legislation that protects data workers, urge our political representatives to regulate the tech industry, and support political candidates who prioritize digital ethics and fair labor practices.

    It’s crucial to use our platforms to spread awareness about the realities of data work, sharing the stories of people like Abrha, Kings and Ranta and encouraging discussions about the ethical implications of our digital consumption.

    We can follow and support organizations like the African Content Moderators Union and NGOs focused on digital labor rights and amplify the voices of data workers speaking out about their experiences to help bring about meaningful change.

    Most people have no idea what goes on behind their sanitized social media feeds and the AI tools they use daily. If they knew, I believe they would demand change. Public support is necessary to ensure the voices of data workers are heard.

    By implementing these solutions and harnessing the power of public demand, we can work toward a future where the digital world we enjoy doesn’t come at the cost of human dignity and mental health. It’s a challenging path, but one we must traverse if we are to create a truly ethical digital ecosystem.

    By Adio Dinika - From NoemaMag
