What’s trending?

Two months after celebrating his 100th birthday and just short of his goal of translating the collected works of Shakespeare, renowned translator and professor Xu Yuanchong died Thursday, according to a post on Peking University’s official Weibo account. Some 530 million views of the hashtag #Xu Yuanchong-has-passed-away had been recorded by Friday afternoon.

What’s the story?

Born in Nanchang, East China’s Jiangxi province, in 1921, Xu began translating French and English into Chinese as a scholar in Paris. His efforts gave Chinese readers a chance to read Gustave Flaubert’s “Madame Bovary” and Marcel Proust’s “In Search of Lost Time” in their own language. He is well known abroad for translating classical Chinese poetry and “The Analects” of Confucius into English, with an emphasis on communicating the look and feel of the works, not just their meaning. Xu married Zhao Jun in 1959 in Beijing. They have a son, Xu Ming, who is also a translator. The elder Xu’s wife died in 2018, at age 85. In recent years, he spent most of his time translating Shakespeare, with the goal of publishing the Bard of Avon’s collected works in Chinese by the time he turned 100. There’s a documentary in the works about the project. In 2014, Xu was awarded the prestigious Aurora Borealis Prize.

What are people saying online?

Memories in their thousands have been shared in the comments section of Weibo hashtags mourning the celebrated translator. “Crying on the train ... I will never forget the lovely interactions between the protagonists in ‘The Reader,’ and all the classic works translated by Mr. Xu.” On Twitter, one translator said “his heavy rhyming style [of poetry translation] may have gone out of fashion, but Xu was a pioneer, and one of a kind.” One student wrote: “My favorite teacher Xu Yuanchong, I only look at your poetry translations, because you understand the meaning. 
Your love of Chinese literature and English shines through.” Another person familiar with Professor Xu wrote, “I remember Mr. Xu handing out his business card at a reading. It said, ‘Chinese and foreign 100-book bestseller’ and ‘[China’s] … only French and English poetry translator.’ All these people we study leave us one by one.”
United Nations language staff come from all over the globe and make up a uniquely diverse and multilingual community. What unites them is the pursuit of excellence in their respective areas, the excitement of being at the forefront of international affairs and the desire to contribute to the realization of the purposes of the United Nations, as outlined in the Charter, by facilitating communication and decision-making.

United Nations language staff in numbers

The United Nations is one of the world's largest employers of language professionals. Several hundred such staff work for the Department for General Assembly and Conference Management in New York, Geneva, Vienna and Nairobi, or at the United Nations regional commissions in Addis Ababa, Bangkok, Beirut, Geneva and Santiago. Learn more at Meet our language staff.

What do we mean by “language professionals”?

At the United Nations, the term “language professional” covers a wide range of specialists, such as interpreters, translators, editors, verbatim reporters, terminologists, reference assistants and copy preparers/proofreaders/production editors. Learn more at Careers.

What do we mean by “main language”?

At the United Nations, “main language” generally refers to the language of an individual's higher education. For linguists outside the Organization, on the other hand, “main language” is usually taken to mean the “target language” into which an individual works.

How are language professionals recruited?

The main recruitment path for United Nations language professionals is through competitive examinations for language positions, whereby successful examinees are placed on rosters for recruitment and are hired as and when job vacancies arise. Language professionals from all regions who meet the eligibility requirements are encouraged to apply. Candidates are judged solely on their academic and other qualifications and on their performance in the examination. Nationality/citizenship is not a consideration. Learn more at Recruitment.

What kind of background do United Nations language professionals need?

Our recruits do not all have a background in languages. Some have a background in other fields, including journalism, law, economics and even engineering or medicine. Such varied backgrounds are of great benefit to the United Nations, which deals with a large variety of subjects.

Why does the Department have an outreach programme?

Finding the right profile of candidate for United Nations language positions is challenging, especially for certain language combinations. The United Nations is not the only international organization looking for skilled language professionals, and it deals with a wide variety of subjects, often politically sensitive. Its language staff must meet high quality and productivity standards. This is why the Department has had an outreach programme focusing on collaboration with universities since 2007. The Department hopes to build on existing partnerships, forge new ones, and attract the qualified staff it needs to continue providing high-quality conference services at the United Nations. Learn more at Outreach. #metaglossia_mundus
"Published May 3, 2024 8:27 p.m. WAT
WASHINGTON - Google's preeminence as an internet search engine is an illegal monopoly propped up by more than US$20 billion spent each year by the tech giant to lock out competition, U.S. Justice Department lawyers argued at the closings of a high-stakes antitrust lawsuit. Google, on the other hand, maintains that its ubiquity flows from its excellence, and its ability to deliver results customers are looking for. The U.S. government, a coalition of states and Google all made their closing arguments Friday in the 10-week lawsuit to U.S. District Judge Amit Mehta, who must now decide whether Google broke the law in maintaining a monopoly status as a search engine. Much of the case, the biggest antitrust trial in more than two decades, has revolved around how much Google derives its strength from contracts it has in place with companies like Apple to make Google the default search engine preloaded on cellphones and computers. At trial, evidence showed that Google spends more than US$20 billion a year on such contracts. Justice Department lawyers have said the huge sum is indicative of how important it is for Google to make itself the default search engine and block competitors from getting a foothold. Google responds that customers could easily click away to other search engines if they wanted, but that consumers invariably prefer Google. Companies like Apple testified at trial that they partner with Google because they consider its search engine to be superior. Google also argues that the government defines the search engine market too narrowly. 
While it does hold a dominant position over other general search engines like Bing and Yahoo, Google says it faces much more intense competition when consumers make targeted searches. For instance, the tech giant says shoppers may be more likely to search for products on Amazon than Google, vacation planners may run their searches on Airbnb, and hungry diners may be more likely to search for a restaurant on Yelp. And Google has said that social media companies like Facebook and TikTok also present fierce competition. During Friday's arguments, Mehta questioned whether some of those other companies are really in the same market. He said social media companies can generate ad revenue by trying to present ads that seem to match a consumer's interest. But he said Google has the ability to place ads in front of consumers in direct response to queries they submit. “It's only Google where we can see that directly declared intent,” Mehta said. Google's lawyer, John Schmidtlein, responded that social media companies “have lots and lots of information about your interests that I would say is just as powerful.” The company has also argued that its market strength is tenuous as the internet continually remakes itself. Earlier in the trial, it noted that many experts once considered it irrefutable that Yahoo would always be dominant in search. Today, it said that younger tech consumers sometimes think of Google as “Grandpa Google.” While Google's search services are free to consumers, the company generates revenue from searches by selling ads that accompany a user's search results. Justice Department attorney David Dahlquist said during Friday's arguments that Google was able to increase its ad revenue through growth in the number of queries submitted until about 2015, when query growth slowed and it needed to make more money on each search. 
The government argues that Google's search engine monopoly allows it to charge artificially higher prices to advertisers, which eventually carry over to consumers. “Price increases should be bounded by competition,” Dahlquist said. “It should be the market deciding what the price increases are.” Dahlquist said internal Google documents show that the company, unencumbered by any real competition, began tweaking its ad algorithms to sometimes provide worse search ad results to users if it would increase revenue. Google's lawyer, Schmidtlein, said the record shows that its search ads have become more effective and more helpful to consumers over time, increasing from a 10% click rate to 30%. Mehta has not yet said when he will rule, though there is an expectation that it may take several months. If he finds that Google violated the law, he would then schedule a “remedies” phase of the trial to determine what should be done to bolster competition in the search-engine market. The government has not yet said what kind of remedy it would seek." #metaglossia_mundus: https://www.ctvnews.ca/sci-tech/google-s-search-engine-is-an-illegal-monopoly-u-s-justice-department-says-1.6872751
"For the first time, the Goethe Medal, Germany's highest distinction in foreign cultural policy, goes to a Mexican woman: the literary translator and interpreter Claudia Cabrera Luna. Itzel Zúñiga, April 30, 2024.

Without intending it as a profession, Claudia Cabrera began in 1994 to render short German texts into the Spanish of Mexico, her native country, while working at the Goethe-Institut in Mexico. Then came an exhibition catalogue, the odd film retrospective and, later, a play. Thus arose her life's work, recognized this April 24 with the Goethe Medal, which the Federal Republic of Germany has awarded since 1955 to 380 foreign figures for their contributions to art, science and culture. "Claudia Cabrera is one of Mexico's finest literary and theater translators from German. Since 1994 she has translated more than 60 novels, plays and nonfiction books into Mexican Spanish, among them works by Rainer Werner Fassbinder, Julia Franck, Cornelia Funke, Franz Kafka, Heiner Müller, Robert Musil, Silke Scheuermann and Anna Seghers," the jury noted. Over 30 years, countless works by German-language authors and specialists have passed through her hands, among them Arnold Zweig's "El Hacha de Wandsbek" ("The Axe of Wandsbek"), whose translation earned her the Mexican government's Premio Bellas Artes de Traducción Literaria Margarita Michelena in 2020. "With this impressive record, Claudia Cabrera contributes significantly to the visibility and popularity of German literature, and of its authors, in Mexico and Latin America," stated the jury's decision, issued in Munich. The award ceremony will be led by Carola Lentz, president of the Goethe-Institut worldwide. It will take place in Weimar on August 28, exactly 275 years after the birth of Johann Wolfgang von Goethe, the emblematic author of German letters. The three 2024 laureates are the Macedonian historian and cultural manager Iskra Geshoska; the Chilean Carmen Romero Quero, director of the Festival Internacional Teatro a Mil; and Cabrera Luna, also founder and president of the Asociación Mexicana de Traductores Literarios (Ametli), who spoke with DW about the prize and the challenges of her profession.

DW: As the first Mexican woman to receive it, what does this honor represent for you?

Claudia Cabrera: I still cannot get over my astonishment at being awarded the Goethe Medal. The first Mexican to receive it, in 1995, was José María Pérez Gay, Mexico's most important Germanist. That I, a self-taught literary translator, should be the object of that honor seems incredible to me. It is the confirmation of a life devoted to the service of my two languages, Spanish and German.

Do you consider it, then, also a prize for your profession?

It strikes me as significant, and no accident, that the medal, awarded not only for spreading German language and culture but also for fostering international cultural cooperation, has landed several times in the hands of translators, among them the Spaniard Miguel Sáenz. It is a far-reaching recognition of the indispensable work of literary translators. Without us, the different languages and cultures would be islands; without translators there would be no exchange of knowledge across languages. I love to quote José Saramago, who said that "writers make national literature, while translators make universal literature."

German and Spanish are so different, linguistically and culturally. How complex is your work?

Although from the outside Spanish and German, or Mexico and Germany, may seem totally dissimilar, I have had the good fortune of living in both worlds since childhood. And in doing so I learned that, despite their apparent differences, they are not so contradictory. This bilingual and bicultural life, which I owe to the Colegio Alemán, has made it easier for me to mediate and communicate between the two languages, to find ways of explaining and bringing closer facts and concepts that might at first seem alien to the Mexican public. In my most recent translation, Anna Seghers's "La séptima Cruz" ("The Seventh Cross"), a novel set in the 1930s at the height of the Third Reich, I proposed to my editors adding a glossary of terms, historical events and figures that a 21st-century Mexican reader has no reason to know, but that are necessary to understand the plot and the historical and political context of the work.

What qualities should a translator have?

When no exact translation of a word or concept exists, which happens very often, you have to look for equivalents, approximations or paraphrases, invent new and fitting metaphors, and even coin new terms. Therein lie the charm and the challenge of literary translation: carrying across and making intelligible not just words but entire universes. You must be an avid reader, know your own language well and grasp the subtlest intricacies of the language you are translating from. You must have an insatiable curiosity, because translation involves a great deal of research; you cannot translate a subject you neither know nor understand.

What do you consider the challenges facing your profession?

I want to dedicate the prize to all the literary translators of my country, because I would like to think it will give us visibility and draw attention to the importance of literary translation and to the authorial standing of translators. Under Mexico's Ley Federal del Derecho de Autor we are authors of derivative works, not of original works, but authors we are, and as such we should earn royalties. Our fight is also for recognition of our moral rights: that our credit appear on the cover next to the author's name, or on the copyright page, and that translators, not the publisher, have the final word on corrections to our texts, since at times there are terms or plays on words that are being misread; changing them alters the meaning of what the original author wrote in their mother tongue. Added to this undervaluation and invisibility are the impossibility of living on literary translation alone and the erosion of labor rights. These problems are common in many countries, Latin America among them.

Do you believe artificial intelligence threatens the work of translators?

Artificial intelligence, or artificial intelligences, are neither intelligent nor artificial, because they feed on a compendium of human wisdom. I think they will first affect other specialties of translation, such as the more technical ones, but not literary translation, given its high degree of complexity and subtlety. Artificial intelligences will take some time to produce and translate good literature, for if you do not understand the historical, cultural and linguistic context, you cannot translate adequately. That capacity, which is human, machines do not have.

Finally, what work are you translating now?

It moves me, and makes me enormously proud, to receive the same prize awarded in 2003 to the Czech-German journalist Lenka Reinerová, who lived in Mexico in the 1940s after fleeing the Nazi terror. She is one of the authors, along with Anna Seghers, Alice Rühle-Gerstel and Steffie Spira, whom I am translating as part of my project to recover the German-speaking women writers exiled in this country during the Third Reich. I wanted to recover their stories because, unlike the Spanish exile, which remains very present here in Mexico, little is left of the German exile. Although it mattered greatly to the Mexican society of the time, it is an artistic, cultural and historical memory that has been fading, because many Germans returned to Germany at the end of the Second World War and because few people in the country spoke German. These authors are unknown in Mexico, which is why I wanted to bring them into the Mexican literary canon. I set out to translate them into Mexican Spanish, in contemporary language, because Seghers's "Tránsito," "La séptima cruz" and "La excursión de las niñas muertas" were translated in Spain, but not in Mexico, where they were written. And now that fascisms are resurging, in Germany as well as in other European and Latin American countries, we must remember what fascism did in the 1930s and '40s." #metaglossia_mundus
"In this talk, David Yang will discuss his experience of translating Nishioka Kyōdai’s manga adaptation of Franz Kafka’s stories into English. In particular, he reflects on the medium specificity of manga and the challenges it poses to translation, as well as his hybrid translation process, which relied on the Japanese text as well as Kafka’s German original. These different layers of mediation allow readers to experience Kafka’s masterpieces in fresh new ways.
- Day & Time: May 21st, 2024 (Tuesday), 14:00-15:00
- Venue: Lab (2nd floor of WIHL)
- Language: English
- Participation: Free
- Participants: Students, Faculty and Public
- Presented by the Yanai Initiative for Globalizing Japanese Humanities, with support from the Waseda International House of Literature
Flyer: Lecture by David Yang
David Yang is a doctoral candidate in the Department of Asian Languages and Cultures at UCLA and a Yanai Initiative Research Fellow at Waseda University. Informed by translation studies, his research examines the conceptualization of endings in modern Japanese literature. His translation of Nishioka Kyōdai’s Kafka: a Manga Adaptation was published by Pushkin Press in 2023." #metaglossia_mundus
"Gordon Runyan, Religion columnist | May 01, 2024 “Pastor, how can you believe in the Bible when it’s been translated so many times?” That’s a common objection. Meaning no insult, but it’s an argument based on ignorance. The objector doesn’t know anything about how the Bible came to be and assumes the worst: a shadowy history littered with corruptions both accidental and nefarious. It’s assumed that we got the Scriptures through a process much like the old party game, “Phone Message.” In that game, you line up several children in a row. A long sentence is told to the first child. He takes off running, around a cone, and then comes back and whispers the sentence he was told to the next kid. When you get to the last child, there’s the punchline. You have the last one recite what he thinks he heard, and everybody gets a good laugh at how mangled the message has become, compared to what the first child was actually told. People think we got the Bible that way. Or, they assume the process is something like making old Xerox copies, and then making copies of the copies, until the document is ugly and hard to read. But if I write a book in English and then three friends translate my book into their native languages (say French, Spanish, and German) that has no effect on the book I wrote. None at all. It still says what it says when I typed the words, “The End.” Now, one of the friends may have made some translational errors. Maybe he’s not so great at English after all. OK, that’s not hard to spot, because eventually we’ll run across someone who knows both languages and can highlight the errors. So what should be done? We just make a new translation, hoping to improve on previous efforts. We still have the original book for making comparisons. In that process, no matter how many times it’s repeated, it doesn’t change what was originally written. 
The same is true for making hand-written copies: as long as we have the source document, we can spot inaccuracies and correct our work. There are in existence right now over 5,000 ancient manuscripts of Scripture and lots of ancient translations into other languages. In some cases, the source documents we’re working with date to several hundred years B.C. We have copies from many time periods and geographical regions. We have enough to say with certainty that the Phone Message game is not what happened here. How do we know? Because the older copies say the same thing, overwhelmingly, as all the later copies. The medieval era Latin carries the same message as the Hebrew and Greek from 300 A.D. If somebody made huge errors along the way, or the Council of Nicea (for instance) had forced a bunch of wholesale changes, that would be very easy to spot. We have all the receipts, as the kids say. We can spot all the issues, like missing words, misspellings, and transposed numbers. There are modern English translations that will even show you where all those places are. Nobody’s hiding any of this. There’s no need. You have the Bible God intended. Gordan Runyan is pastor of Tucumcari’s Immanuel Baptist Church and author of “Radical Moses: The Amazing Civil Freedom Built into Ancient Israel.” Contact him at: reformnm@yahoo.com" #metaglossia_mundus
Non-verbal communication such as gestures, facial expressions and body movements can convey a wealth of information beyond words, writes guest columnist Karen Friedman. #metaglossia_mundus
"Tomedes Launches New ‘Most Popular Translation’ Feature in MachineTranslation.com New Feature Allows Users to View Translations Ranked by Similarity Across Online Translation Tools April 30, 2024 14:59 ET | Source: Tomedes Beaverton, Oregon, April 30, 2024 (GLOBE NEWSWIRE) -- MachineTranslation.com by Tomedes is excited to announce a new feature called "Most Popular Translation." This innovative functionality provides a straightforward way for users to view and compare translations from various sources, including different machine translation engines, online translation tools, and generative AI models. The "Most Popular Translation" feature calculates a similarity score that reflects the consensus among diverse translation systems about a particular translation. This score is crucial for users who seek reliable and precise translations, as it highlights translations with the highest degree of agreement and sameness across various platforms, suggesting enhanced reliability and accuracy. This AI-powered feature is designed to assist users who rely on precise translations, providing a clear, numerical measure of consensus that can guide decision-making. Feature Overview: - Comparison of Translations: Analyzes and displays how similar translations are across different engines and tools, incorporating machine translation evaluation to ensure accuracy.
- Guidance on Reliability: Translations with higher similarity scores suggest a greater consensus among engines.
- User-Friendly Interface: An intuitive button and tooltips that help users navigate the feature easily.
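Tomedes does not disclose how its similarity score is computed. As a rough illustration of the idea of consensus ranking, the sketch below scores each candidate translation by its average pairwise similarity to the others, using Python's standard-library difflib as a stand-in for a real sentence-similarity metric; the function name and the toy candidates are assumptions for illustration, not part of the product.

```python
from difflib import SequenceMatcher

def most_popular_translation(candidates):
    """Rank candidate translations by average pairwise similarity.

    The candidate that agrees most with the other outputs (highest mean
    SequenceMatcher ratio, a value between 0 and 1) is treated as the
    "most popular" translation.
    """
    scores = []
    for i, cand in enumerate(candidates):
        others = [c for j, c in enumerate(candidates) if j != i]
        mean = sum(SequenceMatcher(None, cand, o).ratio()
                   for o in others) / len(others)
        scores.append((mean, cand))
    scores.sort(reverse=True)  # highest consensus first
    return scores

ranked = most_popular_translation([
    "The cat sat on the mat.",
    "The cat sat on a mat.",
    "A feline was seated upon the rug.",
])
# The two near-identical renderings outrank the lone paraphrase.
```

A production system would likely use an embedding-based or edit-distance metric tuned for translation output rather than raw character matching, but the consensus logic would be the same.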
In addition to the "Most Popular Translation," MachineTranslation.com offers several other AI-powered features designed to enhance user experience and translation accuracy: - AI Translation Insights: Provides users with insights about word choice, consistency, and length across different translation outputs, helping to refine the translation process.
- AI Quality Score: Each translation is scored on a scale from 1 to 10, offering users a clear, quantitative measure of translation quality based on AI evaluations.
- Detailed Analysis & MTPE Assessment: Offers a written analysis of each translation's smoothness or fluency and advises whether human post-editing is necessary to improve the translation.
These features together provide a comprehensive toolset for anyone needing high-quality, nuanced translations, combining the latest in AI technology with user-friendly design. For additional details about this feature or to experience it firsthand, please visit https://www.machinetranslation.com. About MachineTranslation.com: MachineTranslation.com by Tomedes is an AI-assisted online translation tool that translates, compares, and recommends the best translations in real-time. It offers high-quality, cost-effective translation services with detailed AI analyses and quality scores, making it ideal for SMEs. Media Contact: Rachelle Garcia Head of AI MachineTranslation.com info@machinetranslation.com Disclaimer: The information mentioned in the press release is provided by source Tomedes. KISS PR and its distribution partners are not directly or indirectly responsible for any claims made in the above statements. Contact the vendor of the product directly for any queries/issues." #metaglossia_mundus
"Abstract. Large-scale pretrained language models (LLMs), such as ChatGPT and GPT4, have shown strong abilities in multilingual translation, without being explicitly trained on parallel corpora. It is intriguing how the LLMs obtain their ability to carry out translation instructions for different languages. In this paper, we present a detailed analysis by finetuning a multilingual pretrained language model, XGLM-7.5B, to perform multilingual translation following given instructions. Firstly, we show that multilingual LLMs have stronger translation abilities than previously demonstrated. For a certain language, the translation performance depends on its similarity to English and the amount of data used in the pretraining phase. Secondly, we find that LLMs’ ability to carry out translation instructions relies on the understanding of translation instructions and the alignment among different languages. With multilingual finetuning with translation instructions, LLMs could learn to perform the translation task well even for those language pairs unseen during the instruction tuning phase." #metaglossia_mundus
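The abstract describes finetuning XGLM-7.5B on translation instructions but does not reproduce the prompt template. As a purely hypothetical sketch of what one such instruction-tuning record might look like, assuming a common instruction/input/output layout:

```python
def make_translation_example(src_lang: str, tgt_lang: str,
                             src_text: str, tgt_text: str) -> dict:
    """Build one instruction-tuning record for translation finetuning.

    The instruction wording here is an assumption for illustration;
    the paper's actual template is not given in the abstract.
    """
    return {
        "instruction": f"Translate the following {src_lang} text into {tgt_lang}.",
        "input": src_text,
        "output": tgt_text,
    }

# One German-to-English record; during finetuning the model would be
# trained to emit "output" given "instruction" plus "input".
record = make_translation_example("German", "English",
                                  "Guten Morgen.", "Good morning.")
```

The paper's finding that tuning transfers to language pairs unseen during instruction tuning suggests the model generalizes over the language names in such templates rather than memorizing each pair.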
"“Translators and writers must fight through the “labyrinth of imagination,” find their way through their private language toward a text’s new picture of reality.” “The responsibility of translator,” writes Olga Tokarczuk, “is equal to that of writer.” 1 Both connect “intimate language”—through which individuals understand their experience—with “collective language”: the shared vocabulary through which a society forms its “picture of reality.” Ideally, a writer refreshes stale collective language by offering new articulations of experience, and a translator shares different societies’ languages to reveal that there is no single way to interpret the world. In this sense, writing and translating stitch together individual voices into new collectives, forming new totalizing conceptions of reality. So writes Tokarczuk—or, rather, so writes Tokarczuk as translated by Jennifer Croft. Croft herself imagines translators similarly in her new novel, The Extinction of Irena Rey: as upcyclers, parasites, devotees, minotaurs, conquerors, creators, invasive species, and, perhaps, human beings. But she insists most ardently that translators are the connective tissue in massive systems. Croft and Tokarczuk share a concern for individuals and the totalities they inhabit. The profound feat of Tokarczuk’s historical epic The Books of Jacob is its new collective language, which figures the massive web of reality through the textured frailty of human beings. Croft, in contrast, attends to the way the intimate language of the individual sustains whole pictures of reality. And, precisely in the novel’s shortcomings, Croft also shows the fickleness and fragility of individual language—and its failure to produce pictures of existential systems. 
In her celebrated Nobel lecture, Tokarczuk suggests that the contemporary world requires a new kind of novelistic voice. To represent the world’s massive systems without losing fidelity to individuals, spares, or strays calls for a “tender narrator”: a storyteller with a “perspective from where everything can be seen,” who illuminates “that all things that exist are mutually connected into a single whole.”2 Such a narrator, she clarifies, is not a mere accumulator of information but a mythmaker, synthesizing disparate objects into a totality the reader can experience. In Tokarczuk’s The Books of Jacob, the tender narrator is ostensibly Yente, grandmother to the titular eighteenth-century Jewish messianic leader Jacob Frank. On the first page, Yente swallows an amulet meant to forestall her death and is mysteriously transported outside her body, allowed to watch centuries of history and countless human lives unfurl. But really, Yente is a fictive conceit; one doesn’t get the sense that in Tokarczuk’s epic, intimate language comes from Yente’s lips. The true tender narrator is a phantasm of syntactic twists and dictional choices: the prose itself. Tokarczuk’s great achievement is the creation of a new language for grasping the world, a narrative voice that conjures and binds individual people in their hopes, agonies, desperations. The novel’s massive web of characters, immersed in the tumult of plague and war and intolerance, lament, each in their own way, that the world is “made poorly.” But the novel itself binds them together in the enormity and minutia of their thirst for salvation. Consider, for instance, Jacob’s encounter with the Black Madonna of Częstochowa, among the most renowned Christian icons in Poland. The scene’s details are narrated in the present tense, flinging us into the middle: “Jacob is permitted to enter the crowd in front of the picture. 
He is scared, but not of the picture—of the crowd.” Then, rupture: Something strange freezes in the air, so that your heart contracts as if from fear, but it isn’t fear, it’s something bigger, and it happens to Jacob, too, so that he falls on his face, onto the floor that was only just stomped all over by the peasants’ dirty shoes, and here, next to the floor, the racket quiets, and it’s easier to bear the tightness in his chest that out of nowhere folded him in half. Later, in his chambers, Jacob deliriously prophesies that the Virgin is a guise of the Shekinah (a Kabbalistic divine feminine figure) and a key to salvation: “She has to hide in the abyss … But every day she will appear to us more clearly, down to her every detail.” The perspective gently shifts to Jacob’s attendant, who leaves the chamber and later spots a strange hierophantic inscription on the wall. “He looks at it, his surprise not wearing off, then shrugs and blows the candle out.” And then we’re gone, on to another chapter. Episodes like this accumulate over the novel’s 900-odd pages, a complete image forming from disparate pieces. Patterns emerge: the inexplicable in the ordinary, suffering that defies explanation, salvation that appears this close. We discern an underlying fabric, a narrative logic tuned to existential need. This logic most clearly appears when Yente, from her mystical vantage, gazes upon the “messianic machine,” the metaphysical infrastructure of reality. It spins “slowly and systematically,” working pedestrian life into salvation, and its product is the Messiah itself. 
This Messiah, like the narrative voice that permeates the novel, “is something that flows in your blood, resides in your breath,” and is found in the “dearest and most precious human thought: that salvation exists.” The novel’s own messianic machine is built of this fragile particularity: fugitive note-taking and rambling letters, webs of backdoor diplomacy and gossipy fortune telling and the delirious look by which the faithful realize the Spirit of God has entered Jacob. In them, we find a world that is, in Walter Benjamin’s words, “shot through with chips of messianic time.”3

With its whirring salvific hydraulics, The Books of Jacob’s narration creates a new picture of totality. And it does so simply—impossibly—by holding together moments of human frailty. Narrative voice—tender, sober, mystical, earthly—is Tokarczuk’s achievement. But it isn’t hers alone: after all, it is Jennifer Croft who brought forth Tokarczuk’s private messianic language in English.

Croft has described translating The Books of Jacob as a multifocal process. Translation, she argues, means parsing the meaning of each word and intuiting the cultural architecture to which words belong—the exact kind of massive system underpinning the novel itself. This means a text must “pass through the vast, dynamic labyrinth of the translator’s imagination.”4 Yet the result is necessarily ambivalent: “To the extent that the map can change the territory by determining an undetermined space or feature … I have likely both narrowed and expanded Olga’s original text in my translation.” Managing every aspect of a text is fractious and uncertain—a consequence of the translator’s subjectivity, for better or worse.

This uncertainty is at the heart of Croft’s new novel, The Extinction of Irena Rey. The plot centers on eight devoted translators of Polish celebrity author Irena Rey. The author assembles the translators at her forest estate for the purpose of translating her magnum opus, but then she disappears. 
(Rey is clearly a stand-in for Tokarczuk, though Croft assures the reader that her fictive author is “the opposite.”5)

Perhaps naturally, the novel has plenty to say about translation. Croft writes that translators are like fungi—especially the hyphae of a mycorrhizal network, the threads that “coursed through the soil and stitched the plants and trees of the forest into a united and communicating whole.” They are nexuses in which all things are (or seem) connected. The novel is fascinated with such sites where disparate things come together: from the primeval Białowieża Forest to Berlin Tempelhof Airport, from a writer’s house to Instagram, the protagonists of Irena Rey continually encounter places where “everything was connected to everything else by means of a word.” But whereas Books of Jacob’s narration illuminates the messianic mechanics of such all-inclusive systems, Irena Rey fixates on their fragility, asking how individuals prevent greater wholes from forming.

Although the book is named after the authoritarian and enigmatic Irena, the story is really that of Spanish translator Emi, the novel’s narrator. Among all the translators, she is Irena’s most zealous devotee; Emi cringes at the possibility of misrepresenting Irena’s language, lashes out at the other translators for doubting the author, and is convinced that the new novel will save the world from climatic extinction. Emi is less sure about her own work. She’s entranced by the notion that translators, like fungi, “stitch the world into a united and communicating whole,” but she worries: If fungi are “translators of trees,” are they “unwaveringly faithful” or simply parasites destroying their hosts? 
Translation has a captivating power to connect the individual to the collective, but in doing so is vulnerable to the power of individual subjectivity. This tension haunts the novel’s narrative method. Irena Rey professes to be the English translation of Amadou, a novel that Emi originally wrote in Polish (despite being a native Spanish speaker). The text is peppered with footnotes by the ostensible English translator, Alexis Archer—herself a character in the novel, Emi’s nemesis. Archer’s notes frequently explain the complexities of translating from Polish to English while reading Spanish between the lines; she often finds no direct translation for Emi’s writing, so she makes creative alterations, becoming an author unto herself. But Archer also engages with the narrative, disparaging Emi’s storytelling and outright disputing her version of events. Though Emi proposes to capture the whole story of Irena’s disappearance, Archer makes the text messy: the narrator isn’t a voice from beyond but a collision of private and public languages.

But of course, it’s all fictional. Alexis isn’t real and Irena Rey (presumably) isn’t a translation: the multiplicity of voices is just one voice, Croft’s. The point isn’t so much the postmodern truth-telling reverie but the question of how many voices it takes to create a picture of reality with language. Maybe there are multiple voices (it feels sexy to say so)—but maybe there’s one synthesizing voice in the end.

Messy individuality is also part of Irena Rey’s failings. While the text boasts an array of quirky characters, it is short on compelling people. Outside of Emi, we’re most acquainted with the too-beautiful, too-online Alexis; Freddie, the philandering pseudointellectual Swedish translator; and Chloe, the stolid French translator (four more translators flit in and out of view, plus a bevy of side characters, but their personalities are ill-defined). 
Even Alexis, Chloe, and Freddie remain woefully undeveloped because they’re filtered through Emi, for whom they are, respectively, objects only of petulant spite, sophomoric possessiveness, and adolescent infatuation.

In a sense, all these problems are by design. Emi represents the extreme of a translator’s worst impulse: fixation. She throws herself into either absolutist devotion or hatred, too attached to the object of her desire to fit into the networks around her. But although her fixation makes for sharp commentary, it also makes for poor reading. Her obsession is repetitive, not generative; we hear over and over that Irena is immaculate, Freddie is alluring, and Alexis is vapid—but little more than that. This does an especial disservice to Alexis, perhaps the most interesting character in the novel. Although she’s vain, she’s also the boldest, most original theorist of translation in the group, but anytime she begins to voice her thoughts, Emi’s narration cuts her off with nonspecific hatred.

Emi’s obsession also dampens the novel’s central mystery. Although Irena’s cryptic disappearance prompts reflections on the nature of translation itself, Emi is so narrow-minded that every new revelation appears as over-the-top shock. Often, this means contrived rhetorical questions: “If we knew more than what we strictly needed to know, would it make our translations better? Or would it make them worse?” “Was all of this—everything I held sacred, understanding Irena, doing her language justice, giving her what she deserved—just a game to them?” “Were we mostly responding to the notion that we might not know her every thought, her every move, her every conscious desire, like we had always believed we did?”

The novel wants to suggest that narration and translation can both forge new collective totalities from individual creativity. But these ideas fizzle because Emi, as an individual, cannot narrate them effectively. 
It is clear that she has an imperfect understanding of translation, has made a graven image of Irena, and has unhealthy attachments to her collaborators—but the reader knows this long before the novel seems to. The result is that her individuality disrupts the wholeness it tries to create. In her introductory note, Archer suggests that Emi is “completely unequipped to comprehend” her own story. Unfortunately, this judgment is even truer than the novel realizes.

This weakness, however, might be revealing. If The Books of Jacob develops a new totalizing language from individual lives, Irena Rey demonstrates how intractable individuality can be. Translators and writers are rarely tender narrators—they are liable to bias and preference and obsession. They, too, must fight through the “labyrinth of [the] imagination,” find their way through their private language toward a text’s new picture of reality. For some translators, like Croft herself, individual language helps build a messianic machine. But for others, like Emi, bathos and obsession mean we never escape the labyrinth.

1. Olga Tokarczuk, “How Translators Are Saving the World,” translated from the Polish by Jennifer Croft, Korean Literature Now, June 19, 2019.
2. Olga Tokarczuk, “The Tender Narrator,” translated from the Polish by Jennifer Croft and Antonia Lloyd-Jones, The Nobel Prize, December 7, 2018.
3. Walter Benjamin, “Theses on the Philosophy of History,” in Illuminations, edited by Hannah Arendt, translated from the German by Harry Zohn (Schocken, 2007), p. 263.
4. Jennifer Croft, “The Order of Things: Jennifer Croft on Translating Olga Tokarczuk,” Literary Hub, February 1, 2022.
5. “A conversation with Jennifer Croft, author of The Extinction of Irena Rey.” Prepublication advance reading material for The Extinction of Irena Rey, from Bloomsbury."
#metaglossia_mundus
"Tuesday, May 7, 2024, 2:30 to 4:30 p.m. Judicial interpreters in Breton from the Ancien Régime to the 1930s. Thierry Hamon, professor of law at the University of Rennes and former director of the Saint-Brieuc branch of the law faculty: "Judicial interpreters in the Breton language from the Ancien Régime to the 1930s in Lower Brittany." Open to UTL members." #metaglossia_mundus
"From the pillars of advanced learning and GenAI to the best approach to training search engines, Dmitry Masyuk, Director of the Search and Advertising Technologies Business Group at Yandex, outlines the challenges faced by the industry and how Yandex is empowering its users by combining advanced learning from large models with a search engine.

Integrating AI into search engines has marked a significant shift in the industry. How is Yandex tailoring its AI strategies to meet the unique demands of web users?

Around a year and a half ago, when the first language models like ChatGPT were released, there was a misconception that they might challenge major search engines. However, this was a myth. Regardless of the type of language model or neural network, they lack specific knowledge about the world. While they can articulate impressively on general topics, they falter when asked about specific details or recent events.

Seeing this, we recognised we were positioned uniquely. After all, we have a search engine that knows almost every detail about the world since it continuously takes in new information from the Internet. Thanks to our advanced search engine, we have real-time access to the most recently uploaded online information. At first, we were unsure about comparing ourselves with OpenAI. However, we are now confident that we can develop similarly sophisticated language models and Generative AI.

Our recent release, Neuro, is a unique hybrid product that combines the power of our proprietary language model, YandexGPT, with our own search engine, and it is free for our approximately 100 million monthly users. Currently operating in Russian, it lets users ask questions in natural language and receive detailed, up-to-date responses. If information is available online, Neuro will provide a single, summarised answer, citing all the sources. This is the biggest search-related update that Yandex has rolled out in the last 20 years. 
The launch of Neuro is a significant milestone for us and for the rest of the world.

What challenges does Yandex face when ensuring the accuracy and reliability of AI-generated information?

The fundamental challenge here is clear. In response to a user query, a traditional search engine provides 10 relevant sources on the first page. In this case, the search engine is not really responsible for the content of the sources it provides. However, it’s a whole other story when you’re giving a direct answer to a user’s question. This is why we’re seeing many emerging AI products avoid answering sensitive questions. Our aim is to maintain the universality of search products while ensuring accuracy, reliability and ethics in our responses. From the outset, our product development has focused on these principles.

Technically, this involves two components. Firstly, our search engine, with 27 years’ experience behind it, filters out low-quality resources. When using our hybrid search, the search engine provides material sources, which are summarised by the LLM into a single answer. Secondly, we train our model to provide balanced, ethical and accurate responses through a team of specialised editors. During the training process, multiple individuals review material to ensure alignment and balance. Plus, Neuro’s answers are based on the information found online, and the answer always includes links to the sources used. In summary, the first component involves providing relevant and high-quality sources, a common practice in any search engine. The second component utilises a team of cross-checking editors to train the model to provide accurate and balanced answers.

How does Yandex differentiate itself in a competitive market?

We are a bit of a technological miracle, as we are managing to compete strongly against Google. Approximately 65% of Internet users in Russia opt for Yandex as their search engine of choice, a well-known fact that is openly available. 
This showcases our ability to compete with global companies. Our size is somewhat paradoxically advantageous. While we’re significantly smaller than other giants on the global market, it allows us to be more dynamic and agile. Over the past five years, we’ve nearly doubled our staff, growing to 27,000 employees. However, we’ve managed to maintain a startup mentality, making quick decisions and staying nimble in a rapidly changing market.

So, how are we differentiating ourselves from other global players? First of all, especially when entering new markets, we place a huge focus on product localisation. The first thing we did when we started actively developing in the CEE region was a massive push towards raising the quality of Yandex Search in the Kazakh language. I’m not saying that other global companies don’t care about localisation, but for us this is a number one priority. We make sure that our AI and search solutions work better in the local language, including, but not limited to, Russian. And we regularly compare thousands of cases to make sure of that.

Another crucial aspect is our talent pool. Russian engineers consistently excel in programming competitions like the ICPC, which Russia has won in 14 of the last 20 years, showcasing our country’s remarkable engineering talent. Obviously, we couldn’t build any of our cutting-edge technologies without hundreds of talented professionals, and we’re constantly hiring new ones. We now have 1.5x more ML engineers than we did before 2023.

What future developments can we expect from Yandex in the AI and search engine fields?

We are at the beginning of a new era; a technological revolution that could last five to 10 years. Just as smartphones and the Internet transformed our lives, AI has the potential to do the same, if not more. We’re only one and a half years into this AI renaissance, but already, the potential is staggering for GenAI. 
Regarding Yandex’s plans, I’m personally inspired by OpenAI’s advancements, and it is important for us to adhere to the new standard of AI it has set. Our strategy is generally to ensure the fundamental AI technologies are on par with global companies in specific domains, or even better.

But there is a challenge that’s being widely discussed within the professional community — monetisation. The question is, how do you make a great product, which is always the first thing on your mind, and also turn it into a profitable business? Sure, there are traditional monetisation approaches like advertising, and many companies rely on subscriptions. But we aim to distribute our general-purpose products, like Neuro, for free. We might adopt a monetisation approach at some point, but it’s too early to say.

Speaking of advertising, we’ve been quite successful in ad tech, which we have years of experience and strong expertise in. To put things into perspective, Yandex Direct — our platform for placing contextual and banner ads — has more than 400,000 advertisers who place an average of 4.5 billion ads a day. Around 25 different neural networks are involved in delivering ad impressions to users, and our entire Yandex Advertising Network has more than 55,000 partner platforms.

Aside from that, Yandex has quite a few plans on the international market. I believe this year will be marked by a significant expansion in the non-Russian-speaking world. In essence, our focus is on improving fundamental technologies, developing sustainable business models, evolving our products and expanding internationally. We are excited about the journey ahead.

What is the potential of AI and what global trends are you seeing that could be key to its advancement?

The concept of AI is fascinating considering its potential to revolutionise technological advancements. AI essentially makes intelligence cheaper and faster, much like how the Internet made information more accessible. 
The efficiency gains from AI are substantial. Over the next five to seven years, we can expect a 3% to 5% increase in daily productivity for the average Internet user. This will be particularly pronounced in fields such as software engineering, customer support and legal professions, where AI can streamline tasks by up to around 10%. Contrary to concerns about job loss, AI is expected to enhance productivity and create new jobs and businesses. Software engineers, for instance, will receive substantial support from AI systems, enabling them to work more efficiently. Additionally, AI solutions will be extended beyond B2C applications. We already offer access to our models via API, allowing companies to seamlessly integrate our AI into their systems. Ultimately, AI will democratise access to information and services, making them more efficient and accessible globally. Whether it’s offering medical advice to remote communities or streamlining business processes, AI will transform industries and improve lives.

What are the latest developments in voice tech in regard to search and AI?

Voice technology is a fascinating area often overlooked in discussions. It holds immense promise, particularly in the field of real-time translation, which is not yet fully developed; however, the market is making significant progress. I’m confident that in a few years there will be technologies capable of providing real-time translation for conversational purposes. We are already working in this field and have successfully implemented real-time, AI-powered video translation in Yandex Browser.

We have made significant progress in voice recognition and speech synthesis. The quality of real-time voice-to-text conversion on mobile devices is impressive, thanks to advanced AI and Machine Learning systems. While voice recognition and speech synthesis have come a long way, there is still much to achieve, particularly in understanding emotions. 
Smart assistants lack the ability to convey empathy effectively or read emotional content with appropriate intonation. One of our goals in the Machine Learning department is to improve the emotional recognition capabilities of our voice assistant, Alice. (Its technology was also brought in for the creation of Yasmina, a bilingual AI assistant that speaks both Arabic and English.) By the end of the year, we aim to enhance its ability to efficiently interpret emotional overtones and make them more relatable. It is already remarkably human-like, capable of making jokes and offering an engaging conversational experience. It is simply enjoyable to interact with; however, we still need to improve its emotional understanding capabilities.

Overall, the trend is towards humanising technology. We’ve already introduced features like whispering, allowing the virtual assistant to respond softly when spoken to in a whisper, as well as speaking louder or quieter depending on the distance from the person speaking. The field of voice tech is definitely shifting towards a more human-centred approach. It is not only about intonations, but also other aspects of human interactions, some of which may be very subtle. However, when you deal with technology, there remains a gap that needs to be bridged. Despite this, I believe that the voice tech domain will be fully explored within the next three to five years." #metaglossia_mundus
"OTTAWA — The federal government has been forced to adjust the set-up in the House of Commons and committee rooms after another language interpreter suffered a significant hearing injury.

Dylan Robertson, The Canadian Press, Apr 29, 2024, 4:10 PM

(Photo: A language interpreter is seen working in an interpretation booth during a news conference in Ottawa on October 16, 2020. THE CANADIAN PRESS/Justin Tang)

The incident occurred April 8 during a closed-door meeting of the House foreign-affairs committee.

"I always do caution everyone to pay attention to that, because we have had many incidents," Liberal MP Ali Ehsassi, the committee's chair, said Monday. "I certainly hope members (of Parliament) take it more seriously. It's very disconcerting."

The Canadian Association of Professional Employees says the worker has been off for the past three weeks, and the union is blaming inadequate equipment on Parliament Hill for multiple injuries in recent years.

The latest incident involved the Larsen effect, which occurs when a microphone and an earpiece get too close, resulting in sharp, sudden feedback that can be loud or frequent enough to permanently injure someone. The federal Labour Program, which oversees labour standards in federally regulated workplaces, issued an order about the effect on April 25.

Written in French, the order noted that a health and safety officer visiting the Hill the previous week found exposure to the Larsen effect "constitutes a danger" for staff wearing headphones. "Repeated exposure to the Larsen effect can cause permanent damage to the hearing health of interpreters," reads the order, which calls for changes to how meeting spaces are set up to prevent it from happening again. 
House of Commons Speaker Greg Fergus notified MPs on Monday morning that tables in committee rooms were rearranged to keep microphones and earpieces farther apart. Stickers are now posted where MPs can place unused earpieces, along with printed instructions on how to prevent incidents. Similar information has been posted in Senate committee rooms.

Fergus also reminded MPs not to touch the microphone or its stem when it's on, lean in and out from the microphone while speaking or adjust their earpiece volume when sitting near a live microphone.

"The House of Commons works with the Translation Bureau to ensure the best possible working conditions for interpreters," Fergus's office wrote in a statement, noting that this includes measures "at the technological, behavioural and physical levels."

A spokeswoman for the Senate's self-governing body reiterated those points, adding that they are not aware of any recent Larsen-effect incidents in a Senate workplace. "Despite efforts to minimize the frequency of these events, they continue to occur on limited occasions," the spokeswoman wrote.

Experts have told Parliament that the staff who translate meetings between English and French are being put at risk of injury because they are sometimes exposed to sudden, loud noises even as they strain to hear some voices.

"Despite an unacceptably high number of workplace injuries, the Translation Bureau has been slow to implement proper measures to protect their employees," the union said in a statement on Saturday. Public Services and Procurement Canada, which oversees the Translation Bureau, did not immediately provide comment.

The union said the latest incident occurred during a committee meeting as MPs were drafting a report. There were two instances of sharp feedback, the union said, and Ehsassi warned MPs to respect existing protocols. But that didn't happen, the union said, and a third, very loud episode of feedback followed. 
The interpreter left work and later sought medical attention. Ehsassi said he doesn't recall the specifics of what happened, but he hopes the new rules will "insulate against further problems."

Bloc Québécois MP Stéphane Bergeron said MPs only learned an interpreter had been injured after the meeting concluded. He noted that his party opposes hybrid sittings, in part because of interpreter injuries. "We have to do everything possible to ensure the safety of House of Commons staff," he said in French. "If it's only a matter of keeping our earpiece away from the microphone, that seems like a very modest contribution to bolster the safety of our interpreters."

Conservative MP Ziad Aboultaif said he didn't recall the incident, but these injuries seem easy to prevent. "People need to use the recommended equipment, and that will solve the problem," he said.

So many interpreters were placed on injury leave in 2022 that the public service hired contract workers to make up for the staff shortages. The shortage has helped constrain committee travel, since a certain number of interpreters are required to ensure MPs' meetings abroad can be conducted in both official languages.

Last year, the Labour Program found Ottawa was breaking labour laws by not adequately protecting interpreters, following an October 2022 incident in which a parliamentary interpreter was sent to the hospital in an ambulance after experiencing acoustic shock during a Senate committee meeting.

The union had argued the Translation Bureau was not adequately protecting employees who are working in hybrid settings, where people appearing virtually are using substandard devices in breach of committee rules. At the Senate committee in question, someone was allowed to testify without any headset. Officials have said that parliamentary interpreters can suspend their services if someone appearing virtually is not wearing a headset that appears on a list of approved devices. 
People have repeatedly ignored the instructions to use an approved device during parliamentary hearings and press conferences. This report by The Canadian Press was first published April 29, 2024. Dylan Robertson, The Canadian Press" #metaglossia_mundus
EP – A multi-annual work programme drawn up by the European Parliament to meet its communication needs: grants to highlight the importance of democratic decisions. #metaglossia_mundus
By Chyung Eun-ju and Joel Cho

"There is no doubt that, in the fast-paced market of technological innovation, a multitude of AI-driven translation tools are undergoing swift development. Recently, SK Telecom introduced TransTalker, an AI-powered translation program capable of providing real-time interpretation in 13 languages. The Samsung Galaxy S24's Live Translate feature contributes to breaking down language barriers by using large language models capable of understanding, interpreting and generating text that mimics human language across a wide range of languages and contexts.

Breaking language barriers and facilitating communication through AI can bring huge benefits to many, but we should not ignore the impact of LLMs. Of the roughly 7,000 languages used worldwide, a significant number are at risk of extinction, leading to a gradual decline every year. The United Nations states that an Indigenous language vanishes every two weeks. Languages like Hawaiian, Quechua and Potawatomi are among those already at critical risk of extinction due to factors like globalization, migration and cultural homogenization. At present, roughly nine languages disappear each year. However, LLMs could significantly accelerate this rate of extinction.

The proliferation of the internet, combined with years of globalization that pushed for the standardization of the English language, made it the global language for business, politics, science, sports and entertainment. Interestingly enough, even though more than half of all websites are in English, over 80 percent of people worldwide don't speak it.

Language development represents one of the most crucial intellectual leaps in human history. It empowers us to generate thoughts, share them with others, think in abstract terms and construct intricate concepts about the world and its possibilities, fostering their progression across generations and geographies. 
Without language, much of modern civilization would be unattainable. Yet this issue extends beyond language alone. If the majority of languages vanish within a few generations, it would cause a collapse in the diversity of thought and identity. Since language and the mind influence each other reciprocally, the loss of languages implies the loss of distinct ways of thinking and experiencing the world. Language plays a key role in structuring, organizing and processing information. The languages we use affect how we see the world, how we make memories, the choices we make, the emotions we experience and the knowledge we gather.

Not only that, but language also serves as a powerful medium for cultural expression and personal identity. This notion was vividly depicted in the bilingual Korean-English film "Past Lives." The movie shows how a person is deeply linked with the language they speak, and how different languages can convey different sides of an individual. Through its exploration of the untranslatable Korean term "inyeon" — a term linked with a romanticized concept of eternal love — the movie delves deep into the complexities of communication and the cultural significance of language.

Mustafa Suleyman, the CEO of Microsoft AI, suggested in a TED talk that we think of AI as a kind of digital species. AI isn't biological in any traditional sense, but these systems speak in human languages, understand our visuals, process enormous volumes of data, have memory, exhibit personality, show creativity, reason to a certain extent and even make basic plans. He stated that to say AI is mainly about math or code is like saying humans are primarily about carbon and water.

So, as much as AI translation tools are in fact facilitating communication on a massively global scale, we should not forget the anthropological value of human language and how much invaluable history the diversity of language holds for humanity. 
Chyung Eun-ju (ejchyung@snu.ac.kr) is studying for a master's degree in marketing at Seoul National University. Her research focuses on digital assets and the metaverse. Joel Cho (joelywcho@gmail.com) is a practicing lawyer specializing in IP and digital law." #metaglossia_mundus
"The difficulties of translation are well known, but they are multiplied tenfold when a legal document is involved (legislation, an individual instrument, a judgment or a witness statement). With:
- Sylvie Monjean-Decaudin, professor at the Faculty of Applied Foreign Languages of Sorbonne University, director and founder of the research centre in juritraductology (legal translation studies)
- Nejmeddine Khalfallah, senior lecturer at the University of Lorraine, specialising in Arabic legal terminology.
La responsabilité qui pèse sur le traducteur est encore plus grande car le sens qu’il est chargé de transférer d’un texte à l’autre oblige plus que tout autre, qu’il est performatif et peut avoir à ce titre un impact énorme dans la vie concrète. Une telle responsabilité devient majeure à l’heure de la mondialisation et de l’intensification des échanges commerciaux et humains ; et elle est encore plus cruciale lorsque les tensions internationales montent comme aujourd’hui. Ce n’est probablement pas un hasard, si Ricœur abordait la traduction en termes de justice. Ce rôle est pourtant un peu minoré, nombre d’acteurs du droit estimant que tout terme juridique a un correspondant exact dans une autre langue et la traduction n’étant qu’une opération technique. Qu’ils se détrompent, car le droit contient tout autant, voire plus, de culture que les autres activités d’une nation, et une culture dont les nationaux eux-mêmes ne sont pas conscients. D’où la naissance d’une nouvelle discipline dont il sera question ce soir : la juritraductologie. C’est dire l’intérêt à aborder les enjeux du champ de la traduction juridique qui est de surcroît bouleversé, on l’imagine, par l’arrivée des machines et de l’intelligence artificielle, avec deux spécialistes, Sylvie Monjean-Decaudin, Professeure à l’UFR de Langues étrangères appliquées de Sorbonne-Université, qui a fondé le Centre de recherches en juritraductologie, et, est l’auteure de Traité de juritraductologie, épistémologie et méthodologie de la traduction juridique (Presses Universitaires du Septentrion), et Nejmeddine Khalfallah, Maître de conférence à l’Université de Lorraine, spécialisé en terminologie juridique arabe, qui s’intéresse à la néologie dans les langues arabes et a dirigé un livre avec Hoda Moucannas : La traduction juridique. Prudence et imprudence du traducteur (Éditions Classiques Garnier,)" #metaglossia_mundus
Enrollment now open for the Professional Certificate in Translation in Melbourne. In just one semester, you get the basics, the current technologies, and preparation for certification. Available for all languages. This course is available to domestic students only. https://study.unimelb.edu.au/find/courses/graduate/professional-certificate-in-translation/ "The Professional Certificate in Translation builds a solid foundation in cross-cultural communication for further study or research. Duration: 6 months part time. Mode (location): on campus (Parkville). Fees: AUD $7,600 (2024 indicative first-year fee); Commonwealth supported places (CSPs) are not available. Entry pathways: special entry options and Access Melbourne are available. A Professional Certificate in Translation will enable you to build your translation skills, whether for personal development or for professional work. You will be able to prepare for certification by the National Accreditation Authority for Translators and Interpreters (NAATI). Your translation skills will be an asset in many professions such as international relations, marketing, language teaching or health communication. Completion of the Professional Certificate may also be a pathway to further language study." #metaglossia_mundus
"Death of Paul Auster: 'He is an immense writer. Years later, we still remember his books.' Paul Auster, a literary icon of New York whose work has been greatly appreciated in France for forty years, died on Tuesday. Our journalist Denis Cosnard answered your questions about the American writer.

We are now closing this chat devoted to Paul Auster. Thank you all for reading and taking part. See you soon!

Hello, I'm 15. Is Paul Auster a writer who can inspire a teenager like me? Thank you. (MatthiasMadrid)

Thank you for this excellent question. Of course Paul Auster is not reserved for older readers! That is precisely what is prodigious about his books: his sentences are limpid and very fluid, he does not reach for complicated words, and his stories are so strong that you are quickly swept along. Some of his texts even read like fables. They are easy to read, and yet extremely rich. Even when the story takes place in a small corner of New York, the subjects (identity, chance, pain, sadness, and so on) are universal and can speak to readers of all ages. The result is sometimes impressively virtuosic, as in 4 3 2 1, in which he recounts the four possible lives of a certain Archie Ferguson. It is his longest text, more than 1,000 pages, but others are more approachable, such as Moon Palace or Mr Vertigo. Happy reading! (Denis Cosnard)

Could you explain why he had a particular bond with France? (Estelle)

From 1971 to 1974, Paul Auster lived in Paris and immersed himself in the French language and French literature. He translated, among others, Jacques Dupin, André Breton, Edmond Jabès, Stéphane Mallarmé, Henri Michaux and André du Bouchet. He was fascinated by Montaigne, 'the first to really examine his own being as a writer,' as he told the Magazine littéraire in 1995. Those very formative years in Paris definitively forged a strong bond between Paul Auster and France. Later, once he had become a writer, he maintained a privileged relationship with his great French publisher, Actes Sud. He had met its founder, Hubert Nyssen, during a trip Nyssen made to New York in the mid-1980s. This morning, Bertrand Py, editorial director of Actes Sud, stressed how the trust Paul Auster placed in the house proved 'as beneficial as it was decisive' in its history. (Denis Cosnard)

Did he do his own French translations? (I believe he translated Sartre for the United States.) (Thibaultpastibo)

Paul Auster did indeed speak and read French, but he did not translate his own books. Many were translated by Christine Le Bœuf, and more recently others by Anne-Laure Tissut. (Denis Cosnard)

Hello. It is a pleasure to exchange views on this immense writer, so New York and so French, like Sophie Calle for that matter. My question: was Paul Auster ever on the list for the Nobel Prize in Literature? (Fred)

Paul Auster was certainly among the authors considered by the members of the Swedish Academy for their famous prize. Every year they review dozens, even hundreds, of possible names. How far did Auster get in the selection? We will have to wait a few decades to find out: the Nobel opens its archives to researchers only after fifty years. (Denis Cosnard)

From reading Paul Auster, I remember walks through the city, tireless urban explorations of its settings, in the tradition of a Walter Benjamin. Did Paul Auster ever discuss his relationship with the city in general, beyond the fact that he was associated with New York and Brooklyn in particular? (Kraton)

Yes, Auster spoke a great deal about his very intimate relationship with the city, and with New York in particular. Notably in Gérard de Cortanze's very rich book Paul Auster's New York, published in the Livre de poche collection in 2004 (it is in French, despite what the title might suggest). (Denis Cosnard)

A small reading tip for those lucky enough not to have read him yet and about to discover him: The Book of Illusions. Thanks for the chat. (Bonjour)

Thank you for the chat, and thank you for this suggestion. The Book of Illusions, published in 2002, is indeed an excellent novel, less well known than others but truly worth reading or rereading. Especially today, for it is a formidable book about mourning. Its first sentence is simple, strong and devilishly Austerian: 'Everyone thought he was dead.' (Denis Cosnard)

Hello, wasn't he preparing a new novel? (Ax)

Paul Auster finished his last novel, Baumgartner, as he was beginning to suffer from fevers that were mysterious at the time; in reality, they were caused by an aggressive cancer, the one he died of on Tuesday, April 30. I fear he was unable to start anything afterwards. 'My health is too fragile: it will probably be the last book I will have written,' he told The Guardian in November 2023 while presenting Baumgartner. (Denis Cosnard)

Can existential solitude and the play of chance be considered the pivotal themes of his work? (Patricia)

You are quite right: solitude, chance and the games of fate are key themes of his work. They can be found in the titles of his books, such as The Invention of Solitude (1982) and The Music of Chance (1991). They are far from the only ones, however. The quest for identity, family secrets, New York and baseball also run through his work, as do memory, love, old age, mourning and rebuilding, the themes at the heart of his final, magnificent work of fiction, Baumgartner, published very recently. (Denis Cosnard)

Hello Denis Cosnard. Sophie Calle and Paul Auster formed artistic ties, I believe. Could you tell us more? (Muguet)

You are right: Paul Auster and Sophie Calle forged a lovely bond. A trace of it appears in the front matter of his novel Leviathan (1992), where Auster 'gives special thanks to Sophie Calle for allowing him to mingle fact with fiction.' One of the novel's characters, Maria, who appears over a dozen or so pages, is an artist clearly inspired by Sophie Calle: 'Some said she was a photographer, others called her a conceptualist, still others considered her a writer, but none of these descriptions fit, and all things considered, I think it was impossible to put her in any category.' Conversely, Sophie Calle also drew on Paul Auster for some of her own works, which were shown in an exhibition at the Centre national de la photographie in Paris entitled 'Doubles-Jeux', also the title of the book in which Sophie Calle recounts the experience. (Denis Cosnard)

Can you tell me why, thirty years on, I still remember The New York Trilogy and no longer look at Central Park the same way? (Nico)

What you describe speaks to Paul Auster's success and narrative power, to what makes him an immense writer. Years later, we still remember his books. That is obviously due to the very strong stories he stages, to his often extremely endearing characters, and to his way of hooking the reader from the very first lines. And, of course, to his gaze on New York, his city, which he described so well, in particular the Brooklyn neighborhood where he lived. New York appears in his novels, but also in his films. If you get the chance, watch Smoke: its hero stands every morning at eight o'clock at the corner of 3rd Street and 7th Avenue in Brooklyn and takes a photo, always from the same angle, and always with a different result... (Denis Cosnard)

Is his crime novel Squeeze Play (Fausse balle in French), published under the name Paul Benjamin, available in France? (JPC)

Yes indeed: this novel, initially rejected, was eventually published under the pseudonym Paul Benjamin (Auster's first and middle names). In France it appeared in Gallimard's famous 'Série noire' collection; you will find it as number 2295. (Denis Cosnard)

I remember a book (I no longer recall the title) in which the hero and narrator finds himself with nothing but boxes in an empty apartment and gradually slides toward homelessness. The description of his social descent, triggered by money problems, was very powerful. Did Paul Auster himself go through periods of renunciation and social withdrawal? (Nath)

Yes, that description does not come from nowhere. Without ever being homeless, Paul Auster went through very difficult times in his youth, including money problems. In the second half of the 1970s, after his return to the United States, he lived on very little. He wrote literary criticism and translated Stéphane Mallarmé, Jean-Paul Sartre and Georges Simenon, but his personal projects went nowhere. The crime novel about a former baseball star that he had written under a pseudonym had been rejected. Around the same time, his first marriage fell apart. At that point, he truly hit rock bottom. (Denis Cosnard)

Did Paul Auster take political positions (on his country's domestic politics) in recent times? (Mama)

Paul Auster was always a committed Democrat. His penultimate book, Bloodbath Nation (Pays de sang in French), is itself a form of political statement: a brilliant analysis of violence in the United States, in particular mass shootings, and an indictment of the free circulation of firearms. The book is illustrated with photos taken at the sites of the massacres. At the same time, Auster writes in it about his own intimate relationship with guns and about his family history. He recounts how his paternal grandmother murdered her husband in 1919, a family secret he discovered late in life. The murder weapon was hidden under the mattress of Paul Auster's father... (Denis Cosnard)

I have fond memories of the film Smoke, which he wrote. What were his ties to cinema? (Deuil)

You are right, Smoke is an excellent film, and Harvey Keitel is perfect in it. The film won the Silver Bear in Berlin in 1995. During his youth in Paris, Auster had tried, without success, to enter the Institut des hautes études cinématographiques. Once he had become a widely recognized writer, he finally fulfilled his dream of making films. He helped the director Wayne Wang shoot two films back to back, Smoke and Blue in the Face (Brooklyn Boogie in France), writing the screenplays for both. He then directed two features himself: Lulu on the Bridge in 1998 and The Inner Life of Martin Frost in 2007. (Denis Cosnard)

Which work should one start with to dive into Paul Auster's universe? (Pauster Auler)

Everything in Auster's œuvre is worth reading; there is nothing to throw away. To start, I can recommend, for example, Mr Vertigo, a particularly successful novel published in 1994. Its first sentence is brilliant: 'I was twelve years old the first time I walked on water.' You immediately want to know what happens next, and you do not stop before the last page of these adventures of a street kid adopted by Master Yehudi. (Denis Cosnard)

Hello. Since France held a special place for Paul Auster, was he as successful in other countries as in the US and France? (Je me sens un peu seul ce matin)

Hello 'Je me sens un peu seul ce matin' (me too!). The United States and France are, I believe, the two countries where he enjoyed the greatest success, France in particular, where he lived in his youth and where his entire œuvre has been translated, mostly by Actes Sud. Paul Auster indeed spoke of France as his 'second country.' In 1993, the Prix Médicis étranger awarded to his novel Leviathan was an important consecration for him. But he has also been translated into many other languages. (Denis Cosnard)

Thank you for this first question, which reminds us that Paul Auster was also, first of all, a great reader of poetry and a poet himself. In his youth, during his stays in Paris, he met poets such as Jacques Dupin and André du Bouchet, who left their mark on him. He translated French poetry into English and published half a dozen collections of his own poems. That love of words and of their sound can, to my mind, also be felt in his novels, where he uses a very simple, beautiful, at times highly poetic language. Perhaps that, in the end, is the Auster spell: a magnificent language and a powerful art of storytelling.

An immense American writer. Poet, critic, novelist, essayist and screenwriter, he became famous with his New York stories peopled with marginal, disoriented characters. A master of the art of narrative, he died at his Brooklyn home on the evening of Tuesday, April 30.

The context. Cover image: Paul Auster in Lyon, January 16, 2018. JEFF PACHOUD / AFP
- Paul Auster, the prolific American author of novels, poems and films propelled onto the international literary scene by his New York Trilogy, died of complications from lung cancer at the age of 77 on Tuesday, April 30, at his home in Brooklyn.
- Born in 1947 in the state of New Jersey, Paul Auster became a literary icon of New York. The author of some thirty books, he has been translated into more than forty languages. A descendant of Ashkenazi Jews, he studied French, Italian and British literature at Columbia University in New York. After his studies, he lived in Paris from 1971 to 1975 and translated French poets. The inheritance left by his father, who died in 1979, allowed him to devote himself to writing.
- He made his name in 1982 with The Invention of Solitude, an autobiographical book in which he tries to grasp his father's personality. The novelist broke through in 1987, notably in Europe, with his New York Trilogy, noir fiction drawing on the detective genre. A writer revered in France, a country he considered his 'second country,' he received the Prix Médicis étranger for Leviathan in 1993.
- Put your questions on Paul Auster's career and work to Denis Cosnard, journalist at 'Le Monde des livres.' He will answer them from 9:45 a.m."
#metaglossia_mundus
"Paul Auster, the prolific American author of novels, poems and films propelled onto the international literary scene by his New York Trilogy, died of complications from lung cancer at the age of 77, the New York Times announced Tuesday. Paul Auster died at his home in Brooklyn, New York, in the United States, the paper said, citing a friend of the novelist, Jacki Lyden. The most Francophile of American writers was the author of some twenty novels, including Leviathan, The Book of Illusions and 4 3 2 1, as well as essays, poetry collections and screenplays (Smoke, Blue in the Face). His latest book, Baumgartner, was published in France in March. He rose to fame in the 1980s 'with his postmodern reanimation of the noir novel,' the Times writes. Forever associated with the Big Apple, he was in fact born in New Jersey, on the other side of the Hudson River. The paper also recalls that he was particularly appreciated in France, where the writer had lived as a young man. He had become 'one of those rare American imports the French embraced like a son.' In the 1990s, Auster began a film career, writing the screenplays for Smoke and Blue in the Face (Brooklyn Boogie), two films directed by Wayne Wang, before directing Lulu on the Bridge himself in 1998." #metaglossia_mundus
April 30, 2024 "Said Bennis's essay on linguistic plurality and linguistic pluralism addresses the stakes of language policy in Morocco through the articulation between linguistic plurality and plurilingualism, according to a press release presenting the book. For the author, linguistic plurality refers to the natural state of coexistence and use of several languages within a single territory, in this case Morocco. Linguistic pluralism, for its part, refers to the outcomes of language policies designed to manage that plurality within a territory. These outcomes can reflect differing political visions of how linguistic plurality should be governed, notably with regard to the status of languages (official language, national language, regional language, territorial language, minority language, community language, etc.) but also with regard to the linguistic landscape, environment and market (monolingualism, bilingualism, linguistic territoriality, linguistic regionalization, etc.). In this respect, the importance of Said Bennis's essay lies in its examination of the possible scenarios for the outputs of a language policy capable of consolidating linguistic plurality and cultural diversity in Morocco, in keeping with the provisions of the 2011 constitution, which granted official-language status to Amazigh, thereby instituting official bilingualism (Arabic/Amazigh), alongside the creation of the Conseil National des Langues et de la Culture Marocaine (CNLCM). To meet the challenges of this new linguistic situation, Said Bennis develops his reflection on two levels: a diagnostic level (three chapters) and a level devoted to the viable outcomes of a language policy appropriate to the Moroccan case (three chapters).

On the first level, Said Bennis sets out the theoretical, conceptual and empirical apparatus (unilingualism, plurilingualism, linguistic regionalization, linguistic security, globalization and linguistic discrimination) needed to understand the nature of linguistic plurality in Morocco, across the first three chapters, which deal respectively with the inputs of plurality and pluralism, with the linguistic environment in Morocco, and with the Amazigh language and societal dynamics. On the second level, Said Bennis presents and discusses a good number of orientations, perspectives and principles that the awaited language policy in Morocco could take into account, notably the good governance of linguistic plurality, identity and linguistic harmonization, the institutionalization of official bilingualism (Arabic-Amazigh), the principle of territoriality, the principle of personality, linguistic regionalization, languages of schooling, the dangers of dialectization, linguistic identity, linguistic hybridity, the urgency of the English language, and more. These perspectives are analyzed at length in the last three chapters, which address the configurations of language policy in Morocco, the challenges of living together in Morocco, and the future stakes of language policy in Morocco. Article19.ma" #metaglossia_mundus
"One dies every two weeks, says UNESCO Brock University experts are hoping to shine light on the alarming rate at which the world’s languages are disappearing — and on the importance of preserving them. Brock Associate Professor of Modern Languages, Literatures and Cultures Jean Ntakirutimana, who speaks the central-east African languages of Kirundi, Kiswahili and Kinyarwanda as well as French and English, calls the disappearance of languages a “big loss for humanity,” as each language is an intergenerational “database of knowledge.” “Each culture has a way of understanding, and expressing, the world around them: the fauna, flora, the people, everything,” he says. “We learn how to interact with the environment around us — how to maintain and take care of it and how that environment can be beneficial for us.” Ntakirutimana notes how maps showing endangered ecosystems line up with maps where languages are in various stages of becoming extinct. Replacing local languages with dominant, foreign languages can cause confusion, Ntakirutimana says. He recalls being taught in elementary school about the four seasons in French (printemps, été, automne and hiver) — concepts he couldn’t grasp well as a child because Burundi has only a rainy and a dry season. Despite the challenges, Ntakirutimana sees signs of a linguistic revival in Africa and elsewhere. Communications technologies are evolving and increasingly connecting people who speak the same language, he says, and new languages are evolving, especially among youth. Sherri Vansickle, Assistant Professor in Brock’s Indigenous Educational Studies program, who is from Onondaga Nation, Eel Clan, says that, along with celebrating Mother Language Day, there is also a grief process for Indigenous communities mourning their language loss. “Many Indigenous people in Canada don’t speak their mother language because of Indian Residential Schools,” she says. 
“Every time a language speaker passes on, it’s like losing a library to the Indigenous community.” Vansickle says many residential school survivors would not teach Indigenous languages to their grandchildren as a way of protecting the next generation from the harms they endured themselves. Through the Truth and Reconciliation Commission (TRC) and its calls to action, Vansickle says there is a renewed hope Indigenous languages will be preserved. She notes that the Mohawk, Cayuga and Nishnawbe languages are taught at Brock University. “As educators, we make sure to always use Indigenous words: Skén:nen (peace), Kariwiio (power) and Kasastensera (righteousness),” she says. “Transmitting these foundational concepts to students gives us hope that future generations are learning about the formation of the confederacy.” For Associate Professor of Applied Linguistics Lynn Dempsey, educating children early in their mother tongue forms a strong base for subsequent language and brain development. Knowledge of grammar, sounds and conversational and storytelling skills children acquire in their mother tongue will transfer to other languages such as English, especially when the mother language has a similar sound system to the second language, she says. “It’s not just spoken language skills that transfer,” says Dempsey. “Research shows that skills in the mother language transfer to reading and writing in a second language. For example, early experience with books in the first language predicts reading comprehension in a second language later on.” Bilingual children seem to be better at “executive functions,” or switching attention between different aspects of a task, says Dempsey, adding they are “better at tasks that involve ‘tuning in’ to the sounds of language. “For example, they can count the number of sounds in a word better than monolingual children. Being able to ‘tune in’ to speech sounds helps children learn to read,” she says." #metaglossia_mundus
May 1, 2024 "For international parents, choosing where to educate their children is one of the most critical decisions they face. Those on short-term stays may want an English-language or international school, while those who are here for the long haul may decide to go native in the Dutch education system. At Winford Bilingual primary schools—in Amsterdam, Haarlem and The Hague—such choices are academic because they are the only truly dual language primary schools in the country. Winford has developed its own integrated English-Dutch curriculum and students learn everything from critical thinking to decision-making in two languages from day one to graduation. “All of our classrooms are multi-level and dual language,” says Joy Otto, programme director of Winford Bilingual Amsterdam and coordinator for all Winford Bilingual schools. “What is very unique about us is that we have two classrooms in one. Every classroom has two native speaking teachers, a Dutch teacher and an English teacher. “We really look more at the whole child here. We don’t group children just by their age,” says Joy. “As a small school, we have the luxury of taking 30 days to get to know each child, and then we place them where we believe they will thrive. Because I really believe that every child has a gift, and it’s our job to figure out what that gift is and to help them level up on their challenges.” The Hague Come this May, Winford Bilingual will open a new school in The Hague with a brand-new programme featuring its celebrated bilingual curriculum but also with an option for students to move to a strictly Dutch curriculum from age nine. The dual pathway empowers students to make informed decisions about their educational journey, preparing them for the next steps—which is what Winford Bilingual is all about. You can visit Winford Bilingual The Hague on June 1, when the school will hold an open house. Personal tours at all three locations are possible. 
Blue and Red At Winford Bilingual Amsterdam on a quiet, leafy street near the Museumplein, some 70 students fill the sunny and welcoming classrooms in a grand townhouse. At first glance, the classrooms look much like those you’d find in any other private school. But on closer inspection, you can see the Winford difference: each class is divided into a blue and a red section—the blue section for English, the red Dutch-only. “We assign language by person, location, and colour,” says Joy. “So, when a child is speaking with an English teacher, they must speak English. When they are sitting in the blue section, they also know they must speak English. And when they go to the other side of the room and it’s red, they know they need to speak Dutch. It’s very clear to them.” “It’s surprising how easily they pick up any language that they haven’t mastered yet,” says Willem van Hoof, teacher of the 4-, 5- and 6-year-olds in the school’s Panda group and assistant director of Winford Amsterdam. Julia Bormann, the English teacher for the Eagle class of older children aged 9 to 12, agrees. “It’s amazing to see how the kids can switch from one language to the other, and they know exactly when it’s transition time and when they have to switch their brains from English to Dutch or the other way around,” she says. “It’s also really nice to see how the different age levels work together. The older ones help the smaller ones, or the other way around, and everyone’s accepted, everyone’s welcome.” “For example, now we’re talking about dinosaurs,” says Willem. “And even though we don’t teach the same exact lesson in both Dutch and English, we do touch on the same subjects, such as fossils and the eras when the dinosaurs lived. It may be on different days, but it’s within the same week, and we cover the same vocabulary. 
The key is communication.” Smart investment Another aspect that sets Winford apart from other private schools is its unique curriculum, based on a combination of the Dutch, British and International Primary Curriculum (IPC). Students therefore have a choice of secondary schools. They’re ready to excel in Dutch schools (where they’re automatically enrolled in the lottery system), or they can go to international schools, whether British or American. “I feel very strongly that parents pay a huge investment when they give us their children, and I want to make sure they get a good return on that investment,” says Joy. “Investing in your children now will get returns, because parents are setting them up for two areas of success from the start. Our promise is that if you come to us (at a young age), when you leave, you’ll be fluent in both languages.” Joy also emphasises that Winford is not an international school. “We’re a Dutch private school,” she says. “Most families choose us because they want their child to be integrated into the Dutch system. But life happens, so they’re not 100% sure if they’ll stay here all the way through middle or high school. They want to have that choice.” At Winford Amsterdam, some 40-45% of the families are Dutch, meaning they have one Dutch-speaking parent and speak Dutch at home. Another 40%, who Joy refers to as “transplants,” are from within the EU, while the rest hail from Britain and the US. Jenni Iyoyo is responsible for marketing and admissions at all three of Winford’s bilingual schools. She says the parents reaching out to her are looking for smaller classes that provide more attention to their child. “They believe their child could do so much more if they just had the right attention,” she says. “They want to give them that chance. 
They also love that we teach to the child’s ability, because we have mixed abilities in every classroom.” That’s a huge selling point for many Dutch families, who sometimes feel their kids get left behind in traditional Dutch schools if they’re either too advanced or have learning challenges. Flexible vacations and hot lunches Anyone who’s ever had a child enrolled in a Dutch public school knows how strict they are regarding vacations and the fines that can be levied if you’re caught breaking the rules. But at Winford Bilingual, holiday time is flexible, meaning parents can take their kids out of school when it conveniently fits their own family schedule, something that even international schools in the Netherlands can’t offer. “Because we came in on a pilot programme, we are able to provide flexible holidays,” explains Joy. “Parents really love that, because they can spend extra time with their family and avoid all the expensive Dutch vacations.” Another unique selling point of these boutique schools is their hot lunches, made and prepared on-site and served in dining-room style. In Amsterdam, cook Anna serves up such vegetarian favourites as mozzarella and pasta bakes, vegetable biryani and Thai sweet potato and coconut soup. “It smells so good that sometimes we get distracted from working!” says Joy, whose office with Jenni is next to the dining room. For more information about all Winford Bilingual primary schools, visit the school’s website." #metaglossia_mundus
CHIJIOKE OKORIE, VUKOSI MARIVATE

Summary: AI producers need to better consider the communities directly or indirectly providing the data used in AI development. Case studies explore tensions in reconciling the need for open and representative data while preserving community agency.

INTRODUCTION

As an ideal or a practice, openness in artificial intelligence (AI) involves sharing, transparency, reusability, and extensibility that can enable third parties to access, use, and reuse data and to deploy and build upon existing AI models. This includes access to developed datasets and AI models for purposes of auditing and oversight, which can help to establish trust and accountability in AI when done well.

Certain common sayings in African languages encapsulate how issues of agency and community ownership are implicated or threatened when openness is embraced in a bid to include Africa and other parts of the Global South in discussions about the responsible use and development of AI. In the Igbo and Setswana languages, these sayings include expressions that speak to how discussions about taking (or bringing) often revolve around other people’s property. The Igbo saying wete wete ka nma n’akpa onye ozo means that people always recommend the sharing of property when such property is not theirs.1 In Setswana, the saying pelo e senang phufa, selo e a be e se sa yona essentially means that it is easy to misuse, abuse, or care less about property that does not belong to you, because if something is yours, you will certainly see to it that it is taken care of.

Actors in the Global North have been the primary drivers of discussions about responsible AI, and they have focused such discussions on concepts like openness, privacy, and copyright protections.
However, in recent times, there have been increased efforts to amplify perspectives from underrepresented and/or unrepresented jurisdictions, including ones in the Global South, so they can help shape discussions about responsible AI use and development. Within this atmosphere of inclusion (referred to here as the Global South inclusion project), openness, privacy, and copyright have continued to feature as important and indispensable considerations.

Chijioke Okorie is the principal investigator and leader of the Data Science Law Lab, a research group at the University of Pretoria that deploys research in law and produces evidence and policy advice to support the growth of data science research across Africa. Chijioke is an Africa correspondent at The IPKat blog, associate editor of the South African Intellectual Property Law Journal, and the author of several articles on intellectual property and information justice issues in Africa.

In the context of digital technology and software, the Global South inclusion project has often been underpinned by a requirement of openness. The intention has been to promote broader access and to address and/or sidestep privacy and copyright issues arising from both the data needed to build AI systems and the datasets that are one outcome of building and using such systems.2 Essentially, the Global South inclusion project benefits from pushing for openness because in many instances, once data has been made open, it allows users of such data to sidestep privacy and copyright issues. However, there are more factors related to the Global South inclusion project to consider and grapple with.

First, builders of AI systems need to give greater consideration to the communities directly or indirectly providing the data used in commercial and noncommercial settings for AI development.
These communities may include owners of traditional cultural expressions and traditional knowledge; data scientists and AI developers from African countries working on data collection, collation, curation, and annotation; linguists working on African languages; and users who provide or upload content (data) on African languages and practices on social media and other internet platforms.3

However, while openness in developing and deploying AI models offers transparency and shared learning, it can sometimes conflict with privacy and proprietary rights. By contrast, closed models prioritize proprietary information but can limit shared innovation. Furthermore, fair use and representation—meaning fair and equitable access to and use of data and the inclusion of critical ethical considerations specific to diverse groups of people or contexts related to AI development and governance—are vital in AI, especially for those in the Global South.4 Ensuring that AI is used ethically and represents diverse populations can help improve fairness and minimize bias. This requires collective efforts, considering the broader impacts on society and on the individuals who contribute data.

The act of resharing data, while crucial for collaborative innovation and improvement, complicates dialogue about the privacy, copyright, ownership, and commercialization of such data. Perhaps as a result of the mistaken view that countries in the Global South are monolithic, these dynamics are often overlooked in the framing of the Global South inclusion project. But they underscore a complex ecosystem where data is not just a resource but a bridge connecting diverse communities, each with distinct, often conflicting, interests and concerns.

Vukosi Marivate is an associate professor at the University of Pretoria, where he also holds the prestigious ABSA UP Chair of Data Science.
His expertise lies in the fields of machine learning (ML) and artificial intelligence (AI), with a particular focus on natural language processing (NLP) and the development of solutions for local or low-resource languages. He is a co-founder of Lelapa AI, the Masakhane Research Foundation, and the Deep Learning Indaba.

To highlight these interests and concerns, this work features a study of the African natural language processing (NLP) community, presenting insights from the work of the Masakhane Research Foundation (a distributed research organization with the mission to advance African NLP),5 Ghana NLP (an open-source initiative focused on NLP involving Ghanaian languages),6 and KenCorpus (a community-driven project to create large Kenyan language datasets).7 The experiences of this community help to ground the practical trade-offs and challenges that arise in this discipline.

THE DEVELOPMENT OF AFRICAN NLP

Language data representing the wide variety of spoken African languages is scarce. A continent-spanning community is emerging to address this digital data scarcity, a community composed primarily of African AI and NLP researchers interested in applying AI to solve problems prevalent on the African continent. These researchers rely heavily on the use, resharing, and reuse of African language- and context-focused data (that is, openness) to fuel their innovations, analysis, and developments in AI.

In the Global South, but particularly on the African continent, state institutions, public bodies (such as state broadcasters), and private organizations (such as commercial news services) are pivotal repositories of valuable data about local languages.8 These entities often express concerns about the potential commercial viability of the data they hold.
For instance, in response to requests from data science researchers to use local language data from South Africa’s public service broadcaster, it was suggested that such data, when used to train NLP models, could be commercialized and that, therefore, prior licensing arrangements must be made.9 There is a struggle with the delicate balance of preserving the integrity and proprietary rights of the data (such as copyright protections) while acknowledging the necessity of accessibility to data, especially data regarding African contexts and other parts of the Global South.

The African NLP and AI research community referenced in this work—Masakhane, Ghana NLP, and KenCorpus—is caught in a tough spot, juggling the need to access and share data with legal rules that protect the privacy and ownership of data. Laws like copyright and data protection can sometimes limit the sharing of information needed for innovation. There is widespread recognition that, while these laws are essential for keeping data proprietary, secure, and private,10 such laws can also make it challenging for professionals to access the data they need for their work.11 As such, a demonstration of openness—meaning the waiver or nonretention of (some) proprietary rights as a practice—is a necessary and viable practice for counteracting these restrictions and addressing these challenges.12

Yet even though openness can help address copyright and privacy concerns, the idea that such openness is a panacea for Global South inclusion and for collaborative innovation and improvement ignores the threats that openness may present to the agency and community ownership of affected stakeholders in the Global South.13 This article also explores questions about whether existing copyright or privacy frameworks are sufficient to capture the issues of agency and community ownership that are implicated or threatened when openness is embraced.
THE AFRICAN COMMUNITY LANDSCAPE OF AI DEVELOPMENT

The proliferation of grassroots AI organizations across Africa directly addresses the gaps left by the swift advancement of AI on the continent, which other actors have largely been driving. Initiatives like Masakhane, which seeks to strengthen African NLP by and for Africans, recognize that impending innovations could sideline African languages while enabling inaccurate or suboptimal models to permeate the region. For example, many multilingual models claim to support African languages but are not fit for purpose for the communities they claim to serve. Incorrect translations may have life-changing effects depending on how they are used.14 This challenge has already materialized in the content moderation of online services.15

As with most AI systems, high-quality data is essential for developing NLP tools and systems, yet many African languages lack robust digital resources and are considered low-resource.16 Low-resource languages lack large monolingual or parallel corpora (collections of linguistic data in the form of written text or transcriptions of recorded speech) and/or manually crafted linguistic resources sufficient for building statistical NLP applications.

Data about African languages and culture bridges connections between diverse disciplines working to advance languages. Linguists collect corpora to study languages, while community archivists document languages and culture. Journalists communicate with readers while trying to capture their perspectives. And AI researchers use data to build models. Without cross-disciplinary and cross-domain collaboration (between the areas of linguistics, journalism, and AI), communities may lose their ability to guide how their languages progress amid the AI revolution. More open communication channels between communities, researchers, private actors, and government actors are imperative for articulating societal needs and priorities in an evolving technological landscape.
Communities of AI researchers with limited access to financial resources face inherent challenges in generating the data necessary for AI development. This data scarcity particularly impacts linguistic diversity, as the effects of colonialism and global power structures often sideline under-resourced languages even when they have millions of speakers. For example, most chatbots are built on high-resourced languages such as English because of the availability of data in those languages, sidelining access for people who can only speak, read, or write in an African language.

To overcome this divide, grassroots NLP collectives leverage collaborative social and human capital rather than financial means. Initiatives like Masakhane, which has amassed a network of more than 2,000 African researchers actively engaged in publishing research, and the KenCorpus project unite researchers to elevate local languages. By embracing the principles of openness and transparency in sharing experiences, data, code, and resources, these communities are making remarkable strides. Masakhane, for example, has been recognized for its impact on democratizing the internet,17 while GhanaNLP’s Khaya app (which translates Ghanaian languages) has thousands of users,18 and KenCorpus has now been downloaded more than 500,000 times.19 Their approach also emphasizes participatory methodologies, with many coauthors contributing to these collective efforts.

Importantly, grassroots groups make data findable, accessible, interoperable, and reusable (principles that together constitute the FAIR framework) with guidance from allies.20 In driving their languages forward on their own terms, grassroots NLP groups demonstrate that barriers to access can be overcome through inclusive cooperation and innovation.
With sustainability in mind, groups such as Ghana NLP have adopted a model in which some of their tools are offered under commercial access terms while, at the same time, they contribute where they can to the open resources available to all researchers. In recognition of and support for these approaches, funders of NLP and AI data projects in the Global South should proceed from the understanding that providing financial support must serve the public good by encouraging responsible data practices.

The examples above illustrate a vital need for open collaboration and knowledge sharing to harness collective human capital while building social capital within grassroots AI movements. However, a complex tension emerges when copyright restrictions limit access to existing language resources or bar the open distribution of resulting tools and models. Tensions may equally arise when AI researchers in the Global South face pressure to adopt no copyright restrictions in distributing or making available tools and models from their work.

Some stakeholders prioritize financial incentives or control over linguistic assets they have developed. For example, the decision by the copyright holder of the JW300 dataset—which contains translations of biblical texts in more than 300 languages—to remove this rich dataset from the public domain has had a major impact on NLP development for African languages.21 Others, such as grassroots groups, may be concerned about ensuring that local communities benefit as directly as possible from linguistic and other assets developed from data about these communities. Strict proprietary limitations can severely curb the progress of these stakeholder groups in terms of the Global South inclusion project. There is an urgent need to align priorities among creators seeking reasonable returns and communities enabling access, so that more people can equitably build technologies preserving the intangible cultural heritages tied to certain languages.
With compromises valuing both open innovation and systems to recognize and reward the contributions of local communities, it is possible to formulate policies nurturing advancement rather than slowing progress through legal constraints. Ultimately, ethical frameworks should promote using language technology in ways that put the public good above profits.

TENSIONS BETWEEN OPENNESS AND AGENCY IN AFRICAN AI DEVELOPMENT

“Wete wete ka nma n’akpa onye ozo.”—“Bring [this], bring [that] is an easy request if it is from someone else’s [not the person asking] bag/pocket.”
“Pelo e senang phufa, selo e a be e se sa yona.”—“It is easy to misuse/abuse or care less about property that does not belong to you, for if something is yours, you will certainly see to it that it is taken care of.”

As indicated earlier, there are several aspects of agency and community ownership that are implicated or threatened when the norm of openness is embraced in relation to the Global South inclusion project. In the context of AI, data openness speaks to mechanisms that allow or require access to data and underlying metadata to be free (no cost) and free from restrictions that could make such data inaccessible. Enforcing, imposing, or according copyright protections to such data could impose restrictions on accessibility, given the exclusive nature of such protections. And from the perspective of privacy rights, there is a need to secure private information from public exposure. Framed in this way, these two regimes—one prioritizing copyright protections, and one prioritizing privacy rights—could be justifiably opposed to practices that make data accessible, sometimes at no cost.

For example, in data donation projects, various persons contribute or donate voice data, which could qualify as personal information, to a platform or database. Such a database could be the subject of copyright protection, but some of its contents are considered personal information and are therefore the subject of privacy rights. Openness as a practice seeks to address these accessibility issues in part through licensing mechanisms that do not assert copyright protections or restrictions to data.
Openness also facilitates the findability, accessibility, interoperability, and reusability of data—the full FAIR spectrum.22 However, while the experiences of Masakhane, Ghana NLP, and KenCorpus as shared above offer evidence regarding the benefits of openness as a practice, there is also recognition (and evidence) that openness needs to be nuanced for specific contexts and should also account, for example, for community rights and agency in relation to data ownership and use.

Current approaches to openness among the community of African AI researchers as highlighted above involve the use of open licensing regimes that have a viral nature. The very fact of using or reusing these datasets means consenting to the proprietary nature of the data and the other terms upon which the data is made available. Some of these terms may mean that, while the proprietary nature of the data is acknowledged, such proprietary rights are given up in their entirety. This is the case, for example, when licenses such as Creative Commons’ CC0 are used. The CC0 license, designed by the Creative Commons movement, seeks to enable creators and owners of copyright- or database-protected content to waive those interests in their works and thereby place them as completely as possible in the public domain, so that others may freely build upon, enhance, and reuse the works for any purposes without restriction under copyright or database law.23

Sometimes, the terms of such approaches to openness treat data sources—including African AI researchers, communities with traditional cultural expressions and traditional knowledge, state institutions, public bodies (including state broadcasters), and private organizations (including commercial news services)—and their needs and interests as though they were the same.
While these communities are—and in the case of the grassroots community of African AI researchers, have become—pivotal repositories of valuable local language data, their needs and interests may vary. This is an important issue. Access to data on African languages has proven difficult over the years, leading to the birth of the African NLP communities highlighted above. However, using data scarcity as the major reason to adopt existing forms of openness could have unintended consequences by solving one problem and leaving other problems unsolved. It is necessary to take a holistic look at the continent’s context and to consider the consequences for communities such as African NLP researchers and their need for African language data, indigenous communities, and the users who generate data about traditional cultural expressions.

There is an inventor’s paradox to address here. In this case, the inventor’s paradox means that it would be better to solve the whole problem rather than just the smaller issue of data access. Essentially, addressing the full language development problem might prove easier in the long run than addressing just the narrow one of data access. AI innovation as it has been defined to date has tended to sideline African languages. For instance, the low-resourced state of African languages consequently leads to fewer AI products, services, and tools made for the African context. The grassroots movement is responding and seeking to counter this trend by authorizing the reproduction, reuse, and dissemination of local language data.

Given these objectives, the seemingly available choices of open licensing regimes for the community of AI researchers become quite narrow. This community tends to focus on licensing regimes that allow free distribution, the making of derivative works (meaning reuse in the same or different environments), and attribution.
On the other hand, communities focused on the commercial viability of the local language data in their custody would prefer a licensing regime that, while being open and permitting free access, leaves room for commercialization wherever feasible. Currently, the Creative Commons Attribution-NonCommercial license—which requires re-users of a given material to give credit to the creator and also allows such re-users to distribute, remix, adapt, and build upon the material in any medium or format, for non-commercial purposes only—is intended to leave room for commercialization. However, the extent to which commercialization is feasible is questionable, particularly for materials such as data that may be hard to track once released and used as training data or in NLP/AI models.

For individuals who are members of a community recognized for specific cultural practices embodying valuable traditional cultural expressions, a preferred openness approach would be one that does not permit commercialization that excludes them from tools developed from the use of their local language data.24 In essence, depending on the approach to openness that is embraced, the agency and autonomy of some of these communities to propose alternatives may be significantly diminished.

The Masakhane initiative offers an appropriate example. The MakerereNLP project involved the delivery of open, accessible, and high-quality text and speech datasets for East African languages from Uganda, Tanzania, and Kenya. The datasets comprised corpora and speech datasets obtained from various sources, including free, crowdsourced voice contributions. These datasets were licensed under a Creative Commons BY-SA license, which entails giving credit to the creator. Under this license, the dataset can be used for any purpose, including commercial purposes, and adaptations or derivative data outputs must be shared under identical terms.
In essence, the license allows commercial uses, which may lead to products derived from the datasets being sold for a fee to the communities who contributed voice data for free. Conversely, a commercial enterprise may feel constrained in using such outputs and investing in their further development, given the requirement that they must make derivative datasets publicly available under similar terms. In the case of a CC0 license, there is no requirement to share under identical terms or to attribute or acknowledge the source of a dataset, and there are no restrictions on commercial or noncommercial purposes. In such instances, the autonomy and agency of data contributors and data sources to be part of the decisionmaking processes for the (possible) varied uses of the data they have contributed may be negatively impacted.

This paper does not seek to discredit the principle of openness; rather, it seeks to argue for a practice of openness that addresses the concerns of a diverse range of stakeholders and that does not threaten their agency or autonomy. The experiences shared in this research show that openness has contributed to the growth of grassroots movements for AI development in Africa. However, to be meaningful, the inclusion project should consider and address the ways in which exclusion or exploitation could happen amid such inclusion attempts. There must be recognition that, while these communities share an affinity in terms of the same kinds of local language data, their interests and objectives may differ. Inherent in this recognition is also an acknowledgment of the diversity of the data sources. Conflating these realities in the choice of open licensing regimes misses the point.
Having made giant strides with their grassroots movement and open sharing culture, African NLP researchers are still left with the mismatch between their adopted ideology and strategy of openness on the one hand and the diverse breadth of the region’s AI community on the other. This diverse range includes, for example, African NLP researchers, data contributors who participate in and contribute to crowdfunded data projects, commercial entities, local communities who may provide context for data, and funders who facilitate the creation of datasets.

One key source of tension is the AI commercial pipeline. It involves demands and/or pressure from many quarters to adopt a licensing regime that does not interfere with or make the commercial pipeline untenable. Similarly, tensions arise from the absence of a real choice of licenses to address the concerns stated above. Focusing on the commercial pipeline of these datasets, licenses that restrict commercial uses are not feasible. In some cases, licenses that require attribution may also not be feasible, because attribution requires that users are transparent about the provenance of their data. This may raise privacy concerns, in particular in cases where personal information is used. Share-alike licenses (which require re-users to share their derivative outputs under the same license as the original/source material) may suffer the same fate because, although they are good for ensuring that the diverse community continues to have access, they create problems for the commercial pipeline. In light of these issues, licenses with no restrictions present the only choice.

Although the need to include communities in the Global South in decisionmaking processes on AI governance and development is recognized, the reality of the experiences recounted in this paper shows that such inclusion may sometimes come at the expense of the agency and autonomy of some members of communities in the Global South.
Founded on the tenets of openness, such an inclusion approach focuses on preaching “wete wete” (bring, bring) while ignoring the exclusion that may arise from commercial tools built through the use of openly available data. This is the issue that South Africa’s Constitutional Court observed in its ruling in a relevant 2022 case when it said:

to avoid unfair discrimination, [the state] must treat people in the same way or make available the same entitlements. But sometimes what is required of the state is to recognise the differences between persons and to provide different or more favourable treatment to some, so as to secure non-discriminatory outcomes for all.25

CONCLUSION

This research has highlighted some of the opportunities and challenges presented by considerations around openness as a way to address the copyright and privacy concerns that curtail perspectives from underrepresented and unrepresented jurisdictions, including in the Global South, when it comes to shaping discussions about responsible AI use and development. Openness must be practiced in a manner that considers the communities directly or indirectly providing the data used in commercial and noncommercial settings for AI development. The interests of these communities may, depending on the use case, involve financial benefits, social benefits, or (mere) attribution or acknowledgment.

Copyright and privacy rules may, as a result of their proprietary and rule-based nature, result in practices that discourage openness. Yet addressing the restrictive and proprietary nature of these rules through openness does not and should not mean that openness is adopted without attending to the nuances of specific concerns, contexts, and people.
In adapting openness to the nuances of the contexts of Africa (and the Global South), consideration must be given to the agency and autonomy of specific stakeholders to make decisions about the uses of their data contributions, created and annotated datasets, and the needs that AI tools and development are designed to address in the first place. The intersectionality of these concerns necessitates a comprehensive approach to data governance, one that addresses the multifaceted challenges and opportunities presented by Africa’s evolving data landscape.

From a regulatory standpoint, copyright and privacy laws may need internal reforms, or there may be a need for a specific sui generis piece of legislation, such as the European Union has undertaken with the recently passed Artificial Intelligence Act. However, of more immediate benefit, given the protracted nature of legislative reforms, is the use of contracts and private ordering regimes. The doctrine of freedom of contract means that private actors can directly make changes and tweaks to existing open licensing regimes to address the challenges and harness the opportunities outlined in this paper.

NOTES

1 The literal translation means “bring [this], bring [that] is an easy request if it is from someone else’s bag or pocket [not that of the person asking].”

2 One inevitable outcome of joining a forum when the meeting is already underway is that one tends to adopt existing agendas and considerations or else risks being labeled someone who wants to destroy the progress already made.
3 A 2020 European Parliament report noted, “AI builds on data that capture socio-cultural expressions represented by music, videos, images, text, and social interactions, and then makes predictions based on these profoundly non-neutral and context-specific data.” See Baptiste Caramiaux, “The Use of Artificial Intelligence in the Cultural and Creative Sectors,” European Parliament Policy Department for Structural and Cohesion Policies, 2020, https://www.europarl.europa.eu/RegData/etudes/BRIE/2020/629220/IPOL_BRI(2020)629220_EN.pdf. See also Maori Data Sovereignty Network, “What Is Maori Data Sovereignty?,” https://www.temanararaunga.maori.nz.

4 Not necessarily in the sense of copyright law.

5 Masakhane is a grassroots organization whose mission is to strengthen and spur NLP research in African languages, for Africans, by Africans.

6 Ghana NLP is an open-source initiative focused on NLP of Ghanaian languages and its applications to local problems.

7 The Kenya Language Corpus was founded by Maseno University, the University of Nairobi, and Africa Nazarene University early in 2021. These universities have been jointly creating a language corpus and, using machine learning and NLP, are creating tomorrow’s African language chatbot.

8 Vukosi Marivate, “Why African Natural Language Processing Now? A View From South Africa #AfricaNLP,” in Leap 4.0: African Perspectives on the Fourth Industrial Revolution, ed. Zamanzima Mazibuko-Makena (Johannesburg, South Africa: Mapungubwe Institute for Strategic Reflection, 2021): 126.

9 “P1 Computational Research: Africa Examples, Right to Research in Africa Conf., Pretoria 23Jan2023,” YouTube video, 2:21, posted by “Recreate ZA,” January 23, 2023, accessed September 12, 2023, https://www.youtube.com/watch?v=rZ-3MHcu1oA.

10 Including through copyright ownership.

11 Benefits of openness, including Creative Commons licenses.
It is also worth acknowledging that there is extensive literature on the use of openness to counteract these restrictions.

12 Chijioke Ifeoma Okorie, Multi-Sided Music Platforms and the Law: Copyright, Law and Policy in Africa (London: Taylor and Francis, November 2019), 34–36.

13 As opposed to external/for-profit extraction of data.

14 Johana Bhuiyan, “Lost in AI Translation: Growing Reliance on Language Apps Jeopardizes Some Asylum Applications,” Guardian, September 7, 2023, https://www.theguardian.com/us-news/2023/sep/07/asylum-seekers-ai-translation-apps.

15 Gabriel Nicholas and Aliya Bhatia, “Lost in Translation: Large Language Models in Non-English Content Analysis,” Center for Democracy and Technology, May 23, 2023, https://cdt.org/insights/lost-in-translation-large-language-models-in-non-english-content-analysis.

16 Alon Halevy, Peter Norvig, and Fernando Pereira, “The Unreasonable Effectiveness of Data,” IEEE Intelligent Systems 24, no. 2 (2009): 8–12, https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf; and Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury, “The State and Fate of Linguistic Diversity and Inclusion in the NLP World,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (Kerrville, Texas: Association for Computational Linguistics, July 2020), 6282–6293.

17 “Wikimedia Foundation Research Award of the Year,” Wikimedia Research, https://research.wikimedia.org/awards.html.

18 “Khaya: Translate African Languages,” GhanaNLP, https://ghananlp.org/project/khaya-android.

19 “Kencorpus: Kenyan Languages Corpus,” Harvard Dataverse, https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/6N5V1K.

20 Mark D. Wilkinson et al., “The FAIR Guiding Principles for Scientific Data Management and Stewardship,” Scientific Data 3 (2016), https://www.nature.com/articles/sdata201618.
21 “A ‘Blatant No’ From a Copyright Holder Stops Vital Linguistic Research Work in Africa,” Walled Culture, May 16, 2023, https://walledculture.org/a-blatant-no-from-a-copyright-holder-stops-vital-linguistic-research-work-in-africa. 22 Thomas Margoni and Luca Schirru, “The Role of Licensing in Data FAIRization,” Presses Universitaires du Septentrion, 2023, https://lirias.kuleuven.be/retrieve/694798. 23 Creative Commons, “CC0,” https://creativecommons.org/public-domain/cc0. 24 See, for example, Maori Data Sovereignty Network, “What Is Maori Data Sovereignty?” 25 See paragraphs 67–69 of Constitutional Court of South Africa, “Blind SA V Minister of Trade, Industry, and Competition and Others,” Constitutional Court of South Africa, 2022, https://collections.concourt.org.za/bitstream/handle/20.500.12144/36956/%5bJudgment%5d%20CCT%20320-21%20Blind%20SA.pdf?sequence=37&isAllowed=y." #metaglossia_mundus
"Do many tongues make one Europe? This programme is part of the event Europe’s Finest Hours. Please visit the main page to reserve your spot for this programme. The hour-long event will center on the theme of multilingualism and its advantages. Rather than discussing the importance, utility, and official status of languages, the focus will be on exploring how languages coexist to foster diversity, nurturing a more open-minded European community. How can multilingualism help build a more inclusive and interconnected European community? Can it be a real alternative to Euro-English? With our panel, we will explore the importance of multilingual consciousness from different perspectives – generational, professional and cultural." #metaglossia_mundus
The expression ‘non-racist’ is still missing from most of the world’s major dictionaries (Oxford, Merriam-Webster, Collins, Macquarie) but perhaps this is a term we need to embrace and use more often. Its alternative ‘anti-racism’ has now become a weapon of the Critical Race Theory mob and embraces such notions as ‘white privilege’.... #metaglossia_mundus
"Dictionaries are typically viewed as being value-neutral. But they are just as steeped in culture and prejudice as the rest of the world—and they have the power to shape what we see as “normal.”

Dictionaries can reinforce sexist stereotypes through the examples they include, such as those originally found in the online version of Oxford Dictionaries for the terms rabid, shrill, and nagging.

IN LATE JANUARY, one of the authors of this piece—anthropologist Michael Oman-Reagan—wanted to use the word rabid in a tweet about U.S. politics. Looking up the word in his MacBook’s dictionary, he noticed that the example given for the definition was the phrase “a rabid feminist.” Oman-Reagan posted a tweet to Oxford Dictionaries, which provides the content for MacBook dictionaries: “Hey @OxfordWords, why is ‘rabid feminist’ the usage example of ‘rabid’ in your dictionary—maybe change that?” Oxford’s social media response was mocking: “If only there were a word to describe how strongly you felt about feminism,” implying, we presume, that Oman-Reagan too was a “rabid feminist.”

Frustrated, Oman-Reagan began to dig deeper into his computer’s dictionary and found a pervasive pattern of sexism. For shrill, the example was “the rising shrill of women’s voices”; for nagging, it was “a nagging wife” (now changed to “nagging parents”); for grating, “her high, grating voice”; for promiscuous, “she’s a wild, promiscuous girl”; for psyche, “I will never really fathom the female psyche.” The examples for occupations were often gendered in an archaic way—male pronouns were used to illustrate the definitions of research and doctor, while female pronouns were used to describe doing housework and the profession of nursing.

Conversations using the hashtag #OxfordSexism exploded on Twitter, and media outlets throughout the English-speaking world began to report the story, followed by articles about the issue in Swedish, Indonesian, Dutch, and other languages. Oman-Reagan helped to highlight the tweets of Sarah Shulist and Lavanya Murali Proctor (the other authors of this piece), both of whom are anthropologists—Shulist with expertise in language and dictionaries and Proctor in language and gender. Both Oman-Reagan and Shulist spoke to the press. After the issue went viral, Oman-Reagan also discovered that he had not been the first to see this pattern in the Oxford Dictionaries—a year and a half earlier, in 2014, a poet named Nordette Adams had written a blog post about Oxford Dictionaries’ use of “rabid feminist,” but it had received limited attention.

While most of the media stories were supportive of a feminist perspective, a large number of commenters on Twitter and various blogs were hostile to it. The debate was not just about a few words. It was about much deeper issues of sexism in language and linguistic authority—about how dictionaries are perceived, and about the nature and creation of linguistic meanings and truths. Is the description of a woman’s shrill voice sexist, or simply accurate? Are dictionaries objective, neutral reflections of language usage, or do they help to reinforce sexism?

✽

MANY OF THE online commenters defending Oxford Dictionaries’ work relied upon a “descriptivist ethos”: Dictionaries, they argued, simply describe how language is used—they do not prescribe how it should be used. A dictionary includes words because they are commonly used, not because it wants to legitimize, validate, or encourage particular ways of using them; that’s why dictionaries contain everything from the most offensive slurs to modern slang, like selfie. Like the inclusion of words and their definitions, example sentences and phrases are meant to simply reflect usage. As Katherine Connor Martin, head of U.S. dictionaries at Oxford University Press, explained on the Oxford Dictionaries blog, their published examples aren’t written by dictionary compilers; they are taken from a huge corpus of web pages, novels, newspaper articles, academic journals, blog posts, and emails, amounting to a pile of some 2.5 billion words. Oxford Dictionaries uses software to determine the “most typical manner” in which a word is used so dictionary compilers can pick the best examples.

But there is something circular about the descriptivist argument, as noted in a New Yorker article about the debate: “Lexicographers say that the words and meanings they add to the dictionary have already been validated by the public’s use, but, to the public, a word’s inclusion in the dictionary is the thing that legitimizes it,” the author wrote. As University of Oxford feminist linguist Deborah Cameron notes on her blog, when the Oxford English Dictionary (OED) added cisgender to their listings in 2015, advocates hailed the move as proof that the word and all the implications behind it were valid—that it was a way of “making language more inclusive.”

People rarely question the authority of dictionaries, and the Oxford University Press name carries particular weight. Oxford Dictionaries provides the default dictionary used in iOS and Android operating systems and by Apple and Google apps. The press also produces the leading historical dictionary of the English language, the OED. Oxford’s authority takes on additional force in a world where the dictionary isn’t just a heavy, cumbersome tome but an easily accessed app that is automatically downloaded onto smartphones and tablets. Every time we turn to a dictionary to illustrate a point or to prove that we understand the “true” and “accurate” meaning of a word or phrase, we reinforce the dictionary’s power as a purveyor of truth. It becomes easy to argue that women’s voices really are shrill or grating, and that feminists really are rabid about their beliefs—because it says so in the dictionary.

Although the idea has been a topic of criticism for decades now, the dominant Western construct of “truth” continues to revolve around the notion of a neutral, outside observer. The public seems to perceive dictionaries as being “extrasocial”—that is, unaffected by society’s attitudes and purified of its cultural baggage. It is, however, impossible for any text to exist outside of society, as both its creation and its use are colored by cultural expectations, beliefs, and practices. Real people rooted in cultural contexts build the dictionary, read the dictionary, and interact with it. The purportedly neutral example sentences and phrases emerge from these roots and reflect what the dictionary’s builders perceive to be “normal.”

Some of the commenters in this debate countered that not only were these particular examples mere repetitions of language-as-used, but also, they weren’t even sexist. They were simply true descriptions of a world in which women’s voices really are shrill and grating.

✽

EVEN IF DICTIONARIES are true representations of the real world, the selection of examples as a whole has an undeniable impact. It is hard to make a convincing argument that a dictionary that repeatedly refers to things like female secretaries and bake sales staffed by moms isn’t revealing and reinforcing a sexist bias. There are only a small number of examples used for each word in the Oxford Dictionaries apps (although there are sometimes 20 or more in the website version). Given that they are designed to exemplify the most representative use of that word, their power, relative to their quantity, is clearly disproportionate.

In the end, Oxford Dictionaries’ Martin apologized for their use of “a rabid feminist,” saying it was “a poorly chosen example in that the controversial and impolitic nature of the example distracted from the dictionary’s aim of describing and clarifying meaning.” A different phrase, such as “rabid extremist” or “rabid fan,” she noted, would have done a better job. Oxford did change this example sentence on the Oxford Dictionaries website, and they quietly rewrote a lot of the other examples too: shrill is now represented in the neutrally phrased “a shrill laugh,” and while grating still includes the old example, it is balanced with the new phrase “his grating, confrontational personality.” The versions of the dictionary that interface with commonly used apps and operating systems will take time to be updated.

The best examples of word usage, Martin writes, “are so ordinary as to be downright boring.” But the fact that sexism creeps into normalized, everyday spaces is precisely what is so disturbing. From a feminist perspective, examples of overt sexism may be ubiquitous, but they are anything but boring. The people who make decisions about what goes into the dictionary have the power to define not only terms like shrill but also the boundaries of normality.
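The corpus-based method the article describes (software scanning a multibillion-word corpus to surface a word’s “most typical” usage) can be sketched in miniature. This is an illustrative toy, not Oxford’s actual pipeline: the tiny corpus, the function name, and the raw-count scoring are all invented for the example, and real lexicographic systems rely on statistical association measures over vastly larger data.

```python
# Toy sketch of corpus-driven usage-example selection: count the words
# that appear immediately after a headword and report the most frequent
# collocate. (Hypothetical example, not Oxford Dictionaries' software.)
from collections import Counter

corpus = [
    "a rabid dog was found near the park",
    "he is a rabid fan of the team",
    "she called him a rabid fan at the rally",
    "rabid animals must be reported",
]

def top_collocate(headword, sentences):
    """Return the most frequent word following `headword`, with its count."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for i, tok in enumerate(tokens[:-1]):
            if tok == headword:
                counts[tokens[i + 1]] += 1
    return counts.most_common(1)[0]

print(top_collocate("rabid", corpus))  # -> ('fan', 2)
```

The sketch also makes the article’s bias point concrete: whichever collocate happens to dominate the source corpus dominates the chosen examples, so a skewed corpus yields skewed “typical” usage.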
"San Jose is expanding its language translation options in public meetings by using artificial intelligence through Wordly, which offers live translation in 50 languages.

The city launched its partnership with Wordly, an AI-based service, to help improve language access for non-English speaking residents. Translation will be available at all San Jose City Council and council committee meetings. City officials say it’s more affordable and efficient than in-person language interpreters, but advocates want to test the service before lauding it.

City Clerk Toni Taber said these services will encourage more public participation from non-English speaking residents. “If you got used to the service not being there, why would you come to the meeting?” Taber told San José Spotlight. “I think the more people realize we have the language services, the more people will utilize them, the more people will attend the meetings.”

San Jose has been working to improve its translation services since last year. The city began providing in-person human interpreters at meetings after an incident where Spanish-speaking residents were moved to a different room to listen to the meeting. Taber said the city budgeted $400,000 for a year of utilizing four Spanish interpreters and four Vietnamese interpreters, two online and two in person. In comparison, she said the city initially paid $82,000 for a year of Wordly’s services, including time setting up and training city employees with the software. That price may fluctuate depending on how many users the service sees and how many languages get used, as the city pays for the service per minute. Taber said even at higher rates of use, the program costs much less than in-person interpreters, which the city will provide until the end of the fiscal year.

Language access is just one barrier for residents hoping to speak at city meetings. Gabriela Chavez-Lopez, executive director of the Latina Coalition of Silicon Valley, said it’s hard for residents to attend public meetings in person when they have a job that overlaps with the meeting’s time, or have other responsibilities such as taking care of their families. The AI translation service requires people to use their phone or computer, which Chavez-Lopez said could run out of battery during a multi-hour meeting. She also said literacy and access to headphones could be a barrier. Taber said the city could provide headphones, but the city’s headphones have cords and may not be able to plug into all devices.

“People should be able to show up and have what they need in order to participate,” Chavez-Lopez told San José Spotlight. “It’s meeting people where they’re at because (speakers) are already coming to City Hall, they’re investing their gas, their time. So the least I think that the city can do is provide the appropriate hardware.” She added that while the AI translation improves language access, the service needs to be tested by residents in meetings to determine if it meets expectations.

In addition to audio translation, the city will display two languages on screen in the council chambers. Taber said the city plans to include English for people who are hearing impaired, in addition to Spanish as the most requested language. She said this will allow limited access for people without devices.

South Bay cities have been tapping Wordly for public meeting translation for the past year. Sunnyvale began using the program in June for its Human Relations Commission and used it in a city council meeting in early April. Residents can request the service in any council meeting. Santa Clara County has been working on providing simultaneous translation following a Board of Supervisors meeting in mid-April, during which public comment in Spanish was translated to English, but the meeting’s discussion in English was not translated to Spanish.

Taber said she hopes San Jose’s adoption of this technology encourages other cities to use it. “We don’t have any plans to eliminate it, it’s not a pilot,” she told San José Spotlight. “I really think it’s a game changer for the public.”

Contact B. Sakura Cannestra at sakura@sanjosespotlight.com or @SakuCannestra on X, formerly known as Twitter." #metaglossia_mundus