La coopérative pédagogique
Scooped by juandoming onto E-Learning-Inclusivo (Mashup)
Where is the Latin American university headed in the 21st century? How does globalization affect universities? Via LGA
Before we hit the panic button, there’s a bright side. Wearable AI could finally break the cycle of “memorize facts, sit the test”. Why? Because when students can subtly access any snippet of information, traditional tests become pointless. Even if we suspect only a single student has this tech, it’s time to shift our approach. Soon, we won’t be able to tell who has invisible AI assistance and who doesn’t. Via Edumorfosis
Recently, EdSurge spoke with ESA representatives to learn more about the importance of collaboration for making data interoperability a reality across education agencies and edtech providers. Without interoperability, data silos emerge, making it difficult for stakeholders to access and use the full range of capabilities offered by different edtech products. Adopting a data standard, such as the Ed-Fi Data Standard, enables education agencies to integrate multiple systems and tools, share data securely and leverage technology effectively to improve teaching, learning and administrative processes. Via Edumorfosis, Ricard Lloria
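The interoperability point above can be made concrete with a toy sketch: two hypothetical vendor export formats mapped onto one shared record shape, so downstream tools handle a single schema. All names and fields below are invented for illustration; the real Ed-Fi Data Standard is far more extensive.

```python
from dataclasses import dataclass

# Illustrative sketch only: a toy "common data standard" record and adapters
# that map two hypothetical vendor formats onto it. The real Ed-Fi Data
# Standard is far richer; names and fields here are invented for clarity.

@dataclass
class StudentRecord:
    student_id: str
    last_name: str
    first_name: str
    grade_level: int

def from_vendor_a(row: dict) -> StudentRecord:
    # Vendor A exports snake_case keys and numeric grades as strings.
    return StudentRecord(row["id"], row["last_name"], row["first_name"], int(row["grade"]))

def from_vendor_b(row: dict) -> StudentRecord:
    # Vendor B exports a single "name" field and camelCase keys.
    last, first = row["name"].split(", ")
    return StudentRecord(row["studentKey"], last, first, int(row["gradeLevel"]))

# Once both feeds land in the shared shape, downstream tools need one code path.
records = [
    from_vendor_a({"id": "A1", "last_name": "Rivera", "first_name": "Ana", "grade": "7"}),
    from_vendor_b({"studentKey": "B9", "name": "Chen, Li", "gradeLevel": "7"}),
]
print(records)
```

The design point is the one the excerpt makes: with one agreed record shape, each new vendor costs one adapter rather than a separate integration with every other tool, which is how data silos are avoided.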
Disruptive Education in the university involves re-evaluating and transforming the traditional models of teaching and learning. Inspired by the theories of Juan Domingo Farnós, this approach seeks to break with the structures…
Practical strategies for leveraging AI's strengths and avoiding its weaknesses. Via Vladimir Kukharenko, Ricard Lloria
Using generative AI to support learners is not the first use that teachers, trainers, and training organizations think of. Yet LLMs can prove a valuable aid, particularly for those who find it hardest to step back from transmitting content in order to invest in supporting learning.
Via Cap Métiers Nouvelle-Aquitaine, Elena Pérez, michel verstrepen
Researchers find large language models use a simple mechanism to retrieve stored knowledge when they respond to a user prompt. These mechanisms can be leveraged to see what the model knows about different subjects and possibly to correct false information it has stored. Researchers demonstrate a technique that can be used to probe a model to see what it knows about new subjects.

Adam Zewe | MIT News. Publication date: March 25, 2024.

Researchers from MIT and elsewhere found that complex large language machine-learning models use a simple mechanism to retrieve stored knowledge when they respond to a user prompt. The researchers can leverage these simple mechanisms to see what the model knows about different subjects, and also possibly correct false information that it has stored.

Large language models, such as those that power popular artificial intelligence chatbots like ChatGPT, are incredibly complex. Even though these models are being used as tools in many areas, such as customer support, code generation, and language translation, scientists still don’t fully grasp how they work.

In an effort to better understand what is going on under the hood, researchers at MIT and elsewhere studied the mechanisms at work when these enormous machine-learning models retrieve stored knowledge. They found a surprising result: Large language models (LLMs) often use a very simple linear function to recover and decode stored facts. Moreover, the model uses the same decoding function for similar types of facts. Linear functions, equations with only two variables and no exponents, capture the straightforward, straight-line relationship between two variables.

The researchers showed that, by identifying linear functions for different facts, they can probe the model to see what it knows about new subjects, and where within the model that knowledge is stored. Using a technique they developed to estimate these simple functions, the researchers found that even when a model answers a prompt incorrectly, it has often stored the correct information. In the future, scientists could use such an approach to find and correct falsehoods inside the model, which could reduce a model’s tendency to sometimes give incorrect or nonsensical answers.

“Even though these models are really complicated, nonlinear functions that are trained on lots of data and are very hard to understand, there are sometimes really simple mechanisms working inside them. This is one instance of that,” says Evan Hernandez, an electrical engineering and computer science (EECS) graduate student and co-lead author of a paper detailing these findings.

Hernandez wrote the paper with co-lead author Arnab Sharma, a computer science graduate student at Northeastern University; his advisor, Jacob Andreas, an associate professor in EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); senior author David Bau, an assistant professor of computer science at Northeastern; and others at MIT, Harvard University, and the Israel Institute of Technology. The research will be presented at the International Conference on Learning Representations.

Finding facts

Most large language models, also called transformer models, are neural networks. Loosely based on the human brain, neural networks contain billions of interconnected nodes, or neurons, that are grouped into many layers, and which encode and process data.

Much of the knowledge stored in a transformer can be represented as relations that connect subjects and objects. For instance, “Miles Davis plays the trumpet” is a relation that connects the subject, Miles Davis, to the object, trumpet.

As a transformer gains more knowledge, it stores additional facts about a certain subject across multiple layers. If a user asks about that subject, the model must decode the most relevant fact to respond to the query. If someone prompts a transformer by saying “Miles Davis plays the. . .” the model should respond with “trumpet” and not “Illinois” (the state where Miles Davis was born).

“Somewhere in the network’s computation, there has to be a mechanism that goes and looks for the fact that Miles Davis plays the trumpet, and then pulls that information out and helps generate the next word. We wanted to understand what that mechanism was,” Hernandez says.

The researchers set up a series of experiments to probe LLMs, and found that, even though they are extremely complex, the models decode relational information using a simple linear function. Each function is specific to the type of fact being retrieved. For example, the transformer would use one decoding function any time it wants to output the instrument a person plays and a different function each time it wants to output the state where a person was born.

The researchers developed a method to estimate these simple functions, and then computed functions for 47 different relations, such as “capital city of a country” and “lead singer of a band.” While there could be an infinite number of possible relations, the researchers chose to study this specific subset because they are representative of the kinds of facts that can be written in this way.

They tested each function by changing the subject to see if it could recover the correct object information. For instance, the function for “capital city of a country” should retrieve Oslo if the subject is Norway and London if the subject is England. Functions retrieved the correct information more than 60 percent of the time, showing that some information in a transformer is encoded and retrieved in this way.

“But not everything is linearly encoded. For some facts, even though the model knows them and will predict text that is consistent with these facts, we can’t find linear functions for them. This suggests that the model is doing something more intricate to store that information,” he says.

Visualizing a model’s knowledge

They also used the functions to determine what a model believes is true about different subjects. In one experiment, they started with the prompt “Bill Bradley was a” and used the decoding functions for “plays sports” and “attended university” to see if the model knows that Sen. Bradley was a basketball player who attended Princeton.

“We can show that, even though the model may choose to focus on different information when it produces text, it does encode all that information,” Hernandez says.

They used this probing technique to produce what they call an “attribute lens,” a grid that visualizes where specific information about a particular relation is stored within the transformer’s many layers. Attribute lenses can be generated automatically, providing a streamlined method to help researchers understand more about a model. This visualization tool could enable scientists and engineers to correct stored knowledge and help prevent an AI chatbot from giving false information.

In the future, Hernandez and his collaborators want to better understand what happens in cases where facts are not stored linearly. They would also like to run experiments with larger models, as well as study the precision of linear decoding functions.

“This is an exciting work that reveals a missing piece in our understanding of how large language models recall factual knowledge during inference. Previous work showed that LLMs build information-rich representations of given subjects, from which specific attributes are being extracted during inference. This work shows that the complex nonlinear computation of LLMs for attribute extraction can be well-approximated with a simple linear function,” says Mor Geva Pipek, an assistant professor in the School of Computer Science at Tel Aviv University, who was not involved with this work.
This research was supported, in part, by Open Philanthropy, the Israel Science Foundation, and an Azrieli Foundation Early Career Faculty Fellowship. Via Charles Tiayon
Charles Tiayon's curator insight, March 25, 10:31 PM
#metaglossia_mundus
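The paper's central finding, that relational facts can be decoded with a linear map, lends itself to a small illustration. Below is a minimal sketch, assuming synthetic stand-in vectors rather than real transformer hidden states: it fits a linear relational decoder o ≈ Ws + b by least squares and uses it to probe a new subject. This is not the authors' released code, and all names are invented.

```python
import numpy as np

# Hypothetical illustration of a linear relational decoder, o ~= W @ s + b.
# In the actual study, s would be a subject's hidden state at some layer of
# a transformer and o the representation of the corresponding object; here
# both are random stand-ins generated from a known linear relation.

rng = np.random.default_rng(0)
d_subj, d_obj, n_pairs = 64, 64, 200

# Ground-truth linear relation used to synthesize the "hidden states".
W_true = rng.normal(size=(d_obj, d_subj))
b_true = rng.normal(size=d_obj)

S = rng.normal(size=(n_pairs, d_subj))                       # subject vectors
O = S @ W_true.T + b_true + 0.01 * rng.normal(size=(n_pairs, d_obj))

# Fit W and b jointly by least squares on the augmented matrix [S | 1].
S_aug = np.hstack([S, np.ones((n_pairs, 1))])
coef, *_ = np.linalg.lstsq(S_aug, O, rcond=None)
W_hat, b_hat = coef[:-1].T, coef[-1]

# Probe: decode the "object" for a new subject and compare to ground truth.
s_new = rng.normal(size=d_subj)
o_pred = W_hat @ s_new + b_hat
o_true = W_true @ s_new + b_true
cos = o_pred @ o_true / (np.linalg.norm(o_pred) * np.linalg.norm(o_true))
print(f"cosine similarity of decoded vs. true object: {cos:.4f}")
```

In the study itself, the fitted function would be judged by whether the decoded vector selects the right token, for example Oslo when the subject is Norway, which is the probing test the article describes.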
Tutors International today issued a comment that highlights the transformative impact of personalised learning, emphasising its role in empowering students to go beyond traditional styles of classroom learning to push educational boundaries and become confident, lifelong learners. Via Canadian Vocational Association / Association canadienne de la formation professionnelle
The potential for AI assistance to enhance traditional database searches is apparent, signifying a transformative shift in how researchers can retrieve information. Via Vladimir Kukharenko, Ricard Lloria
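One illustrative reading of that claim: keep the traditional keyword or database query, then let an AI component re-rank candidates by semantic similarity. The sketch below substitutes a toy bag-of-words vectorizer for a real embedding model so it stays self-contained; everything here is an assumption, not a description of any specific product.

```python
import numpy as np

# Purely illustrative: re-rank search hits by semantic-style similarity.
# A real system would use a learned embedding model; a toy bag-of-words
# vectorizer stands in so the sketch stays self-contained.

DOCS = [
    "indexing strategies for relational databases",
    "retrieval-augmented generation for research assistants",
    "vector similarity search over document embeddings",
]

def embed(text: str, vocab: dict) -> np.ndarray:
    vec = np.zeros(len(vocab))
    for word in text.lower().split():
        if word in vocab:
            vec[vocab[word]] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

vocab = {w: i for i, w in enumerate(sorted({w for d in DOCS for w in d.split()}))}
query = "similarity search for embeddings"
q = embed(query, vocab)
ranked = sorted(DOCS, key=lambda d: -float(embed(d, vocab) @ q))
print(ranked[0])  # best semantic-style match surfaces first
```

A production system would swap embed() for a learned embedding model and keep the database's keyword filter as a first-stage candidate generator.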
In today’s fast-paced world, technological advancements have permeated every aspect of our lives, transforming the way we live, work, and learn. Nowhere is this more evident than in the field of education, where innovative technologies are reshaping traditional teaching methods and opening up new avenues for learning and skill development. Via Edumorfosis
In this series of posts for teachonline, we explore the possible, probable and preferred futures of higher education in Canada. In this post we examine the key issues that must be addressed, focusing attention on the need for a comprehensive rethink of the ecosystem. Just dealing with one issue (such as funding or curricula) without dealing with the others would have unintended consequences for any institution. Colleges, universities and Indigenous institutes are actively exploring options and are beginning to address these complex, important issues. Via Canadian Vocational Association / Association canadienne de la formation professionnelle
What do teachers think about AI in Education? Our new report lifts the lid. | Trinity College London

According to the findings, nearly two-thirds of teachers (63%) believe that generic AI tools are too unreliable and inaccurate for effective classroom use. However, amid the rising prevalence of generative AI tools among students, over one in ten teachers suggest that schoolwork grading needs to be reassessed because students are assumed to be using AI. Via Nik Peachey
Nik Peachey's curator insight, March 21, 2:26 AM
A new report from Trinity Education https://www.trinitycollege.com/news/viewarticle/ai-in-education
Juan Domingo Farnos Miro: "The university must be outdated; it is the lack of intimacy that makes it attractive," says a professor of Germanistik in Mannheim. According to Juan Domingo Farnós, the university 4.0 must be transdisciplinary, disruptive, and oriented toward solving the problems of the 21st century. He emphasizes the need for urgent change in the university… Via Ramon Aragon, juandoming
I mentioned a few blog posts back that, for me as an educationalist, the role of technology is to give us the opportunity to spend more time in the pointy, top-end, of the educational triangle diagram (you can pick any educational triangle diagram – they all imply that the pointy bit is what we are aiming for). Creativity, or something akin to creativity, often labels the pointy bit for example, in Bloom’s taxonomy the term ‘Create’ is used. In other frameworks it relates to agency and identity – various forms of self-determination and expression. Via Edumorfosis
I hope you are encouraged to experiment and (re)imagine with generative AI and see how fun it is to use a futuristic, transformative lens as an educator while we share these unique opportunities with learners. Via Vladimir Kukharenko
Taken from Universo Abierto. "Seizing the opportunities of safe, secure, and reliable artificial intelligence systems for the de… Via LGA
This study explores the transformative potential of Generative AI (GenAI) and ChatBots in educational interaction, communication, and the broader implications of human-GenAI collaboration. By examining the related literature through data mining and…
Via Vladimir Kukharenko
While not a panacea, hybrid education can be a middle option that addresses the complaints of both in-person-only and online-only education. Via LGA
Juan Domingo Farnos Miro: In working with diversity inside collaborative groups, we need basic transparency among all the subjects and objects of learning; no one can hide anything from their peers (P2P), otherwise the process cannot be brought to a successful conclusion. This will lead us to trustworthiness…
Analysis by the International Monetary Fund found that almost 40% of all global employment may be affected by AI, and in advanced economies, the figure could be as high as 60%. But don’t be alarmed. That doesn’t mean 40 to 60% of jobs will disappear altogether. Instead, it means that AI automation is likely to take away, streamline or enhance some of the tasks associated with those jobs. For the most part, then, we're talking about the augmentation of human jobs. We’re talking about humans working alongside AI tools, not being replaced by them.
But still, with the many headlines about the transformative nature of generative AI, it’s no wonder people are concerned about their jobs or just concerned about being left behind by this rapidly advancing technology. In this article, we’ll explore my top tips for staying relevant in the age of generative AI. Via Edumorfosis
Discover the latest advancements in Artificial Intelligence and how you can use them to your advantage. Our resources page offers a comprehensive guide to AI, from the basics to the most advanced topics. Via Vladimir Kukharenko, Ricard Lloria
Study Fetch transforms your PowerPoints, lectures, class notes, and study guides into flashcards, quizzes, and tests with an AI tutor right by your side. Via Nik Peachey
Nik Peachey's curator insight, March 23, 3:35 AM
I'm not sure about all the dog metaphors, but this does look like a first step towards the kind of blended learning LMS of the future. https://www.studyfetch.com/
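The "notes in, flashcards out" transformation described above can be sketched with a deliberately simple heuristic (a term/definition splitter) standing in for the product's actual AI pipeline, which is not public. Everything below is invented for illustration.

```python
# Toy illustration of the flashcard idea (not StudyFetch's actual method):
# turn "term: definition" lines from notes into question/answer cards.

NOTES = """
photosynthesis: process by which plants convert light into chemical energy
mitochondria: organelle that produces most of a cell's ATP
"""

cards = []
for line in NOTES.strip().splitlines():
    term, _, definition = line.partition(": ")
    if definition:  # skip lines that don't match the "term: definition" shape
        cards.append((f"What is {term}?", definition))

for question, answer in cards:
    print(f"Q: {question}\nA: {answer}")
```

An LLM-backed version would replace the splitter with a generation step, but the overall shape, unstructured notes in and structured question/answer pairs out, is the same.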
Over the past year, some Chinese influencers have earned millions of dollars selling short video lessons about AI, profiting from people's fears about the still-unclear impact of the new technology on their livelihoods. Via Inovação Educacional
Juan Domingo Farnós. Book: https://www.amazon.com/Attention-All-You-Need-Game-Changing-ebook/dp/B0BVWN185G (a book that marked the beginning and rise of the "transformers" that improve AI in any field). Transformers in artificial intelligence are deep learning models that have become extremely popular and effective in a variety of natural language processing (NLP) tasks…
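Since the excerpt introduces transformers and points at "Attention Is All You Need", a minimal sketch of the scaled dot-product attention at their core may help. Shapes and inputs are toy values; this is an illustration, not code from the book.

```python
import numpy as np

# Minimal scaled dot-product attention, the core operation of a transformer
# (Vaswani et al., "Attention Is All You Need"). Random toy inputs only.

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))
print(attention(Q, K, V).shape)  # (4, 8): one mixed value vector per position
```

A full transformer stacks many of these attention operations (with learned projections for Q, K, and V) alongside feed-forward layers, but the weighted-mixing step above is the mechanism the name refers to.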