E-Learning-Inclusivo (Mashup)
Rescooped by juandoming from Edumorfosis.it
onto E-Learning-Inclusivo (Mashup)
Revista de Educación a Distancia (noviembre 2014)
The educational context is undergoing a period of change, driven by new international learning models such as MOOCs (Massive Open Online Courses), by emerging technologies, and by the need for change in external contexts (the economic crisis) and internal ones (the review of teaching policies). That change needs to draw on the commitment of every actor involved in education: students, teachers, and administrators. Education and innovation are values that all governments back, both as a way out of the current crisis and as an investment in the future. In competitive contexts, such as industry and the economy, innovation is reinforced, managed, and transferred. In educational contexts (university, corporate, and non-university), however, there is widespread ignorance of the indicators on which educational innovation is based, both for transforming models, processes, and interaction with students and for applying it in different settings. For all these reasons, it is time to join forces and analyze the situation together, in order to integrate the field of education with the field of technology, and to integrate the institutional view with that of teachers. The International Conference on Learning, Innovation and Competitiveness, CINAIC 2013, first held in 2011, is a point of integration whose goal is to foster the transfer of knowledge about educational innovation and the transition to educational research, across different contexts and at the international level. This monograph presents a selection of papers from CINAIC 2013 as a way of continuing and extending that knowledge transfer.

Via Edumorfosis
Learner-centered learning with ICT.
Curated by juandoming
Rescooped by juandoming from Edumorfosis.it
Creativity with, or against, the machines?

I mentioned a few blog posts back that, for me as an educationalist, the role of technology is to give us the opportunity to spend more time in the pointy, top end of the educational triangle diagram (you can pick any educational triangle diagram – they all imply that the pointy bit is what we are aiming for). Creativity, or something akin to creativity, often labels the pointy bit: in Bloom's taxonomy, for example, the term 'Create' is used. In other frameworks it relates to agency and identity – various forms of self-determination and expression.


Via Edumorfosis
Rescooped by juandoming from iGeneration - 21st Century Education (Pedagogy & Digital Innovation)
Up in the Air - Educators and AI - Detection-Discipline-Distrust via Center for Democracy & Technology

Via Tom D'Amico (@TDOttawa)
Rescooped by juandoming from e-learning-ukr
(Re)imagining AI for Educators: How to Improve Learner-Centered Classrooms with Futuristic Possibilities
I hope you are encouraged to experiment and (re)imagine with generative AI and see how fun it is to use a futuristic, transformative lens as an educator while we share these unique opportunities with learners. 

Via Vladimir Kukharenko
Rescooped by juandoming from Educación a Distancia y TIC
CUED: United Nations resolution on artificial intelligence
Taken from Universo Abierto. « Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for the de...

Via LGA
Rescooped by juandoming from e-learning-ukr
Are We Facing an Algorithmic Renaissance or Apocalypse? Generative AI, ChatBots, and Emerging Human-Machine Interaction in the Educational Landscape | Aras Bozkurt - Academia.edu

This study explores the transformative potential of Generative AI (GenAI) and ChatBots in educational interaction, communication, and the broader implications of human-GenAI collaboration. By examining the related literature through data mining and

Via Vladimir Kukharenko
Rescooped by juandoming from Educación a Distancia y TIC
The Best of Both Worlds: Exploring the Benefits and Challenges of Hybrid Education
While not a panacea, hybrid education can be a middle option that addresses the complaints of both in-person-only and online-only education.

Via LGA
Scooped by juandoming
We develop the precise software for semantic and metacognitive processes to design dynamic, flexible learning processes in (Higher) Education…. (Disruptive education & AI) –
Juan Domingo Farnos Miro Within the work on diversity in collaborative groups, we need basic transparency among all learning subjects and objects; no one can hide anything from their peers (P2P), otherwise the process cannot be brought to a successful conclusion. This will lead us to reliability…
Rescooped by juandoming from Edumorfosis.Work
Worried an AI is going to take your job? Here’s how to stay relevant in the Generative AI era

Analysis by the International Monetary Fund found that almost 40% of all global employment may be affected by AI, and in advanced economies, the figure could be as high as 60%. But don’t be alarmed. That doesn’t mean 40 to 60% of jobs will disappear altogether. Instead, it means that AI automation is likely to take away, streamline or enhance some of the tasks associated with those jobs. For the most part, then, we're talking about the augmentation of human jobs. We’re talking about humans working alongside AI tools, not being replaced by them.

 

But still, with the many headlines about the transformative nature of generative AI, it’s no wonder people are concerned about their jobs or just concerned about being left behind by this rapidly advancing technology. In this article, we’ll explore my top tips for staying relevant in the age of generative AI.


Via Edumorfosis
Rescooped by juandoming from Help and Support everybody around the world
Artificial Intelligence Resources
Discover the latest advancements in Artificial Intelligence and how you can use them to your advantage. Our resources page offers a comprehensive guide to AI, from the basics to the most advanced topics.

Via Vladimir Kukharenko, Ricard Lloria
Rescooped by juandoming from Tools for Teachers & Learners
Revolutionize Your Learning With AI

Study Fetch transforms your powerpoints, lectures, class notes, and study guides into flashcards, quizzes, and tests with an AI tutor right by your side.


Via Nik Peachey
Nik Peachey's curator insight, March 23, 3:35 AM

I'm not sure about all the dog metaphors, but this does look like a first step towards the kind of blended learning LMS of the future. https://www.studyfetch.com/

Rescooped by juandoming from Inovação Educacional
Chinese platforms are cracking down on influencers selling Artificial Intelligence classes

Over the past year, some Chinese influencers have made millions of dollars selling short video lessons about AI, profiting from people's fears about the still-unclear impact of the new technology on their livelihoods.
But the platforms on which they thrived have begun to turn against them. Just a few weeks ago, WeChat and Douyin started suspending, removing, or restricting their accounts. Although influencers on these platforms have long turned people's anxiety into traffic and profit, the latest actions show how Chinese social platforms are trying to contain the damage before it goes too far.
The backlash began last month, when students complained furiously on social media about the superficiality of the courses, saying they fell far short of the educational promises made about them.
"I paid 198 RMB (US$27.50) and the first three courses had no real content. It's all about pushing people to keep paying 1,980 RMB for the next round," posted Bessie, a Chinese user of the social media site Xiaohongshu, about her experience. The courses were created by Li Yizhou, a serial entrepreneur turned startup mentor who, despite having no background in AI, began posting about explaining AI and stoking anxiety after the launch of ChatGPT in November 2022.
Li sold his basic course package for US$27.50 and an advanced course for ten times that price. The cheaper offering contained 40 video lessons, most of them around 10 minutes long. Li's course consisted of tutorials on specific generative AI tools, conversations with executives of Chinese AI companies, and introductions to unrelated topics, such as how to manage your time more effectively.
His classes were a huge commercial success. According to the social media data analytics site Feigua, they were sold more than 250,000 times last year, which could have generated more than US$6 million in revenue.
Li is not the only influencer who, despite having no AI background, saw a business opportunity in soothing people's anxiety about AI with quick fixes. There is also "Teacher He", an influencer with more than 7 million followers who until recently talked mostly about marketing and personal finance, and Zhang Shitong, also followed by millions, whose usual videos mix basic economics with sensational conspiracies such as 9/11 denial. These creators also offered beginner AI classes at a price similar to Li's.
Beyond the complaints about quality, buyers reported that it was hard to get a refund when they changed their minds. Bessie told MIT Technology Review that she received a refund because she applied early, but others who requested refunds more than a week after purchase were denied. A Beijing-based AI community website also accused Li of appropriating its free, user-contributed models and selling them for profit as part of his course offering.
In late February, the platforms hosting these video lessons began to respond to the complaints. All of the classes by Li and other AI gurus were removed from Chinese social media and e-commerce sites. Li has not posted on any of his social media channels since being suspended in late February. Other creators, such as "Teacher He" and Zhang Shitong, have also remained silent.
Li and "Teacher He" did not respond to a media inquiry sent by MIT Technology Review. But a customer representative working for Zhang Shitong said the team processes all refund requests within 12 hours, and that it was the team's own decision not to post anything over the past three weeks.
On Douyin, the Chinese version of TikTok, Li's account, which had more than 3 million followers, is now hidden from search results. WeChat Channels, another popular short-video platform, blocked Li and similar creators from gaining new followers in the last week of February. Smaller platforms have also taken action. Zhishi Xingqiu, a Patreon-like platform that many influencers used to sell access to AI-focused communities, has now blocked searches for keywords such as "AI", "Li Yizhou", or "Sora".
But none of the platforms has specified which rules the gurus violated. While they may have overpromised in their marketing, it is hard to say whether their activities actually qualified as "fraud". Douyin and WeChat declined to comment on their decisions.
There are signs, however, that the restrictions could be reversed. Although Chinese social media platforms often permanently delete the accounts of users they believe are breaking the rules, these AI course creators have kept their accounts on every platform. On WeChat, after about two weeks of being blocked from gaining new followers, the creators quietly regained that ability in mid-March. On Douyin, Li's account was hidden from in-app search results, but his earlier videos can still be found by going directly to his profile page.
So far, the Chinese government has not directly addressed the phenomenon or stated an official position. The government has tightly controlled the livestreaming industry in recent years to censor how influencers act and publish, and Chinese platforms set their own rules accordingly, sometimes ahead of government orders, to show they are doing their part in regulating content.
Even though the creators and their lessons have been removed online, many people in China are still interested in accessing them. On social media, some people are now reselling pirated videos of Li's courses via file sharing, most likely without Li's permission. Instead of US$27.50, people can now spend a few dollars to access the full course package.


Via Inovação Educacional
Scooped by juandoming
We use "transformers" as a neural-network architecture to improve (higher) education and business, within Disruptive Education & AI. –
Juan Domingo Farnós Book: https://www.amazon.com/Attention-All-You-Need-Game-Changing-ebook/dp/B0BVWN185G ((the book that marked the beginning and the rise of "transformers" for improving AI in any field)) Transformers in artificial intelligence are deep learning models that have become extremely popular and effective in a variety of natural language processing (NLP) tasks…
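As a rough illustration of the mechanism the snippet refers to, a single scaled dot-product self-attention step (the core operation of a transformer layer) can be sketched in a few lines of NumPy. The dimensions and random weights below are arbitrary toy values, not taken from any real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input: 4 tokens, each an 8-dimensional embedding.
X = rng.normal(size=(4, 8))

# Random projections standing in for learned query/key/value weights.
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))

Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Scaled dot-product attention: every token attends to every token.
scores = Q @ K.T / np.sqrt(K.shape[-1])
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax

output = weights @ V   # (4, 8): context-mixed token representations
```

Each row of `weights` sums to 1 and says how much each token draws on every other token; stacking many such layers (plus feed-forward blocks) is what the blurb's "transformers" refers to.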
Rescooped by juandoming from E-Learning-Inclusivo (Mashup)
Does technology imply education? How do we link the two in the processes of the 21st century, within the new paradigm established by Disruptive Education? –
Juan Domingo Farnós We want to know whether TECHNOLOGY in itself implies EDUCATION; obviously we have never questioned the opposite, but neither have we asked whether, at this point, the technology is already us. On the other hand, it must be made clear that technology (digital, Artificial Intelligence, "metaverses", blockchain…) does not educate; to say so would be an outright…
Rescooped by juandoming from Edumorfosis.it
The Invisible Revolution: Wearable AI and Education

Before we hit the panic button, there’s a bright side. Wearable AI could finally break the cycle of “memorize facts, sit the test”. Why? Because when students can subtly access any snippet of information, traditional tests become pointless. Even if we suspect only a single student has this tech, it’s time to shift our approach. Soon, we won’t be able to tell who has invisible AI assistance and who doesn’t.

Devices like Meta Ray-Ban and Brilliant Labs Frame AI glasses are more than mere “cheating tools”. They force us to dismantle outdated assessment models that reward rote memorization of easily retrievable facts. As the concept of “open-book” becomes the ever-present reality, how we assess students needs to evolve.


Via Edumorfosis
Rescooped by juandoming from Help and Support everybody around the world
How Education Service Agencies transform Data Fragmentation to Data Integration

Recently, EdSurge spoke with ESA representatives to learn more about the importance of collaboration for making data interoperability a reality across education agencies and edtech providers. Without interoperability, data silos emerge, making it difficult for stakeholders to access and use the full range of capabilities offered by different edtech products. Adopting a data standard, such as the Ed-Fi Data Standard, enables education agencies to integrate multiple systems and tools, share data securely and leverage technology effectively to improve teaching, learning and administrative processes.


Via Edumorfosis, Ricard Lloria
Scooped by juandoming
(PDF) A proposal for Disruptive Education & Artificial Intelligence to transform the university for the 21st century (II) | juan domingo farnós - Academia.edu
Disruptive Education in the university involves the re-evaluation and transformation of traditional models of teaching and learning. Inspired by the theories of Juan Domingo Farnós, this approach seeks to break with the structures
Rescooped by juandoming from Help and Support everybody around the world
AI-Powered Learning Design: Mapping the Jagged Frontier
Practical strategies for leveraging AI’s strengths & avoiding its weaknesses

Via Vladimir Kukharenko, Ricard Lloria
Rescooped by juandoming from gpmt
Using generative AI to support learners

Using generative AI to support learners is not the first use that teachers, trainers, and training organizations think of.

Yet LLMs can prove to be a valuable aid, particularly for those who find it hardest to step back from transmission in order to better invest in supporting learning.

 


Via Cap Métiers Nouvelle-Aquitaine , Elena Pérez, michel verstrepen
Rescooped by juandoming from Metaglossia: The Translation World
Large language models use a surprisingly simple mechanism to retrieve some stored knowledge

Researchers find large language models use a simple mechanism to retrieve stored knowledge when they respond to a user prompt. These mechanisms can be leveraged to see what the model knows about different subjects and possibly to correct false information it has stored.

Researchers demonstrate a technique that can be used to probe a model to see what it knows about new subjects.
 
Adam Zewe | MIT News
Publication Date:
March 25, 2024
Researchers from MIT and elsewhere found that complex large language machine-learning models use a simple mechanism to retrieve stored knowledge when they respond to a user prompt. The researchers can leverage these simple mechanisms to see what the model knows about different subjects, and also possibly correct false information that it has stored.

 

Large language models, such as those that power popular artificial intelligence chatbots like ChatGPT, are incredibly complex. Even though these models are being used as tools in many areas, such as customer support, code generation, and language translation, scientists still don’t fully grasp how they work.

In an effort to better understand what is going on under the hood, researchers at MIT and elsewhere studied the mechanisms at work when these enormous machine-learning models retrieve stored knowledge.

They found a surprising result: Large language models (LLMs) often use a very simple linear function to recover and decode stored facts. Moreover, the model uses the same decoding function for similar types of facts. Linear functions, equations with only two variables and no exponents, capture the straightforward, straight-line relationship between two variables.

The researchers showed that, by identifying linear functions for different facts, they can probe the model to see what it knows about new subjects, and where within the model that knowledge is stored.

Using a technique they developed to estimate these simple functions, the researchers found that even when a model answers a prompt incorrectly, it has often stored the correct information. In the future, scientists could use such an approach to find and correct falsehoods inside the model, which could reduce a model’s tendency to sometimes give incorrect or nonsensical answers.

“Even though these models are really complicated, nonlinear functions that are trained on lots of data and are very hard to understand, there are sometimes really simple mechanisms working inside them. This is one instance of that,” says Evan Hernandez, an electrical engineering and computer science (EECS) graduate student and co-lead author of a paper detailing these findings.

Hernandez wrote the paper with co-lead author Arnab Sharma, a computer science graduate student at Northeastern University; his advisor, Jacob Andreas, an associate professor in EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); senior author David Bau, an assistant professor of computer science at Northeastern; and others at MIT, Harvard University, and the Israeli Institute of Technology. The research will be presented at the International Conference on Learning Representations.

Finding facts

Most large language models, also called transformer models, are neural networks. Loosely based on the human brain, neural networks contain billions of interconnected nodes, or neurons, that are grouped into many layers, and which encode and process data.

Much of the knowledge stored in a transformer can be represented as relations that connect subjects and objects. For instance, “Miles Davis plays the trumpet” is a relation that connects the subject, Miles Davis, to the object, trumpet.

As a transformer gains more knowledge, it stores additional facts about a certain subject across multiple layers. If a user asks about that subject, the model must decode the most relevant fact to respond to the query.

If someone prompts a transformer by saying “Miles Davis plays the. . .” the model should respond with “trumpet” and not “Illinois” (the state where Miles Davis was born).

“Somewhere in the network’s computation, there has to be a mechanism that goes and looks for the fact that Miles Davis plays the trumpet, and then pulls that information out and helps generate the next word. We wanted to understand what that mechanism was,” Hernandez says.

The researchers set up a series of experiments to probe LLMs, and found that, even though they are extremely complex, the models decode relational information using a simple linear function. Each function is specific to the type of fact being retrieved.

For example, the transformer would use one decoding function any time it wants to output the instrument a person plays and a different function each time it wants to output the state where a person was born.

The researchers developed a method to estimate these simple functions, and then computed functions for 47 different relations, such as “capital city of a country” and “lead singer of a band.”

While there could be an infinite number of possible relations, the researchers chose to study this specific subset because they are representative of the kinds of facts that can be written in this way.

They tested each function by changing the subject to see if it could recover the correct object information. For instance, the function for “capital city of a country” should retrieve Oslo if the subject is Norway and London if the subject is England.

Functions retrieved the correct information more than 60 percent of the time, showing that some information in a transformer is encoded and retrieved in this way.

“But not everything is linearly encoded. For some facts, even though the model knows them and will predict text that is consistent with these facts, we can’t find linear functions for them. This suggests that the model is doing something more intricate to store that information,” he says.
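The idea of approximating fact retrieval with a linear (affine) function can be illustrated with a small NumPy sketch. This is not the paper's actual procedure or data: the "hidden states" and "object embeddings" below are synthetic, generated from a hypothetical ground-truth affine map plus noise, purely to show how such a function could be estimated from (subject state, object) pairs by least squares:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 100 subject hidden states (dim 64) mapped to
# "object" embeddings (dim 32) by an unknown affine function + noise.
d_s, d_o, n = 64, 32, 100
W_true = rng.normal(size=(d_o, d_s))
b_true = rng.normal(size=d_o)
S = rng.normal(size=(n, d_s))                  # subject hidden states
O = S @ W_true.T + b_true + 0.01 * rng.normal(size=(n, d_o))

# Estimate the affine map by least squares over the observed pairs.
S_aug = np.hstack([S, np.ones((n, 1))])        # append a bias column
coef, *_ = np.linalg.lstsq(S_aug, O, rcond=None)
W_hat, b_hat = coef[:-1].T, coef[-1]

# "Probe" a held-out subject: the estimated function should recover
# (approximately) the same object embedding as the true map.
s_new = rng.normal(size=d_s)
pred = W_hat @ s_new + b_hat
true = W_true @ s_new + b_true
residual = np.linalg.norm(pred - true)         # small if the fit is good
```

In this toy setup the recovered map is accurate because the data really is affine; the article's point is that, surprisingly, real transformer fact retrieval is often well approximated this way too, though (as the quote above notes) not for every fact.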

Visualizing a model’s knowledge

They also used the functions to determine what a model believes is true about different subjects.

In one experiment, they started with the prompt “Bill Bradley was a” and used the decoding functions for “plays sports” and “attended university” to see if the model knows that Sen. Bradley was a basketball player who attended Princeton.

“We can show that, even though the model may choose to focus on different information when it produces text, it does encode all that information,” Hernandez says.

They used this probing technique to produce what they call an “attribute lens,” a grid that visualizes where specific information about a particular relation is stored within the transformer’s many layers.

Attribute lenses can be generated automatically, providing a streamlined method to help researchers understand more about a model. This visualization tool could enable scientists and engineers to correct stored knowledge and help prevent an AI chatbot from giving false information.

In the future, Hernandez and his collaborators want to better understand what happens in cases where facts are not stored linearly. They would also like to run experiments with larger models, as well as study the precision of linear decoding functions.

“This is an exciting work that reveals a missing piece in our understanding of how large language models recall factual knowledge during inference. Previous work showed that LLMs build information-rich representations of given subjects, from which specific attributes are being extracted during inference. This work shows that the complex nonlinear computation of LLMs for attribute extraction can be well-approximated with a simple linear function,” says Mor Geva Pipek, an assistant professor in the School of Computer Science at Tel Aviv University, who was not involved with this work.

This research was supported, in part, by Open Philanthropy, the Israeli Science Foundation, and an Azrieli Foundation Early Career Faculty Fellowship.


Via Charles Tiayon
Charles Tiayon's curator insight, March 25, 10:31 PM

"Researchers find large language models use a simple mechanism to retrieve stored knowledge when they respond to a user prompt. These mechanisms can be leveraged to see what the model knows about different subjects and possibly to correct false information it has stored.

Researchers demonstrate a technique that can be used to probe a model to see what it knows about new subjects.
 
Adam Zewe | MIT News
Publication Date:
March 25, 2024
Researchers from MIT and elsewhere found that complex large language machine-learning models use a simple mechanism to retrieve stored knowledge when they respond to a user prompt. The researchers can leverage these simple mechanisms to see what the model knows about different subjects, and also possibly correct false information that it has stored.

 

Large language models, such as those that power popular artificial intelligence chatbots like ChatGPT, are incredibly complex. Even though these models are being used as tools in many areas, such as customer support, code generation, and language translation, scientists still don’t fully grasp how they work.

In an effort to better understand what is going on under the hood, researchers at MIT and elsewhere studied the mechanisms at work when these enormous machine-learning models retrieve stored knowledge.

They found a surprising result: Large language models (LLMs) often use a very simple linear function to recover and decode stored facts. Moreover, the model uses the same decoding function for similar types of facts. Linear functions, equations with only two variables and no exponents, capture the straightforward, straight-line relationship between two variables.

The researchers showed that, by identifying linear functions for different facts, they can probe the model to see what it knows about new subjects, and where within the model that knowledge is stored.

Using a technique they developed to estimate these simple functions, the researchers found that even when a model answers a prompt incorrectly, it has often stored the correct information. In the future, scientists could use such an approach to find and correct falsehoods inside the model, which could reduce a model’s tendency to sometimes give incorrect or nonsensical answers.

“Even though these models are really complicated, nonlinear functions that are trained on lots of data and are very hard to understand, there are sometimes really simple mechanisms working inside them. This is one instance of that,” says Evan Hernandez, an electrical engineering and computer science (EECS) graduate student and co-lead author of a paper detailing these findings.

Hernandez wrote the paper with co-lead author Arnab Sharma, a computer science graduate student at Northeastern University; his advisor, Jacob Andreas, an associate professor in EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); senior author David Bau, an assistant professor of computer science at Northeastern; and others at MIT, Harvard University, and the Israeli Institute of Technology. The research will be presented at the International Conference on Learning Representations.

Finding facts

Most large language models, also called transformer models, are neural networks. Loosely based on the human brain, neural networks contain billions of interconnected nodes, or neurons, that are grouped into many layers, and which encode and process data.

Much of the knowledge stored in a transformer can be represented as relations that connect subjects and objects. For instance, “Miles Davis plays the trumpet” is a relation that connects the subject, Miles Davis, to the object, trumpet.

As a transformer gains more knowledge, it stores additional facts about a certain subject across multiple layers. If a user asks about that subject, the model must decode the most relevant fact to respond to the query.

If someone prompts a transformer by saying “Miles Davis plays the…” the model should respond with “trumpet” and not “Illinois” (the state where Miles Davis was born).

“Somewhere in the network’s computation, there has to be a mechanism that goes and looks for the fact that Miles Davis plays the trumpet, and then pulls that information out and helps generate the next word. We wanted to understand what that mechanism was,” Hernandez says.

The researchers set up a series of experiments to probe LLMs, and found that, even though they are extremely complex, the models decode relational information using a simple linear function. Each function is specific to the type of fact being retrieved.

For example, the transformer would use one decoding function any time it wants to output the instrument a person plays and a different function each time it wants to output the state where a person was born.
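Concretely, the picture is that each relation has its own linear map applied to the subject's hidden representation, and decoding means applying that map and reading off the nearest object. The following numpy sketch illustrates relation-specific decoding; every name, vector, and dimension here is a toy stand-in (in a real LLM the subject representations would be hidden states read out of the network, not random vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy hidden-state dimension

# Hypothetical subject and object representations.
subjects = {name: rng.normal(size=d) for name in ["Miles Davis", "Norway"]}
objects = {name: rng.normal(size=d) for name in ["trumpet", "Oslo", "Illinois"]}

def relation_matrix(subject, obj):
    # Rank-1 map sending this subject vector to this object vector;
    # a stand-in for a relation-specific linear function W_r.
    s, o = subjects[subject], objects[obj]
    return np.outer(o, s) / (s @ s)

# One decoding function per relation type, as the article describes.
W_plays_instrument = relation_matrix("Miles Davis", "trumpet")
W_capital_city = relation_matrix("Norway", "Oslo")

def decode(W_r, subject):
    # Apply the relation's linear function, then pick the closest object
    # by cosine similarity.
    pred = W_r @ subjects[subject]
    cos = {o: v @ pred / (np.linalg.norm(v) * np.linalg.norm(pred))
           for o, v in objects.items()}
    return max(cos, key=cos.get)

print(decode(W_plays_instrument, "Miles Davis"))  # -> trumpet
print(decode(W_capital_city, "Norway"))           # -> Oslo
```

The key point of the sketch is that the two relations use two different matrices over the same subject representations, which is why prompting about the instrument does not surface the birth state.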

The researchers developed a method to estimate these simple functions, and then computed functions for 47 different relations, such as “capital city of a country” and “lead singer of a band.”

While there could be an infinite number of possible relations, the researchers chose to study this specific subset because they are representative of the kinds of facts that can be written in this way.

They tested each function by changing the subject to see if it could recover the correct object information. For instance, the function for “capital city of a country” should retrieve Oslo if the subject is Norway and London if the subject is England.

Functions retrieved the correct information more than 60 percent of the time, showing that some information in a transformer is encoded and retrieved in this way.
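The estimate-then-test loop can be sketched on synthetic data, under the strong simplifying assumption that the relation really is affine and noise-free (the researchers instead estimate their functions from the model's own computation). Everything below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_train = 8, 40

# Assumption: a hidden "true" affine relation o = A @ s + b links subject
# representations to object representations.
A = rng.normal(size=(d, d))
b = rng.normal(size=d)

# Training pairs of (subject, object) representations.
S_train = rng.normal(size=(n_train, d))
O_train = S_train @ A.T + b

# Estimate (W_hat, b_hat) by least squares over the training pairs.
X = np.hstack([S_train, np.ones((n_train, 1))])
coef, *_ = np.linalg.lstsq(X, O_train, rcond=None)
W_hat, b_hat = coef[:d].T, coef[d]

# "Change the subject": a held-out subject should map to the right object.
s_new = rng.normal(size=d)
o_pred = W_hat @ s_new + b_hat
o_true = A @ s_new + b
print(np.allclose(o_pred, o_true))  # -> True
```

In this idealized setting recovery is exact; the roughly 60 percent figure above reflects that real transformer computations are only approximately linear for these relations.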

“But not everything is linearly encoded. For some facts, even though the model knows them and will predict text that is consistent with these facts, we can’t find linear functions for them. This suggests that the model is doing something more intricate to store that information,” he says.

Visualizing a model’s knowledge

They also used the functions to determine what a model believes is true about different subjects.

In one experiment, they started with the prompt “Bill Bradley was a” and used the decoding functions for “plays sports” and “attended university” to see if the model knows that Sen. Bradley was a basketball player who attended Princeton.

“We can show that, even though the model may choose to focus on different information when it produces text, it does encode all that information,” Hernandez says.

They used this probing technique to produce what they call an “attribute lens,” a grid that visualizes where specific information about a particular relation is stored within the transformer’s many layers.

Attribute lenses can be generated automatically, providing a streamlined method to help researchers understand more about a model. This visualization tool could enable scientists and engineers to correct stored knowledge and help prevent an AI chatbot from giving false information.
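As a toy illustration of the idea (not the paper's implementation), an attribute lens applies a relation function to the subject's hidden state at every layer and records which object it decodes to; each row of decoded attributes is one strip of the grid. All names, dimensions, and the layer at which the fact "emerges" are fabricated for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_layers = 12, 6

# Hypothetical object vocabulary to decode against.
objects = {name: rng.normal(size=d)
           for name in ["trumpet", "Princeton", "Illinois"]}

# Hypothetical per-layer hidden states for the subject token; a real
# attribute lens would read these out of the transformer during a
# forward pass.
hiddens = [rng.normal(size=d) for _ in range(n_layers)]
for layer in range(3, n_layers):  # pretend the fact emerges at layer 3
    hiddens[layer] = objects["trumpet"] + 0.1 * rng.normal(size=d)

W_r = np.eye(d)  # stand-in for an estimated relation function

def decoded_attribute(h):
    # Apply the relation function, then report the closest object.
    pred = W_r @ h
    cos = {o: v @ pred / (np.linalg.norm(v) * np.linalg.norm(pred))
           for o, v in objects.items()}
    return max(cos, key=cos.get)

# One cell per layer: at which depth does "trumpet" become readable?
grid = [decoded_attribute(h) for h in hiddens]
print(grid)
```

Scanning such rows across layers (and across relations) is what turns the probing technique into a map of where each piece of knowledge sits in the network.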

In the future, Hernandez and his collaborators want to better understand what happens in cases where facts are not stored linearly. They would also like to run experiments with larger models, as well as study the precision of linear decoding functions.

“This is an exciting work that reveals a missing piece in our understanding of how large language models recall factual knowledge during inference. Previous work showed that LLMs build information-rich representations of given subjects, from which specific attributes are being extracted during inference. This work shows that the complex nonlinear computation of LLMs for attribute extraction can be well-approximated with a simple linear function,” says Mor Geva Pipek, an assistant professor in the School of Computer Science at Tel Aviv University, who was not involved with this work.

This research was supported, in part, by Open Philanthropy, the Israel Science Foundation, and an Azrieli Foundation Early Career Faculty Fellowship.


Rescooped by juandoming from Vocational education and training - VET

The future of education lies in the development of skills, not Victorian rote learning, says Tutors International

Tutors International today issued a comment that highlights the transformative impact of personalised learning, emphasising its role in empowering students to go beyond traditional styles of classroom learning to push educational boundaries and become confident, lifelong learners.

OXFORD, England, March 13, 2024 /PRNewswire/ -- In a recent statement, Tutors International, a leading provider of bespoke tutoring services, has shed light on the significant advantages of personalised learning in fostering academic success and personal growth among students. The conventional one-size-fits-all approach of classroom education, while foundational, often misses the mark in catering to the individual needs and learning styles of each student. Personalised learning, particularly through private tutoring, presents a paradigm shift that recognises and nurtures the unique potential within every learner.

Via Canadian Vocational Association / Association canadienne de la formation professionnelle
Rescooped by juandoming from Help and Support everybody around the world

Beyond Burnout: AI as an Academic Ally in the “Publish or Perish” Culture

The potential for AI assistance to enhance traditional database searches is apparent, signifying a transformative shift in how researchers can retrieve information. 

Via Vladimir Kukharenko, Ricard Lloria
Rescooped by juandoming from Edumorfosis.it

Embracing the future: How technology is revolutionizing Education


In today’s fast-paced world, technological advancements have permeated every aspect of our lives, transforming the way we live, work, and learn. Nowhere is this more evident than in the field of education, where innovative technologies are reshaping traditional teaching methods and opening up new avenues for learning and skill development.


Via Edumorfosis
Rescooped by juandoming from Vocational education and training - VET

Canada. The Future of Higher Education in Canada: 15 Challenging Issues

In this series of posts for teachonline, we explore the possible, probable and preferred futures of higher education in Canada. In this post we examine the key issues that must be addressed, focusing attention on the need for a comprehensive rethink of the ecosystem. Dealing with just one issue (such as funding or curricula) without dealing with the others would have unintended consequences for any institution. Colleges, universities and Indigenous institutes are actively exploring options and are beginning to address these complex, important issues.
 

These 15 challenges will shape the future of higher education in Canada:

1. Indigenization – There are three issues at play here:
   - What higher education can do to promote and embed the idea that we are all treaty people (Truth and Reconciliation)
   - How more Indigenous learners can succeed in higher education (there is a strong link between these issues and the decolonization of the curriculum)
   - Whether established, dedicated Indigenous institutions are adequately supported and enabled to thrive
2. Funding and support for teaching, learning and assessment – Without a change in funding and support for teaching, learning and assessment, the work of colleges and universities is fiscally unsustainable. Many institutions are currently vulnerable to even modest reductions in the enrolment of international students. This may require a fundamental rethink of funding models and shared services.
3. Precarity of the instructor class – The number of tenured and tenure-track (university) or permanent instructor (college) positions has been steadily declining. About 60% of all teaching in higher education in Canada is undertaken by sessional instructors (gig workers), and that number is rising. This links to issues of quality and training, as well as to the underfunding of Canada's higher education system. The situation is similar for non-academic staff at our institutions.
4. Growing demand raises issues of capacity – It is uncertain how each province will cope with a significant increase in demand from domestic students between now and 2028.
5. Adding new institutions – If new institutions are created, it is unclear what their implicit design-for-learning assumptions will be.
6. Growth funding for existing institutions – If growth funding occurs, it will be vital to establish the design-for-learning assumptions and how sustainable the funding will be. Some smaller institutions, often located in rural or northern communities, would benefit from investment and growth if the funding were sustainable; otherwise, growth may make them more vulnerable.
7. The role of the private sector (both private colleges and private universities) – Peering into the future means exploring what kinds of public-private partnerships might be created.
8. The role of hybrid, online and distance learning in designs for growth – It is yet to be determined whether growth will be based on a significant expansion of hybrid, online and distance learning or on expanding classroom-based learning.
9. The role of AI in expanding access – Some jurisdictions may experiment with an AI-based institution offering courses and programs leading to certificates, Red Seal certification, diplomas and/or degrees.
10. Purpose and plans – Governments have been shifting the purpose and focus of the work of colleges, Indigenous institutes and universities toward meeting labour market pressures and demands. Although some modest impact can be seen in very specific areas, the skills gap is now worse than it was when this pressure began, which may add new purposes and pressures for higher education.
11. Work-integrated learning – 40% of university students and 60% of college students in Canada currently undertake work-integrated learning in some form during their studies. How this develops and grows is vital.
12. New curricula – Given the changes taking place in the nature of work, the deployment of technology and new fields of work (e.g., new forms of construction, new approaches to health through genetics), institutions face new curricular challenges. What is unclear is whether new programs of study will displace existing ones, and how fast institutions can respond to emerging opportunities given declining funding and increasing government control.
13. Lack of faculty development and professional learning – As the pandemic made clear, faculty skills in instructional design, reimagining assessment and the effective design of engaged, authentic learning need upgrading.
14. Imagineering and innovation – In highly unionized environments, an important consideration is how creative institutions and their leadership can be, and how much appetite exists for significant change.
15. Risk management – Willingness to take risks and preparedness for adverse events (e.g., forest fires, floods and other natural disasters) are vital considerations for leaders and institutions.
In responding to these issues, institutions will need to refocus their purpose, core activities, business processes and relationships within and beyond their communities. It is a challenging task but an important one. 


Via Canadian Vocational Association / Association canadienne de la formation professionnelle
Rescooped by juandoming from Learning & Technology News

What do teachers think about AI in Education? Our new report lifts the lid. | Trinity College London


According to the findings, a significant portion of teachers, two-thirds (63%), believe that generic AI tools are too unreliable and inaccurate for effective classroom use. However, amid the rising prevalence of generative AI tools among students, over one in ten teachers suggest that the grading of schoolwork needs to be reassessed on the assumption that students are using AI.


Via Nik Peachey
Rescooped by juandoming from E-Learning-Inclusivo (Mashup)

University 4.0, Education 4.0 and Mobile Learning: The interrelation and complementarity of these elements reflect the synergy needed to achieve an effective educational transformation in a s...

Juan Domingo Farnos Miro. "The university must be outdated; it is the lack of intimacy that makes it attractive," says a professor of Germanistik in Mannheim. According to Juan Domingo Farnós, the university 4.0 must be transdisciplinary, disruptive and oriented toward solving the problems of the 21st century. He emphasizes the need for urgent change in the university,…

Via Ramon Aragon, juandoming