Week 5
Finished
Llama2 + llama-index [Source]
Last week, we tried llama-index's RAG ability with its default GPT-3.5-Turbo model. This time we will integrate the llama2 model into the llama-index framework to enhance llama2's RAG capability.
We can download the llama2 models and their weights from Huggingface, and this can be done directly through the llama-index library, since the transformers library has been wrapped as llama_index.llms.HuggingFaceLLM, according to LlamaIndex - Huggingface LLMs. This method also helps ensure that the llama2 model runs on the GPU, which saves up to 90% of the runtime (a one-round conversation takes around 10 minutes on CPU).
The workflow is similar to the previous RAG process, except that we first create a llama2 model, set up all of its parameters, and pass it to llama-index so that the VectorStoreIndex component of llama-index can use llama2 as its llm.
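The sketch below illustrates this setup. It assumes the meta-llama/Llama-2-7b-chat-hf checkpoint, a local ./data folder of documents, and the ServiceContext-style llama-index API; exact argument names can differ between llama-index versions.

from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms import HuggingFaceLLM

# Wrap a Huggingface llama2 checkpoint as a llama-index LLM.
# device_map="auto" lets transformers place the model on the GPU when one is available.
llm = HuggingFaceLLM(
    model_name="meta-llama/Llama-2-7b-chat-hf",
    tokenizer_name="meta-llama/Llama-2-7b-chat-hf",
    context_window=4096,
    max_new_tokens=256,
    generate_kwargs={"temperature": 0.7, "do_sample": True},
    device_map="auto",
)

# Tell llama-index to use llama2 instead of the default OpenAI model.
service_context = ServiceContext.from_defaults(llm=llm, embed_model="local")

# Same RAG workflow as before: load documents, build the index, query it.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
query_engine = index.as_query_engine()
print(query_engine.query("What frames have to do with buying and selling?"))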
Analysis of frame blending examples generated by llama2
Please refer to Meeting Slides.
Other Messages From The Weekly Meeting
Easy-to-miss blendings
Some frame blending examples are so conventionalized that we don't even recognize them, and the blending process happens in backstage cognition. Take this sentence as an example:
The family's influence on the child's understanding of physics was evident in the way they taught him about the laws of motion.
- influence -> the flow frame. Spatial image: there is a bowl or a container and you put something in it, or there is a depression and water flows into it.
- Teaching, by contrast, is not physically putting something in a bowl, water flowing into a depression, or electrons flowing over a wire.
- These two frames constitute a frame blend, but it is so historically deep that we don't notice it.
It is interesting that this system is nominating frame blends of a sort that students would never notice on their own, which makes good material for professors to explain in class. In fact, frame blending is largely how we are able to understand our world. It is happening all the time, and we can reveal the blending through analysis and by showing exceptional examples.
To-do List For The Future
Try a wide range of prompts
We should play around with different kinds of prompts to test the abilities of Llama2 and llama-index, since they have access to the FrameNet dataset. That might save a lot of people a lot of time, compared with going to FrameNet and looking up and learning all of the frames themselves.
For example:
>>> What frames have to do with buying and selling / color / ...?
>>> {part of the cross-space mapping analysis}, please do the rest of this frame blending analysis.
>>> ... (a lot more)
Think of evaluation metrics
Think about a more systematic way (e.g. metrics) to evaluate the answers, such as how accurate and how complete they are.
Develop an algorithm
An algorithm that takes as input two categories of frames (e.g. Event and Entity), then constructs many prompts by substituting different specific frames into a prompt template, as sketched below.
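A rough sketch of what such an algorithm could look like, assuming a hand-written prompt template and a few illustrative frame names (both the template and the frame lists are assumptions, not part of the current code):

from itertools import product

# Hypothetical example frames from two categories; real frame lists would come from FrameNet.
EVENT_FRAMES = ["Motion", "Commerce_buy", "Arriving"]
ENTITY_FRAMES = ["People", "Artifact", "Food"]

# Assumed prompt template for eliciting a frame blending analysis.
TEMPLATE = (
    "Generate a sentence that blends the '{a}' frame with the '{b}' frame, "
    "then explain the cross-space mapping between the two frames."
)

def build_prompts(frames_a, frames_b):
    """Construct one prompt for every pair of frames drawn from the two categories."""
    return [TEMPLATE.format(a=a, b=b) for a, b in product(frames_a, frames_b)]

prompts = build_prompts(EVENT_FRAMES, ENTITY_FRAMES)
for p in prompts[:3]:
    print(p)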
Leveraging Chain of Thought (CoT)
- Read papers
- Think of how it can be integrated into prompts.