Can LLMs replace the ability to abstract?
nishio A sudden thought: when we say "the ability to abstract is important", that ability actually consists of three mutually intertwined sub-abilities: A: the ability to notice that two things are similar once you ignore the incidental details (the "branches and leaves"); B: the ability to find a common pattern across multiple similar cases; and C: the ability to ignore the incidental details in the first place. The problem is that these are interdependent and can only be learned by iterating on them together. However, if we craft an appropriate prompt for each ability, an LLM might be able to implement it in the future. That said, with current LLMs I can't quite imagine it working.
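The decomposition above could be sketched as one prompt template per sub-ability. This is a minimal illustration only: the template wording, function name, and pipeline shape are my own assumptions, not something from the discussion, and no real LLM is called.

```python
# Hypothetical prompt templates for the three sub-abilities A, B, C.
# All wording here is illustrative, not a tested prompt design.

PROMPT_C = (  # C: strip incidental details from one concrete case
    "Summarize the following case in one sentence, dropping all "
    "incidental details:\n{case}"
)
PROMPT_A = (  # A: judge whether two stripped cases are structurally similar
    "Do these two summaries describe structurally similar situations? "
    "Answer yes or no, with a reason.\nX: {x}\nY: {y}"
)
PROMPT_B = (  # B: extract the common pattern from several similar cases
    "State the common pattern shared by all of these summaries:\n{cases}"
)

def build_pipeline_prompts(cases):
    """Return one C-prompt per case, plus a single B-prompt over all cases."""
    strip_prompts = [PROMPT_C.format(case=c) for c in cases]
    pattern_prompt = PROMPT_B.format(
        cases="\n".join(f"- {c}" for c in cases)
    )
    return strip_prompts, pattern_prompt
```

The interdependence nishio points out shows up even in this sketch: B's output quality depends on C having stripped the right details, and deciding which details are "incidental" (C) already presupposes some sense of what pattern you are looking for (B).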
tokoroten Abstraction is already required at the point where you assemble the list of items to feed the LLM. That's where I get a little stuck: without abstraction, what you feed it becomes the entire raw event.
tokoroten It's like ConceptNet: you abstract things into components and related elements, but that's still not enough. For example, to reach the conclusion "the competitor of newspapers is canned coffee (at train-station kiosks)", you also need the information that both are sold in the same price range at station kiosks and are consumed by office workers to pass the time.
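The newspaper-vs-canned-coffee point can be made concrete with a toy sketch: the competition only becomes visible if the representation keeps attributes like sales channel, price band, buyer, and use. Everything here (the attribute names, the data, the threshold) is my own illustration, not data from ConceptNet.

```python
# Toy product representation: competitors emerge from shared attributes,
# not from product-category similarity. All data is illustrative.

products = {
    "newspaper":     {"channel": "station kiosk", "price": "low",
                      "buyer": "office worker", "use": "pass time"},
    "canned coffee": {"channel": "station kiosk", "price": "low",
                      "buyer": "office worker", "use": "pass time"},
    "magazine":      {"channel": "bookstore", "price": "mid",
                      "buyer": "hobbyist", "use": "pass time"},
}

def competitors(name, min_shared=3):
    """Products sharing at least `min_shared` attribute values with `name`."""
    base = products[name]
    return [other for other, attrs in products.items()
            if other != name
            and sum(attrs[k] == v for k, v in base.items()) >= min_shared]

print(competitors("newspaper"))  # → ['canned coffee']
```

A category-based representation ("newspaper is print media, like magazines") would never surface canned coffee; the match only appears once the incidental category is ignored and the situational attributes are kept, which is exactly the C-then-A abstraction step discussed above.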
---
This page is auto-translated from /nishio/抽象化能力をLLMが代替できるか? using DeepL. If you see something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.