LLaMA 4 is coming!
According to this article, LLaMA 4 can handle a context window of around 10 million tokens, which is roughly the contents of about 100 books. That's truly impressive.
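The "about 100 books" figure checks out as a back-of-the-envelope estimate. Here is a minimal sketch of that arithmetic, assuming roughly 0.75 English words per token and about 75,000 words per typical book (both are my assumptions, not figures from the article):

```python
# Rough sanity check of the "10 million tokens ≈ 100 books" claim.
# Assumed (not from the article): ~0.75 English words per token,
# ~75,000 words in a typical book.
WORDS_PER_TOKEN = 0.75
WORDS_PER_BOOK = 75_000

def tokens_to_books(tokens: int) -> float:
    """Convert a token count to an approximate number of books."""
    return tokens * WORDS_PER_TOKEN / WORDS_PER_BOOK

print(tokens_to_books(10_000_000))  # → 100.0
```

Under these assumptions, 10 million tokens works out to about 7.5 million words, or on the order of 100 books.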
I haven't tried this model yet, but I want to give it a try within the next few days.
When I develop with Cursor, the generative AI often fails to maintain the correct context and forgets earlier parts of the conversation, so a context window this large could make a real difference.