where do you download the GGUF of gemma?
When I go to the Kaggle Gemma page, I see 'model variations: Keras, PyTorch, Transformers, Gemma C++, Tensor, MaxText, Pax, Flax'.
for Keras it extracts to:
model.weights.h5, a 16 GB file, and this file does not open in LM Tools.
[https://huggingface.co/lmstudio-ai/gemma-2b-it-GGUF/tree/main](https://huggingface.co/lmstudio-ai/gemma-2b-it-GGUF/tree/main)
Thanks!!
How censored ?
very
What's the prompt template you are using? The one recommended by Google?
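For reference, Gemma's instruction-tuned variants use a turn-based chat template with `<start_of_turn>`/`<end_of_turn>` control tokens. A minimal sketch of a formatter (check the model card for the exact template; the helper name here is made up):

```python
def gemma_prompt(user_message: str) -> str:
    # Wrap a single user turn in Gemma's chat control tokens and
    # open the model's turn so generation continues from there.
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(gemma_prompt("Why is the sky blue?"))
```

Most front-ends (LM Studio included) apply this template for you when you pick the right preset, so you usually only need it for raw API calls.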
Is it as woke as it's twin - Gemini?
Lmao, woke = censored? Or is woke just anything you don't like?
Gemini = Sony Wokeman™
I can’t wait for you all to die off, fucking boomers
So much anger. Play some Wokemon™ - go catch them all!
It's surprisingly quick at responding, even quicker than the TinyLlama 1.1B model. How is this possible?
It uses grouped-query attention, like Mistral.
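Roughly: in grouped-query attention, several query heads share a single key/value head, so the KV cache is a fraction of the size and decoding reads much less memory per token. A numpy sketch of the idea (the head counts and dimensions here are made up for illustration, not Gemma's actual config):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def grouped_query_attention(q, k, v):
    """q: (n_q_heads, seq, d); k, v: (n_kv_heads, seq, d) with n_kv_heads < n_q_heads."""
    n_q, seq, d = q.shape
    group = n_q // k.shape[0]            # query heads sharing each KV head
    k = np.repeat(k, group, axis=0)      # broadcast the few KV heads to all query heads
    v = np.repeat(v, group, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    return softmax(scores) @ v           # (n_q_heads, seq, d)

rng = np.random.default_rng(0)
q = rng.standard_normal((8, 4, 16))      # 8 query heads
k = rng.standard_normal((2, 4, 16))      # only 2 KV heads -> 4x smaller KV cache
v = rng.standard_normal((2, 4, 16))
out = grouped_query_attention(q, k, v)
print(out.shape)
```

With 2 KV heads serving 8 query heads, the cache that must be stored and streamed each decode step is 4x smaller than full multi-head attention, which is a big part of why GQA models feel fast at generation.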