If "AI in the browser" was of any true value to users, it would be shipped as an extension. Maybe a paid extension.
Then the market could decide its value.
Forcing it into the browser without consent is an admission that it does not provide users with value and could not survive without a heavy thumb on the scales.
@kboyd I would argue that even local machine translation, OCR, speech-to-text models (like Whisper), or image description models are still in the realm of "AI", though they are faster and much less wasteful: they use fewer resources (e.g. for training) and have more immediate uses (like accessibility). I'd prefer those over a proprietary service. Maybe those are the kinds of models Mozilla would use in the browser, but their marketing and PR teams are bad and use the same words some AI bros use.
Edit: Wrong wording
@kboyd I also would argue that extensions may not be the best solution for some features especially those used for #accessibility as this would make discoverability and use much harder for some users, and extensions can only use certain public APIs/ABIs and their features are limited or can be even removed on a whim (see, #Manifestv3)
@natsume_shokogami a project can ship with default extensions. The point of them, in that case, is that they can be removed if the user is not interested.
If extensions can't deliver functionality for some features, that seems like an area for improvement in the extension API (which I recognize can be very difficult in some cases, but those cases shouldn't affect chatbot buzzword BS)
@natsume_shokogami I know it is unnecessary, but they've already added it within the past few weeks. That's why I'm complaining. They did it in a way that I disagree with.
@kboyd Which one? I haven't followed much about Mozilla
@natsume_shokogami I refuse to click the button, so I don't directly know, but it has the reek of chatbot.
@kboyd Sorry, I had some errors in my posts so I edited them, but the point is that the issues with models like LLMs (ChatGPT, Gemini) and image generation models (like Midjourney, DALL-E, ...) stem from the fact that they require lots of data and energy to train, require lots of energy for inference as well, and lack transparency (especially given that the AI companies want to hoard all improvements to the models for themselves, even as they scrape others' data without permission).