jugalator

Claude isn't really correct here. Each instance is only self-contained in the sense that Claude is served a different chat context (your conversation history for that particular session). Users are not given entirely separate, isolated Claude instances; that would be far too costly for Anthropic and wouldn't scale at all. Instead, many users share the same model, and it responds according to whatever is in the context window. Claude is also wrong that identical contexts would produce identical responses. That's false because the model's temperature setting introduces an element of randomness, even with identical contexts. A user can easily test this by asking the same question in two different chats; the model will normally give different answers to the two identical queries.
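To make the temperature point concrete, here's a minimal sketch of temperature sampling (the logit values are made up for illustration; real models sample over a full vocabulary, but the mechanism is the same): identical "context" means identical logits, yet any temperature above zero still produces varying outputs across runs.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from logits after temperature scaling (softmax)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Identical "context" (same logits) sampled repeatedly:
logits = [2.0, 1.9, 1.8, 0.5]
rng = random.Random(0)
draws = {sample_with_temperature(logits, 1.0, rng) for _ in range(50)}
# At temperature 1.0 several different tokens get sampled across runs;
# as temperature approaches 0, the argmax dominates and the output
# becomes (near-)deterministic.
```

This is why two chats with word-for-word identical prompts usually diverge: the randomness is in the sampler, not the context.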


e-scape

Yes, you're right. Claude should have worded it differently, because "identical context" covers more than the prompt; it also includes the temperature setting and the overall attention-based context.


Original_Finding2212

Ask it: what if a user built a single-threaded chat on top of the API? What about interaction and continuity then?


e-scape

It would still be the same. When you interact with the API (I haven't tried Claude's API, only OpenAI's), you still need to send the whole conversation with every request, and thousands of other users are hitting the same API endpoint.
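A rough sketch of what that single-threaded client would look like (the role/message structure is an assumption modeled on common chat-completions-style APIs; no real network call is made, and "some-chat-model" is a placeholder):

```python
from dataclasses import dataclass, field

@dataclass
class ChatThread:
    # The full conversation so far lives on the CLIENT side.
    history: list = field(default_factory=list)

    def build_request(self, user_message: str) -> dict:
        # Every request must carry the entire history: the endpoint
        # keeps no per-user state between calls.
        self.history.append({"role": "user", "content": user_message})
        return {"model": "some-chat-model", "messages": list(self.history)}

    def record_reply(self, assistant_message: str) -> None:
        self.history.append({"role": "assistant", "content": assistant_message})

thread = ChatThread()
req1 = thread.build_request("Hello")
thread.record_reply("Hi! How can I help?")
req2 = thread.build_request("What did I just say?")
# req2 now carries all three earlier messages plus the new one: the
# "continuity" lives entirely in the payload the client resends,
# not in the model serving the endpoint.
```

So even a dedicated single-thread app doesn't get a private model instance; it just replays its own history on every call.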


Original_Finding2212

So the model may not have a single point of view, but your experience will. Yes, you could move to a private open-source model hosted locally, but would that actually change anything?


[deleted]

[deleted]


RedstnPhoenx

Lol humans don't have a single linear experience at all.