How do LLM servers handle contextual data? Is the context passed as a prefix to a stateless machine? (That would mean a lot of tokens have to be reprocessed during a session.) Or is a separate LLM instance created and maintained for each active session? (Expensive and inefficient.)
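For concreteness, here's the stateless pattern I mean, from the client side (toy sketch; the endpoint URL and payload shape are made up, loosely mimicking an OpenAI-style chat API):

```python
# Toy sketch of the "stateless prefix" pattern: the client resends the
# entire conversation on every turn, so the server keeps no session state.
# The endpoint and response fields here are illustrative, not a real API.
import requests

history = []  # full conversation kept client-side

def ask(user_msg: str) -> str:
    history.append({"role": "user", "content": user_msg})
    resp = requests.post(
        "https://example.com/v1/chat",  # hypothetical endpoint
        json={"messages": history},     # the whole prefix, every time
        timeout=30,
    ).json()
    reply = resp["reply"]
    history.append({"role": "assistant", "content": reply})
    return reply
```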
prob good batching and tensor parallelism: many sessions share the same model replicas, with requests batched together, rather than each session getting its own instance
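roughly like this at the request level (toy sketch with made-up names; real servers batch continuously at the token level, this just shows the idea):

```python
# Toy request-level batching: pull whatever requests are waiting, up to
# a cap, and run them through the model in one shared forward pass.
# model_generate() is a stand-in, not a real library call.
import queue

MAX_BATCH = 8
pending: "queue.Queue[str]" = queue.Queue()

def model_generate(prompts: list[str]) -> list[str]:
    # stand-in for one batched forward pass on the GPU
    return [f"reply to: {p}" for p in prompts]

def serve_step() -> None:
    batch = []
    while len(batch) < MAX_BATCH and not pending.empty():
        batch.append(pending.get())
    if batch:
        for prompt, reply in zip(batch, model_generate(batch)):
            print(prompt, "->", reply)
```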
the session is effectively tied to a GPU cluster: the attention KV cache for the conversation lives in that cluster's GPU memory, so already-seen tokens don't have to be reprocessed each turn. It would actually be quite inefficient to switch to another GPU cluster mid-session, since the cache has to be recomputed from the full prompt, but it's needed in a failure scenario
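toy sketch of why staying on the same machine helps (structure and names are illustrative; real servers like vLLM do paged, block-level prefix caching):

```python
# Toy prefix cache keyed by a hash of the token prefix. On the same
# machine the next turn hits the cache and only new tokens are run;
# after failover to a fresh machine the cache is empty and the whole
# prefix must be recomputed.
import hashlib

kv_cache: dict[str, tuple] = {}  # prefix hash -> cached attention state

def key(tokens: list[int]) -> str:
    return hashlib.sha256(str(tokens).encode()).hexdigest()

def prefill(tokens: list[int]) -> tuple:
    # Find the longest already-cached prefix of this sequence.
    for cut in range(len(tokens), 0, -1):
        if key(tokens[:cut]) in kv_cache:
            reused = cut
            break
    else:
        reused = 0  # cold start: nothing cached for this session
    state = ("kv-state", tuple(tokens))  # stand-in for real KV tensors
    kv_cache[key(tokens)] = state
    print(f"reprocessed {len(tokens) - reused} of {len(tokens)} tokens")
    return state

prefill([1, 2, 3])           # reprocessed 3 of 3 tokens (first turn)
prefill([1, 2, 3, 4, 5])     # reprocessed 2 of 5 tokens (cache hit)
kv_cache.clear()             # simulate failover to another cluster
prefill([1, 2, 3, 4, 5, 6])  # reprocessed 6 of 6 tokens (cold start)
```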