You can interact with the new Phi-3 vision model on this page (no login required): https://ai.azure.com/explore/models/Phi-3-vision-128k-instru...
"We are introducing Phi Silica which is built from the Phi series of models and is designed specifically for the NPUs in Copilot+ PCs. Windows is the first platform to have a state-of-the-art small language model (SLM) custom built for the NPU and shipping inbox. Phi Silica API along with OCR, Studio Effects, Live Captions, Recall User Activity APIs will be available in Windows Copilot Library in June. More APIs like Vector Embedding, RAG API, Text Summarization will be coming later."
2024: the year of personal computers with neural processing units running small language models
How do NPUs work? Who builds them, and how are they built? Are they capable of running a variety of SLM-like firmware?
The initial model release had a terrible, frequent issue with emitting the wrong "end of message" token, or never emitting one at all.[1] That is a very serious issue that breaks chat.
The models released today still have this issue.[2]
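If you're running the transformers weights, one band-aid is to pass both Phi-3 end tokens as stop tokens, so generation halts whichever one the model emits. A minimal sketch (assuming the stock Hugging Face generate() API; the token names come from the Phi-3 chat format):

```python
# Sketch: stop generation on either Phi-3 end token.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

messages = [{"role": "user", "content": "Say hello."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Treat both <|end|> (end of turn) and <|endoftext|> as EOS, so chat
# doesn't run on if the model picks the "wrong" one.
stop_ids = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|end|>"),
]
output = model.generate(inputs, max_new_tokens=100, eos_token_id=stop_ids)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

That obviously doesn't help the GGUF builds, where the stop tokens are baked into the file's metadata.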
Beyond that, they've been pushing new ONNX features enabling LLMs via Phi for about a month now. The ONNX runtime that supports them still isn't out, much less the downstream integration into the iOS/Android runtimes. Heck, the Python package for it isn't supported anywhere but Windows.
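For reference, the flow those posts advertise looks roughly like this (a sketch based on the onnxruntime-genai preview docs; the API may well change before it ships broadly):

```python
# Sketch of the onnxruntime-genai generate() loop, as shown in the
# preview docs for the Phi-3 ONNX releases. Paths and API details are
# from those docs and may change before a stable release.
import onnxruntime_genai as og

model = og.Model("phi-3-mini-4k-instruct-onnx")  # path to the exported model
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=200)
params.input_ids = tokenizer.encode("<|user|>\nSay hello.<|end|>\n<|assistant|>\n")

generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()       # run the model one step
    generator.generate_next_token()  # sample and append the next token

print(tokenizer.decode(generator.get_sequence(0)))
```

Nice on paper; the problem is you can't actually run it anywhere but Windows today.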
It's absolutely wild to me that MS is pulling this stuff with ~0 discussion or reputation repercussions.
I'm a huge ONNX fan and have bet a lot on it; it works great. It was clear to me about 4 months ago that Wintel's "AI PC" buildup meant "ONNX x newer Phi".
It is very frustrating to see an extremely late rush, propped up by Potemkin blog posts that I have to waste time on just to find out they're straight-up lying. It has burnt a lot of goodwill that they worked hard to earn.
I am virtually certain that the new Windows AI features previewed yesterday are going to land horribly if they actually try to land them this year.
[1] https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf... [2] https://x.com/jpohhhh/status/1793003272187351195
It looks like the Phi-3 Vision model isn't available in GGUF or ONNX. I was hoping there was a GGUF I could use with llamafile.
The bigger news is that Phi-3-Small, Phi-3-Medium, and Phi-3-Vision were finally released.
I installed phi3:medium last night on my Mac using Ollama and, subjectively, it looks good. I was surprised by the claim that it was better than Mixtral-8x7B.
I largely ignore benchmarks now. On the other hand, while trying many models myself is easy for simple tests, really using an LLM for an application is a lot of work.
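At least quick spot checks are cheap; with the Ollama Python client (assuming the phi3:medium tag is what's installed), a minimal test looks like:

```python
# Quick local sanity check via the Ollama Python client.
# Assumes `ollama pull phi3:medium` has already been run.
import ollama

response = ollama.chat(
    model="phi3:medium",
    messages=[{"role": "user",
               "content": "Summarize the Phi-3 model family in two sentences."}],
)
print(response["message"]["content"])
```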
Slightly off topic: what's the smallest LLM I can reasonably use to do language processing and rewriting of a large library of Word documents, for the purpose of querying information and producing summaries or detailed information?
My use case is very simple: take 1,000 Word documents, each filled with two to three pages of information and pictures, and output a set of requested information via prompting. Is there something off the shelf, or do I have to build it myself?
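For context, the sort of pipeline I'm imagining (a rough sketch that ignores the pictures; the model tag and prompt are placeholders, using python-docx for extraction and Ollama for the model):

```python
# Rough sketch: extract text from .docx files with python-docx, then
# prompt a small local model (via Ollama) for the requested fields.
from pathlib import Path

import ollama
from docx import Document

def extract_text(path: Path) -> str:
    doc = Document(str(path))
    return "\n".join(p.text for p in doc.paragraphs if p.text.strip())

def ask(text: str) -> str:
    response = ollama.chat(
        model="phi3:mini",  # placeholder; any small instruct model
        messages=[{
            "role": "user",
            "content": f"Extract the requested information from this document:\n\n{text}",
        }],
    )
    return response["message"]["content"]

for path in Path("library").glob("*.docx"):
    print(path.name, "->", ask(extract_text(path)))
```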
Wow, this cookbook is actually really bad. I expected something like the OpenAI or Anthropic cookbooks, but this seems to be AI-generated, low-quality content without any code examples or interesting examples.
The Phi-3 models themselves are great, though; the vision model in particular has great potential for low-latency applications (like robotics?)...
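For anyone who wants to poke at it, the transformers flow from the model card is roughly this (a sketch; check the card for the exact prompt format and image placeholder tokens):

```python
# Rough sketch of running Phi-3-vision with transformers, following the
# pattern on the model card. The image path is a placeholder.
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Phi-3-vision-128k-instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

messages = [{"role": "user", "content": "<|image_1|>\nWhat is in this image?"}]
prompt = processor.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image = Image.open("frame.jpg")  # e.g. a frame from a robot camera

inputs = processor(prompt, [image], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```

Whether it's fast enough for a real control loop is another question; quantized or ONNX builds would presumably matter a lot there.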
https://huggingface.co/collections/microsoft/phi-3-6626e15e9... All of these models except Phi-3-mini are new.
I was playing around with this model. Why does it return two or three responses when I ask it for one? I asked it for a JSON response and it generated two or three at a time. What's with this?
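In case it's the stop-token issue mentioned upthread, a client-side band-aid is to decode only the first JSON value and discard whatever trails it (a sketch using only the standard library):

```python
# Band-aid for duplicated JSON completions: parse the first JSON object
# in the response and ignore the repeated ones after it.
import json

def first_json(text: str):
    decoder = json.JSONDecoder()
    start = text.index("{")  # where the first object begins
    obj, _end = decoder.raw_decode(text, start)
    return obj

raw = '{"answer": 42} {"answer": 42} {"answer": 42}'  # hypothetical model output
print(first_json(raw))  # -> {'answer': 42}
```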
Looks like some of the docs were generated by an LLM. I see pictures with typos and imagined terms, incomplete text, etc. I wonder to what extent we can trust the rest of the docs.
https://github.com/microsoft/Phi-3CookBook/blob/main/md/04.F...