Show HN: I made an Ollama summarizer for Firefox

by tcsenpai on 10/11/2024, 3:45 PM with 33 comments

Source: https://github.com/tcsenpai/spacellama

by RicoElectrico on 10/11/2024, 7:20 PM

I've found that, for the most part, the articles I want summarized are the ones that only fit in the largest-context models such as Claude. Otherwise I can just skim-read the article, possibly in reader mode for legibility.

Is Llama 2 a good fit considering its small context window?

by asdev on 10/12/2024, 1:05 AM

I built a Chrome version of this for summarizing HN comments: https://github.com/built-by-as/FastDigest

by chx on 10/11/2024, 10:07 PM

Help me understand why people are using these.

I presume you want information of some value to you, otherwise you wouldn't bother reading an article. Then you feed it to a probabilistic algorithm, so you can't have any idea what the output has to do with the input. Take https://i.imgur.com/n6hFwVv.png: you can somewhat decipher what this slop wants to be, but what if the summary leaves out, invents, or inverts some crucial piece of info?

by tcsenpai on 10/13/2024, 4:55 PM

Update: v1.1 is out!

# Changelog

## [1.1] - 2024-03-19

### Added

- New `model_tokens.json` file containing token limits for various Ollama models.
- Dynamic token limit updating based on the selected model in options.
- Automatic loading of model-specific token limits from `model_tokens.json`.
- Chunking and recursive summarization for long pages.
- Better handling of markdown returns.

### Changed

- Updated `manifest.json` to include `model_tokens.json` as a web-accessible resource.
- Modified `options.js` to handle dynamic token limit updates:
  - Added `loadModelTokens()` function to fetch model token data.
  - Added `updateTokenLimit()` function to update the token limit based on the selected model.
  - Updated `restoreOptions()` function to incorporate dynamic token limit updating.
  - Added an event listener for model selection changes.
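The dynamic token-limit lookup could look something like the following sketch. The function names mirror the changelog, but the bodies are my own assumptions, not SpaceLLama's actual code, and the example token values are illustrative:

```javascript
// Sketch: load per-model token limits from a bundled JSON file.
// Assumes model_tokens.json is listed under web_accessible_resources
// in manifest.json, so the options page can fetch it by extension URL.
async function loadModelTokens() {
  const url = browser.runtime.getURL("model_tokens.json");
  const response = await fetch(url);
  return response.json(); // e.g. { "llama2": 4096, "mistral": 8192 }
}

// Look up the token limit for the selected model,
// falling back to a default when the model is unknown.
function updateTokenLimit(model, modelTokens, fallback = 4096) {
  return modelTokens[model] ?? fallback;
}
```

An options-page change handler would then call `loadModelTokens()` once and `updateTokenLimit()` whenever the model `<select>` fires a `change` event.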

### Improved

- User experience in the options page with automatic token limit updates.
- Flexibility in handling different models and their respective token limits.

### Fixed

- Potential issues with incorrect token limits for different models.
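The "chunking and recursive summary" item can be sketched roughly as follows. This is my own minimal illustration of the general technique, not the extension's implementation; the names are hypothetical, and it approximates tokens as ~4 characters each:

```javascript
// Split text into chunks that each fit within the model's token limit,
// using a rough heuristic of ~4 characters per token.
function chunkText(text, tokenLimit, charsPerToken = 4) {
  const maxChars = tokenLimit * charsPerToken;
  const chunks = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}

// Recursively summarize: summarize each chunk, join the partial
// summaries, and repeat until everything fits in a single chunk.
// `summarize` is an async (text) => summary function — in the
// extension it would wrap a call to the local Ollama server.
async function recursiveSummarize(text, tokenLimit, summarize) {
  const chunks = chunkText(text, tokenLimit);
  if (chunks.length === 1) {
    return summarize(chunks[0]);
  }
  const partials = await Promise.all(chunks.map(summarize));
  return recursiveSummarize(partials.join("\n"), tokenLimit, summarize);
}
```

The recursion terminates because each pass replaces chunks with shorter summaries, so the joined text shrinks until one chunk remains.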

by oneshtein on 10/12/2024, 3:44 AM

I've been using PageAssist with Ollama for two months, but I've never used the "Summarise" option in the menu. :-/

by donclark on 10/11/2024, 7:31 PM

Can we get this as the default for all newly posted HN articles? Please and thank you.