Tag: LLMs

16-Bit to 1-Bit: Visual KV Cache Quantization for Efficient Multimodal LLMs

Article URL: https://arxiv.org/abs/2502.14882 Comments URL: https://news.ycombinator.com/item?id=43268477 Points: 1 # Comments: 0 Source

Training LLMs with Order-Centric Augmentation

Beyond Words: A Latent Memory Approach to Internal Reasoning in LLMs

Article URL: https://arxiv.org/abs/2502.21030 Comments URL: https://news.ycombinator.com/item?id=43260353 Points: 1 # Comments: 0 Source

Ask HN: Why can’t we have LLMs writing documentation?

Ask HN: Why can't we have LLMs writing documentation? 1 point by

Yuyz0112/claude-code-reverse: Reverse Engineering Claude Code with LLMs: A Deep Dive into the Minified 4.6MB cli.mjs

Chinese version | After Anthropic released Claude Code, I immediately wanted to test this

Ask HN: Why aren’t LLMs used for email spam detection?

Ask HN: Why aren't LLMs used for email spam detection? 2 points

GitHub – google/langfun: OO for LLMs

Installation | Getting started | Tutorial | Discord community. Langfun is a

Engineering the Mindmap Generator: Marshalling LLMs for Hierarchical Document Analysis

Introduction: In the crowded space of LLM applications (referred to in a

Writing tests with AI, but not LLMs

February 24, 2025. Animesh Mishra, senior solutions engineer at Diffblue, joins Ryan and
