Text-to-LoRA: Hypernetwork that generates task-specific LLM adapters (LoRAs) (github.com)
105 points by dvrp 4 days ago | 10 comments
phildini 7 hours ago [-]
I got very briefly excited that this might be a new application layer on top of Meshtastic.
smcleod 3 hours ago [-]
Out of interest, why does it depend on (or at least recommend) such an old version of Python (3.10)?
jph00 6 hours ago [-]
The paper link on that site doesn't work -- here's a working link:

https://arxiv.org/abs/2506.06105

etaioinshrdlu 2 hours ago [-]
What is such a thing good for?
watkinss 5 hours ago [-]
Interesting work on generating LoRA adapters. Similar idea applied to VLMs: https://arxiv.org/abs/2412.16777
npollock 6 hours ago [-]
LoRA adapters modify the model's internal weights
jsight 3 hours ago [-]
Yeah, I honestly think some of the language used around LoRAs gets in the way of people understanding them. They become much easier to understand when you look at an actual implementation and see how adapters can be merged or kept separate (see the sketch below).
make3 5 hours ago [-]
not unless they're explicitly merged, which isn't a requirement, just a small inference-speed optimization
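A minimal sketch of the merged-vs-separate distinction jsight and make3 are describing, assuming PyTorch (the class and method names here are illustrative, not from the Text-to-LoRA repo):

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # Wraps a frozen nn.Linear with a trainable low-rank update:
        # y = x @ (W + (alpha / r) * B @ A)^T + bias
        def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # base weights stay frozen
            self.scale = alpha / r
            # B starts at zero, so the wrapped layer initially behaves
            # exactly like the base layer.
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))

        def forward(self, x):
            # Kept separate: the adapter is one extra low-rank matmul per call.
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

        @torch.no_grad()
        def merge(self):
            # Merged: fold B @ A into the base weight once; afterwards
            # inference costs the same as the plain base layer.
            self.base.weight += self.scale * (self.B @ self.A)

Note that in this sketch, calling merge() and then forward() would apply the update twice; adapter libraries pair merging with an unmerge step for exactly that reason.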
gdiamos 8 hours ago [-]
An alternative to prefix caching?
vessenes 11 hours ago [-]
Sounds like a good candidate for an MCP tool!