<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Tooling |</title><link>https://gerrfarr.github.io/tags/tooling/</link><atom:link href="https://gerrfarr.github.io/tags/tooling/index.xml" rel="self" type="application/rss+xml"/><description>Tooling</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Wed, 01 Jan 2025 00:00:00 +0000</lastBuildDate><image><url>https://gerrfarr.github.io/media/icon_hu_83e72c182d80746a.png</url><title>Tooling</title><link>https://gerrfarr.github.io/tags/tooling/</link></image><item><title>CLArXivR</title><link>https://gerrfarr.github.io/projects/clarxivr/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://gerrfarr.github.io/projects/clarxivr/</guid><description>&lt;p&gt;In some fields the arXiv now sees hundreds of new preprints every day. No working researcher can read all of them, and most existing tools stop at recommendation — surfacing a ranked list of papers that look superficially related to what you&amp;rsquo;ve read before. That helps with discovery, but it doesn&amp;rsquo;t solve the harder problem: actually reading enough of each paper to know whether and how it matters for &lt;em&gt;your&lt;/em&gt; work.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CLArXivR&lt;/strong&gt; adds a comprehension layer on top of recommendation. For each new preprint, the system reads the paper in the context of the researcher&amp;rsquo;s own work and produces a contextual summary that answers questions an expert actually cares about — &lt;em&gt;does this overlap with what I&amp;rsquo;m doing? does it build on it, contradict it, or extend it in a direction worth following up?&lt;/em&gt; The goal is to skip past generic abstracts and deliver the kind of read you&amp;rsquo;d want from a thoughtful colleague who knows your work.&lt;/p&gt;
&lt;p&gt;Once you commit to summarising at that level of detail, the load-bearing properties shift. &lt;strong&gt;Faithfulness&lt;/strong&gt; (the system doesn&amp;rsquo;t say things the paper didn&amp;rsquo;t) and &lt;strong&gt;relevance&lt;/strong&gt; (the summary highlights what &lt;em&gt;this&lt;/em&gt; researcher needs, not generic findings) become the things to design and engineer around — and most of the technical effort goes into the evaluation frameworks needed to measure both, and the iteration loops that let them improve together rather than at each other&amp;rsquo;s expense.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Coming soon&lt;/strong&gt; — currently in private alpha. If you work in a field with a high preprint volume and would like early access, please get in touch.&lt;/p&gt;</description></item></channel></rss>