<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/feed.xml" rel="self" type="application/atom+xml" /><link href="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/" rel="alternate" type="text/html" /><updated>2026-03-24T06:28:51+00:00</updated><id>https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/feed.xml</id><title type="html">@rsasaki0109 Tweet Summaries</title><subtitle>Monthly roundups of posts on robotics, autonomous driving, 3D reconstruction, and VLA</subtitle><author><name>rsasaki0109</name></author><entry><title type="html">March 2026 Post Roundup</title><link href="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2026/03/01/2026-03-summary/" rel="alternate" type="text/html" title="March 2026 Post Roundup" /><published>2026-03-01T00:00:00+00:00</published><updated>2026-03-01T00:00:00+00:00</updated><id>https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2026/03/01/2026-03-summary</id><content type="html" xml:base="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2026/03/01/2026-03-summary/"><![CDATA[<h2 id="-概要">📊 Overview</h2>

<p>Sorted <strong>35 posts</strong> (excluding replies) into 7 categories.</p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Category</th>
      <th style="text-align: right">Count</th>
      <th style="text-align: right">Share</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left">🏗️ 3D Reconstruction / SLAM</td>
      <td style="text-align: right">11</td>
      <td style="text-align: right">██████████ 31%</td>
    </tr>
    <tr>
      <td style="text-align: left">🚗 Autonomous Driving</td>
      <td style="text-align: right">3</td>
      <td style="text-align: right">███ 9%</td>
    </tr>
    <tr>
      <td style="text-align: left">🤖 Robotics</td>
      <td style="text-align: right">1</td>
      <td style="text-align: right">█ 3%</td>
    </tr>
    <tr>
      <td style="text-align: left">🧠 VLA / Foundation Model</td>
      <td style="text-align: right">3</td>
      <td style="text-align: right">███ 9%</td>
    </tr>
    <tr>
      <td style="text-align: left">📄 Paper Highlights</td>
      <td style="text-align: right">2</td>
      <td style="text-align: right">██ 6%</td>
    </tr>
    <tr>
      <td style="text-align: left">🔧 OSS / Tools</td>
      <td style="text-align: right">4</td>
      <td style="text-align: right">████ 11%</td>
    </tr>
    <tr>
      <td style="text-align: left">💬 Other</td>
      <td style="text-align: right">11</td>
      <td style="text-align: right">██████████ 31%</td>
    </tr>
  </tbody>
</table>

<h2 id="-人気トップ3">🏆 Top 3 Posts</h2>

<h3 id="-1位">🥇 #1</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-03/2031203389657288759.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">76</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">484</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">21,000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>Describe Anything, Anywhere, at Any Moment DAAAM builds a hierarchical 4D scene graph as spatio-temporal memory, enabling embodied agents to describe anything, anywhere, at any moment.</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/2031203389657288759">View post</a></p>
</blockquote>

<hr />

<h3 id="-2位">🥈 #2</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-03/2029029061637640211.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">31</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">240</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">14,000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>supersplat 3D Gaussian Splat Editor The SuperSplat Editor is a free and open source tool for inspecting, editing, optimizing and publishing 3D Gaussian Splats. It is built on web technologies and runs in the browser, so there’s nothing to download or</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/2029029061637640211">View post</a></p>
</blockquote>

<hr />

<h3 id="-3位">🥉 #3</h3>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">31</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">214</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">10,000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>SiMpLE A simple LiDAR odometry method reducing the complexity and configuration burden of localization algorithms.</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/2034464879567073567">View post</a></p>
</blockquote>

<hr />

<h2 id="-カテゴリ別ハイライト">📂 Category Highlights</h2>

<h3 id="️-3d再構成slam">🏗️ 3D Reconstruction / SLAM</h3>
<p><img src="/assets/images/cat-3d-slam.svg" alt="3D Reconstruction / SLAM" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>11</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-03/2029029061637640211.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2029029061637640211">supersplat 3D Gaussian Splat Editor The SuperSplat Editor is a free and open source tool for inspecting, editing, optimizing and publishing 3D Gaussia...</a> (♥ 240)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2034464879567073567">SiMpLE A simple LiDAR odometry method reducing the complexity and configuration burden of localization algorithms.</a> (♥ 214)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-03/2035617877303361995.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2035617877303361995">LEGO-SLAM: Language-Embedded Gaussian Optimization SLAM LEGO-SLAM running at 15 FPS on a ScanNet scene with language-based loop closing for drift corr...</a> (♥ 208)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-03/2029336923668672962.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2029336923668672962">maplibre-gl-lidar A MapLibre GL JS plugin for visualizing LiDAR point clouds using deck gl. Features - Load and visualize LAS/LAZ/COPC point cloud fil...</a> (♥ 184)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-03/2030796184777035872.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2030796184777035872">Global-LVBA Global LiDAR-Visual Bundle Adjustment Global-LVBA is a globally consistent LiDAR–Visual Bundle Adjustment system for refinement after LiDA...</a> (♥ 123)
</div></div>

<h3 id="-自動運転">🚗 Autonomous Driving</h3>
<p><img src="/assets/images/cat-autonomous.svg" alt="Autonomous Driving" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>3</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-03/2033377728037048770.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2033377728037048770">SfmPanOcc Vision-Based Panoptic Occupancy Prediction in Urban Environments</a> (♥ 86)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-03/2028666678423507142.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2028666678423507142">[ICRA 2026] StepNav: Efficient Planning with Structured Trajectory Priors We present StepNav, an efficient planning framework for visual navigation th...</a> (♥ 78)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2030036907581993274">drawtonomy Whiteboard for Driving Diagrams 🚗 Intuitively place lanes, vehicles, pedestrians, and traffic lights. Browser-based. For autonomous driving…</a> (♥ 19)</li>
</ul>

<h3 id="-ロボティクス">🤖 Robotics</h3>
<p><img src="/assets/images/cat-robotics.svg" alt="Robotics" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>1</strong> post</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-03/2031324190897484124.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2031324190897484124">Made a github.io page for rust_robotics too. rust_robotics: Rust implementation of PythonRobotics, sample codes for robotics algorithms</a> (♥ 66)
</div></div>

<h3 id="-vlafoundation-model">🧠 VLA / Foundation Model</h3>
<p><img src="/assets/images/cat-vla.svg" alt="VLA / Foundation Model" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>3</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-03/2031203389657288759.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2031203389657288759">Describe Anything, Anywhere, at Any Moment DAAAM builds a hierarchical 4D scene graph as spatio-temporal memory, enabling embodied agents to describe ...</a> (♥ 484)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2034102501159670208">ACG: Action Coherence Guidance for Flow-based Vision-Language-Action Models (ICRA 2026) Action Coherence Guidance (ACG) is a training-free, test-time …</a> (♥ 120)</li>
  <li><a href="https://x.com/rsasaki0109/status/2031928171872870560">VLAExplain — Interpreting Vision-Language-Action (VLA) Models VLAExplain is an interpretability toolkit designed to help users visually understand the…</a> (♥ 112)</li>
</ul>

<h3 id="-論文紹介">📄 Paper Highlights</h3>
<p><img src="/assets/images/cat-paper.svg" alt="Paper Highlights" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>2</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-03/2035867607988150599.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2035867607988150599">City2Graph: GeoAI with Graph Neural Networks (GNNs) and Spatial Network Analysis -Graph Construction for GeoAI: Build graphs from diverse urban datase...</a> (♥ 58)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-03/2029705115175903471.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2029705115175903471">Do Visual Imaginations Improve Vision-and-Language Navigation Agents? Vision-and-Language Navigation (VLN) agents are tasked with navigating an unseen...</a> (♥ 26)
</div></div>

<h3 id="-ossツール">🔧 OSS / Tools</h3>
<p><img src="/assets/images/cat-oss.svg" alt="OSS / Tools" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>4</strong> posts</p>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2034112491257778563">I wonder what the new wave in robotics OSS is these days</a> (♥ 19)</li>
  <li><a href="https://x.com/rsasaki0109/status/2034420174561100109">I'm starting to want to promote open-source activity in the wider world a bit more</a> (♥ 12)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-03/2034274865742815583.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2034274865742815583">Got my first scam mention on GitHub. Seeing a lot of these on every social network lately</a> (♥ 3)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2034466132086964539">Maybe I should add a bit of a twist to my OSS introductions</a> (♥ 2)</li>
</ul>

<h3 id="-その他">💬 Other</h3>

<p><strong>11</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-03/2029134762850304174.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2029134762850304174">Built my own homepage.</a> (♥ 34)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-03/2035110776432992433.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2035110776432992433">MapTrace: A 2M-Sample Synthetic Dataset for Wayfinding Path Tracing This repository contains the dataset and code for MapTrace, a large-scale syntheti...</a> (♥ 20)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-03/2035166367998255580.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2035166367998255580">← My self-image when AI coding → AI coding by truly strong programmers</a> (♥ 17)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2034285403851964786">Thought design tests would beat coding tests as hiring exams these days, but they're hard to evaluate, so maybe they can't be adopted right away. An AI that partly automates that kind of evaluation could probably sell</a> (♥ 8)</li>
  <li><a href="https://x.com/rsasaki0109/status/2029148668092424567">Whoa 👀 An interview with the CEO of 燈, the AI startup Mitsubishi Electric invested 5 billion yen in! What is this "hidden unicorn" from UTokyo's Matsuo Lab? Profitable since founding, 40% of engineers from UTokyo, full-time in office by default (rebroadcast), March 3, 2026</a> (♥ 6)</li>
</ul>

<h2 id="-全ポスト一覧">📋 All Posts</h2>

<details>
<summary>Show all 35 posts</summary>

| # | Post | RT | ♥ | Date |
|--:|:-------|---:|--:|:-----|
| 1 | [Describe Anything, Anywhere, at Any Moment DAAAM builds a hierarchical 4D scene ...](https://x.com/rsasaki0109/status/2031203389657288759) | 76 | 484 | 2026-03-10 |
| 2 | [supersplat 3D Gaussian Splat Editor The SuperSplat Editor is a free and open sou...](https://x.com/rsasaki0109/status/2029029061637640211) | 31 | 240 | 2026-03-04 |
| 3 | [SiMpLE A simple LiDAR odometry method reducing the complexity and configuration ...](https://x.com/rsasaki0109/status/2034464879567073567) | 31 | 214 | 2026-03-19 |
| 4 | [LEGO-SLAM: Language-Embedded Gaussian Optimization SLAM LEGO-SLAM running at 15 ...](https://x.com/rsasaki0109/status/2035617877303361995) | 34 | 208 | 2026-03-22 |
| 5 | [maplibre-gl-lidar A MapLibre GL JS plugin for visualizing LiDAR point clouds usi...](https://x.com/rsasaki0109/status/2029336923668672962) | 26 | 184 | 2026-03-04 |
| 6 | [Global-LVBA Global LiDAR-Visual Bundle Adjustment Global-LVBA is a globally cons...](https://x.com/rsasaki0109/status/2030796184777035872) | 17 | 123 | 2026-03-09 |
| 7 | [ACG: Action Coherence Guidance for Flow-based Vision-Language-Action Models (ICR...](https://x.com/rsasaki0109/status/2034102501159670208) | 17 | 120 | 2026-03-18 |
| 8 | [[ICLR 2026] Continuous Space-Time Video Super-Resolution with 3D Fourier Fields ...](https://x.com/rsasaki0109/status/2030464475460215290) | 15 | 117 | 2026-03-08 |
| 9 | [Utonia: Toward One Encoder for All Point Clouds TL;DR: This repo provide cross-d...](https://x.com/rsasaki0109/status/2032620945761071480) | 15 | 112 | 2026-03-14 |
| 10 | [VLAExplain — Interpreting Vision-Language-Action (VLA) Models VLAExplain is an i...](https://x.com/rsasaki0109/status/2031928171872870560) | 15 | 112 | 2026-03-12 |
| 11 | [BDM Memory-Efficient Boundary Map for Large-Scale Occupancy Grid Mapping (IJRR 2...](https://x.com/rsasaki0109/status/2034810831201042886) | 11 | 102 | 2026-03-20 |
| 12 | [SfmPanOcc Vision-Based Panoptic Occupancy Prediction in Urban Environments](https://x.com/rsasaki0109/status/2033377728037048770) | 11 | 86 | 2026-03-16 |
| 13 | [[ICRA 2026] StepNav: Efficient Planning with Structured Trajectory Priors We pre...](https://x.com/rsasaki0109/status/2028666678423507142) | 17 | 78 | 2026-03-03 |
| 14 | [DGGT: Feedforward 4D Reconstruction of Dynamic Driving Scenes using Unposed Imag...](https://x.com/rsasaki0109/status/2032290558677241925) | 9 | 73 | 2026-03-13 |
| 15 | [Made a github.io page for rust_robotics too. rust_robotics: Rust implementation of PythonRobo...](https://x.com/rsasaki0109/status/2031324190897484124) | 4 | 66 | 2026-03-10 |
| 16 | [City2Graph: GeoAI with Graph Neural Networks (GNNs) and Spatial Network Analysis...](https://x.com/rsasaki0109/status/2035867607988150599) | 9 | 58 | 2026-03-22 |
| 17 | [LoGeR: Long-Context Geometric Reconstruction with Hybrid Memory LoGeR scales fee...](https://x.com/rsasaki0109/status/2031565785785840101) | 8 | 42 | 2026-03-11 |
| 18 | [Built my own homepage.](https://x.com/rsasaki0109/status/2029134762850304174) | 0 | 34 | 2026-03-04 |
| 19 | [Do Visual Imaginations Improve Vision-and-Language Navigation Agents? Vision-and...](https://x.com/rsasaki0109/status/2029705115175903471) | 2 | 26 | 2026-03-05 |
| 20 | [MapTrace: A 2M-Sample Synthetic Dataset for Wayfinding Path Tracing This reposit...](https://x.com/rsasaki0109/status/2035110776432992433) | 4 | 20 | 2026-03-20 |
| 21 | [I wonder what the new wave in robotics OSS is these days](https://x.com/rsasaki0109/status/2034112491257778563) | 0 | 19 | 2026-03-18 |
| 22 | [drawtonomy Whiteboard for Driving Diagrams 🚗 Intuitively place lanes, vehicles, ...](https://x.com/rsasaki0109/status/2030036907581993274) | 6 | 19 | 2026-03-06 |
| 23 | [← My self-image when AI coding → AI coding by truly strong programmers](https://x.com/rsasaki0109/status/2035166367998255580) | 3 | 17 | 2026-03-21 |
| 24 | [Want to build something that fuses localization with perception or planning](https://x.com/rsasaki0109/status/2035873522304930287) | 4 | 13 | 2026-03-23 |
| 25 | [I'm starting to want to promote open-source activity in the wider world a bit more](https://x.com/rsasaki0109/status/2034420174561100109) | 0 | 12 | 2026-03-19 |
| 26 | [Thought design tests would beat coding tests as hiring exams these days, but they're hard to evaluate, so maybe they can't be adopted right away. An AI that partly auto...](https://x.com/rsasaki0109/status/2034285403851964786) | 1 | 8 | 2026-03-18 |
| 27 | [Whoa 👀 An interview with the CEO of 燈, the AI startup Mitsubishi Electric invested 5 billion yen in! What is this "hidden unicorn" from UTokyo's Matsuo Lab? Profitable since founding, 40% of engineers from UTo...](https://x.com/rsasaki0109/status/2029148668092424567) | 0 | 6 | 2026-03-04 |
| 28 | [Building my own rtklib-equivalent program as a hobby, and it's way too hard. Incredible that Takasu built this alone. A god.](https://x.com/rsasaki0109/status/2034282174912840037) | 1 | 5 | 2026-03-18 |
| 29 | [Got a phishing DM recently, and the level has gotten impressively high. The account is already suspended](https://x.com/rsasaki0109/status/2032396258279842155) | 1 | 4 | 2026-03-13 |
| 30 | [Writing programs nonstop from morning to night](https://x.com/rsasaki0109/status/2034750272774578417) | 0 | 3 | 2026-03-19 |
| 31 | [Got my first scam mention on GitHub. Seeing a lot of these on every social network lately](https://x.com/rsasaki0109/status/2034274865742815583) | 0 | 3 | 2026-03-18 |
| 32 | [Maybe I should add a bit of a twist to my OSS introductions](https://x.com/rsasaki0109/status/2034466132086964539) | 0 | 2 | 2026-03-19 |
| 33 | [The cognitive load from AI is brutal; I want to do something about it](https://x.com/rsasaki0109/status/2029395758760562696) | 0 | 2 | 2026-03-05 |
| 34 | [My posts span a wide range of genres, so even new followers probably drop off quickly; it might be good to let people browse them split by genre somehow](https://x.com/rsasaki0109/status/2034103396555493390) | 0 | 0 | 2026-03-18 |
| 35 | [Went to check on potree, the go-to web-based point cloud viewer, and development has stalled. Wonder if there's another good one](https://x.com/rsasaki0109/status/2028775194257764616) | 0 | 0 | 2026-03-03 |

</details>]]></content><author><name>rsasaki0109</name></author><category term="monthly-summary" /><summary type="html"><![CDATA[35 posts | 3D Reconstruction / SLAM (11), Autonomous Driving (3), Robotics (1), VLA / Foundation Model (3), Paper Highlights (2), OSS / Tools (4), Other (11) | #1: Describe Anything, Anywhere, at Any Moment DAAAM builds a hierarchical 4D scene ...]]></summary></entry><entry><title type="html">February 2026 Post Roundup</title><link href="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2026/02/01/2026-02-summary/" rel="alternate" type="text/html" title="February 2026 Post Roundup" /><published>2026-02-01T00:00:00+00:00</published><updated>2026-02-01T00:00:00+00:00</updated><id>https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2026/02/01/2026-02-summary</id><content type="html" xml:base="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2026/02/01/2026-02-summary/"><![CDATA[<h2 id="-概要">📊 Overview</h2>

<p>Sorted <strong>20 posts</strong> (excluding replies) into 5 categories.</p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Category</th>
      <th style="text-align: right">Count</th>
      <th style="text-align: right">Share</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left">🏗️ 3D Reconstruction / SLAM</td>
      <td style="text-align: right">8</td>
      <td style="text-align: right">██████████ 40%</td>
    </tr>
    <tr>
      <td style="text-align: left">🤖 Robotics</td>
      <td style="text-align: right">1</td>
      <td style="text-align: right">█ 5%</td>
    </tr>
    <tr>
      <td style="text-align: left">🧠 VLA / Foundation Model</td>
      <td style="text-align: right">5</td>
      <td style="text-align: right">██████ 25%</td>
    </tr>
    <tr>
      <td style="text-align: left">📄 Paper Highlights</td>
      <td style="text-align: right">3</td>
      <td style="text-align: right">████ 15%</td>
    </tr>
    <tr>
      <td style="text-align: left">💬 Other</td>
      <td style="text-align: right">3</td>
      <td style="text-align: right">████ 15%</td>
    </tr>
  </tbody>
</table>

<h2 id="-人気トップ3">🏆 Top 3 Posts</h2>

<h3 id="-1位">🥇 #1</h3>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">108</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">852</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">221,000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>EgoX: Egocentric Video Generation from a Single Exocentric Video</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/2020996011565666582">View post</a></p>
</blockquote>

<hr />

<h3 id="-2位">🥈 #2</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-02/2021374303699337658.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">66</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">468</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">20,000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>insight-sam3 Uses SAM3 to transfer 2D semantics onto 3D point clouds, producing segmented training data and efficient scene graphs for indoor environments.</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/2021374303699337658">View post</a></p>
</blockquote>

<hr />

<h3 id="-3位">🥉 #3</h3>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">42</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">306</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">24,000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>Paper2Rebuttal RebuttalAgent: AI-Powered Academic Paper Rebuttal Assistant RebuttalAgent is an AI-powered multi-agent system that helps researchers craft high-quality rebuttals for academic paper reviews. The system analyzes reviewer comments, search…</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/2020652070773346346">View post</a></p>
</blockquote>

<hr />

<h2 id="-カテゴリ別ハイライト">📂 Category Highlights</h2>

<h3 id="️-3d再構成slam">🏗️ 3D Reconstruction / SLAM</h3>
<p><img src="/assets/images/cat-3d-slam.svg" alt="3D Reconstruction / SLAM" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>8</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-02/2021374303699337658.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2021374303699337658">insight-sam3 Uses SAM3 to transfer 2D semantics onto 3D point clouds, producing segmented training data and efficient scene graphs for indoor environm...</a> (♥ 468)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-02/2024647229576036381.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2024647229576036381">[ICLR 2026] FantasyWorld: Geometry-Consistent World Modeling via Unified Video and 3D Prediction FantasyWorld is a unified feed-forward model for join...</a> (♥ 235)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2027526834183962642">Pixie: Physics from Pixels Feed-forward model for predicting 3D physics with 3DGS + NeRF Photorealistic 3D reconstructions (NeRF, Gaussian Splatting) …</a> (♥ 227)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-02/2027217130501104043.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2027217130501104043">[SIGGRAPH Asia 2025 - TOG] Official implementation of MILo: Mesh-In-the-Loop Gaussian Splatting for Detailed and Efficient Surface Reconstruction Our ...</a> (♥ 115)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-02/2027884754176221284.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2027884754176221284">3D Diffusion Policy Generalizable Visuomotor Policy Learning via Simple 3D Representations 3D Diffusion Policy (DP3) is a universal visual imitation l...</a> (♥ 95)
</div></div>

<h3 id="-ロボティクス">🤖 Robotics</h3>
<p><img src="/assets/images/cat-robotics.svg" alt="Robotics" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>1</strong> post</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-02/2021739517460484576.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2021739517460484576">[NeurIPS 2025] Wan-Move: Motion-controllable Video Generation via Latent Trajectory Guidance We present our NeurIPS 2025 paper Wan-Move, a simple and ...</a> (♥ 73)
</div></div>

<h3 id="-vlafoundation-model">🧠 VLA / Foundation Model</h3>
<p><img src="/assets/images/cat-vla.svg" alt="VLA / Foundation Model" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>5</strong> posts</p>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2025035686186004547">ACG: Action Coherence Guidance for Flow-based VLA Models (ICRA 2026) Action Coherence Guidance (ACG) is a training-free, test-time guidance algorithm …</a> (♥ 132)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-02/2022509836391616661.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2022509836391616661">PlantInstGen Tailored Refinement of Vision-Language Models for Plant Instance Segmentation</a> (♥ 58)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-02/2026854735853093038.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2026854735853093038">VLG-Loc Vision-Language Global Localization (VLG-Loc) is a global localization method that uses camera images and a human-readable labeled footprint m...</a> (♥ 48)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-02/2026492348624941490.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2026492348624941490">MindDrive: A Vision-Language-Action Model for Autonomous Driving Utilizing Language as Action in Online Reinforcement Learning Current Vision-Language...</a> (♥ 43)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-02/2023911985017127368.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2023911985017127368">dVoting: Fast Voting for dLLMs Abstract Diffusion Large Language Models (dLLMs) represent a new paradigm beyond autoregressive modeling, offering comp...</a> (♥ 8)
</div></div>

<h3 id="-論文紹介">📄 Paper Highlights</h3>
<p><img src="/assets/images/cat-paper.svg" alt="Paper Highlights" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>3</strong> posts</p>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2020652070773346346">Paper2Rebuttal RebuttalAgent: AI-Powered Academic Paper Rebuttal Assistant RebuttalAgent is an AI-powered multi-agent system that helps researchers cr…</a> (♥ 306)</li>
  <li><a href="https://x.com/rsasaki0109/status/2022084642837508311">InstantSfM: Fully Sparse and Parallel Structure-from-Motion TLDR: InstantSfM is a fully sparse and parallel Structure-from-Motion pipeline that levera…</a> (♥ 212)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-02/2026136149572624816.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2026136149572624816">SciencePlots Matplotlib styles for scientific plotting This repo has Matplotlib styles to format your figures for scientific papers, presentations and...</a> (♥ 21)
</div></div>

<h3 id="-その他">💬 Other</h3>

<p><strong>3</strong> posts</p>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2020996011565666582">EgoX: Egocentric Video Generation from a Single Exocentric Video</a> (♥ 852)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-02/2024285743712260563.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2024285743712260563">imap High-resolution map (OpenDrive\Apollo) visualization and conversion tools imap is a tool for visualize and convert format of the hd-map. This pro...</a> (♥ 17)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2024327124539363717">Running parallel tasks with AI feels like fighting an FF Active Time Battle (ATB)</a> (♥ 5)</li>
</ul>

<h2 id="-全ポスト一覧">📋 All Posts</h2>

<details>
<summary>Show all 20 posts</summary>

| # | Post | RT | ♥ | Date |
|--:|:-------|---:|--:|:-----|
| 1 | [EgoX: Egocentric Video Generation from a Single Exocentric Video](https://x.com/rsasaki0109/status/2020996011565666582) | 108 | 852 | 2026-02-09 |
| 2 | [insight-sam3 Uses SAM3 to transfer 2D semantics onto 3D point clouds, producing ...](https://x.com/rsasaki0109/status/2021374303699337658) | 66 | 468 | 2026-02-11 |
| 3 | [Paper2Rebuttal RebuttalAgent: AI-Powered Academic Paper Rebuttal Assistant Rebut...](https://x.com/rsasaki0109/status/2020652070773346346) | 42 | 306 | 2026-02-09 |
| 4 | [[ICLR 2026] FantasyWorld: Geometry-Consistent World Modeling via Unified Video a...](https://x.com/rsasaki0109/status/2024647229576036381) | 36 | 235 | 2026-02-20 |
| 5 | [Pixie: Physics from Pixels Feed-forward model for predicting 3D physics with 3DG...](https://x.com/rsasaki0109/status/2027526834183962642) | 33 | 227 | 2026-02-27 |
| 6 | [InstantSfM: Fully Sparse and Parallel Structure-from-Motion TLDR: InstantSfM is ...](https://x.com/rsasaki0109/status/2022084642837508311) | 31 | 212 | 2026-02-12 |
| 7 | [ACG: Action Coherence Guidance for Flow-based VLA Models (ICRA 2026) Action Cohe...](https://x.com/rsasaki0109/status/2025035686186004547) | 20 | 132 | 2026-02-21 |
| 8 | [[SIGGRAPH Asia 2025 - TOG] Official implementation of MILo: Mesh-In-the-Loop Gau...](https://x.com/rsasaki0109/status/2027217130501104043) | 17 | 115 | 2026-02-27 |
| 9 | [3D Diffusion Policy Generalizable Visuomotor Policy Learning via Simple 3D Repre...](https://x.com/rsasaki0109/status/2027884754176221284) | 11 | 95 | 2026-02-28 |
| 10 | [[NeurIPS 2025] Wan-Move: Motion-controllable Video Generation via Latent Traject...](https://x.com/rsasaki0109/status/2021739517460484576) | 9 | 73 | 2026-02-12 |
| 11 | [PlantInstGen Tailored Refinement of Vision-Language Models for Plant Instance Se...](https://x.com/rsasaki0109/status/2022509836391616661) | 9 | 58 | 2026-02-14 |
| 12 | [VLG-Loc Vision-Language Global Localization (VLG-Loc) is a global localization m...](https://x.com/rsasaki0109/status/2026854735853093038) | 9 | 48 | 2026-02-26 |
| 13 | [MindDrive: A Vision-Language-Action Model for Autonomous Driving Utilizing Langu...](https://x.com/rsasaki0109/status/2026492348624941490) | 8 | 43 | 2026-02-25 |
| 14 | [VIGA: Vision-as-Inverse-Graphics Agent via Interleaved Multimodal Reasoning VIGA...](https://x.com/rsasaki0109/status/2022816566468120824) | 10 | 37 | 2026-02-14 |
| 15 | [WHU-PCPR: A cross-platform heterogeneous point cloud dataset for place recogniti...](https://x.com/rsasaki0109/status/2023530712775930257) | 5 | 28 | 2026-02-16 |
| 16 | [SciencePlots Matplotlib styles for scientific plotting This repo has Matplotlib ...](https://x.com/rsasaki0109/status/2026136149572624816) | 1 | 21 | 2026-02-24 |
| 17 | [imap High-resolution map (OpenDrive\Apollo) visualization and conversion tools i...](https://x.com/rsasaki0109/status/2024285743712260563) | 4 | 17 | 2026-02-19 |
| 18 | [fusion4landslide [JAG 2026] The official implementation of the paper "Dense 3D d...](https://x.com/rsasaki0109/status/2023186796973293886) | 5 | 16 | 2026-02-16 |
| 19 | [dVoting: Fast Voting for dLLMs Abstract Diffusion Large Language Models (dLLMs) ...](https://x.com/rsasaki0109/status/2023911985017127368) | 6 | 8 | 2026-02-18 |
| 20 | [Running parallel tasks with AI feels like fighting an FF Active Time Battle (ATB)](https://x.com/rsasaki0109/status/2024327124539363717) | 0 | 5 | 2026-02-19 |

</details>]]></content><author><name>rsasaki0109</name></author><category term="monthly-summary" /><summary type="html"><![CDATA[20 posts | 3D Reconstruction / SLAM (8), Robotics (1), VLA / Foundation Model (5), Papers (3), Other (3) | #1: EgoX: Egocentric Video Generation from a Single Exocentric Video]]></summary></entry><entry><title type="html">January 2026 Post Summary</title><link href="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2026/01/01/2026-01-summary/" rel="alternate" type="text/html" title="January 2026 Post Summary" /><published>2026-01-01T00:00:00+00:00</published><updated>2026-01-01T00:00:00+00:00</updated><id>https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2026/01/01/2026-01-summary</id><content type="html" xml:base="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2026/01/01/2026-01-summary/"><![CDATA[<h2 id="-概要">📊 Overview</h2>

<p><strong>20</strong> posts (excluding replies), sorted into 6 categories.</p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Category</th>
      <th style="text-align: right">Count</th>
      <th style="text-align: right">Share</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left">🏗️ 3D Reconstruction / SLAM</td>
      <td style="text-align: right">10</td>
      <td style="text-align: right">██████████ 50%</td>
    </tr>
    <tr>
      <td style="text-align: left">🚗 Autonomous Driving</td>
      <td style="text-align: right">2</td>
      <td style="text-align: right">██ 10%</td>
    </tr>
    <tr>
      <td style="text-align: left">🤖 Robotics</td>
      <td style="text-align: right">3</td>
      <td style="text-align: right">███ 15%</td>
    </tr>
    <tr>
      <td style="text-align: left">🧠 VLA / Foundation Model</td>
      <td style="text-align: right">2</td>
      <td style="text-align: right">██ 10%</td>
    </tr>
    <tr>
      <td style="text-align: left">📄 Papers</td>
      <td style="text-align: right">1</td>
      <td style="text-align: right">█ 5%</td>
    </tr>
    <tr>
      <td style="text-align: left">💬 Other</td>
      <td style="text-align: right">2</td>
      <td style="text-align: right">██ 10%</td>
    </tr>
  </tbody>
</table>

<h2 id="-人気トップ3">🏆 Top 3 Popular Posts</h2>

<h3 id="-1位">🥇 #1</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-01/2012833450739527818.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">65</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">400</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">21000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>SpatialLLM: Enhancing Large Language Models for Urban Spatial Intelligence SpatialLLM is a comprehensive framework for enhancing Large Language Models with urban spatial understanding capabilities. This project integrates point cloud processing,</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/2012833450739527818">View post</a></p>
</blockquote>

<hr />

<h3 id="-2位">🥈 #2</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-01/2017375637280022632.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">43</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">276</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">11000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>OpenNavMap Structure-Free Topometric Mapping via Large-Scale Collaborative Localization OpenNavMap is a lightweight, structure-free topometric mapping system that enables large-scale collaborative localization across multiple sessions without requiri…</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/2017375637280022632">View post</a></p>
</blockquote>

<hr />

<h3 id="-3位">🥉 #3</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-01/2014468174197240209.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">34</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">250</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">15000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>openvps Open Visual Positioning Service</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/2014468174197240209">View post</a></p>
</blockquote>

<hr />

<h2 id="-カテゴリ別ハイライト">📂 Category Highlights</h2>

<h3 id="️-3d再構成slam">🏗️ 3D Reconstruction / SLAM</h3>
<p><img src="/assets/images/cat-3d-slam.svg" alt="3D Reconstruction / SLAM" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>10</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-01/2017375637280022632.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2017375637280022632">OpenNavMap Structure-Free Topometric Mapping via Large-Scale Collaborative Localization OpenNavMap is a lightweight, structure-free topometric mapping...</a> (♥ 276)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-01/2013083994993381690.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2013083994993381690">Concerto [NeurIPS'25] Official repository of Concerto: Joint 2D-3D Self-Supervised Learning Emerges Spatial Representations TL;DR: This repo provide j...</a> (♥ 192)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2015884837606101174">CuMesh: High-Performance Geometry Processing for PyTorch Cuda mesh utils. CuMesh is a GPU-accelerated library designed for high-performance 3D geometr…</a> (♥ 183)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-01/2014836268849627262.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2014836268849627262">IGGT: Instance-Grounded Geometry Transformer for Semantic 3D Reconstruction Humans naturally perceive both geometric structure and semantic content of...</a> (♥ 172)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-01/2013446394590167328.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2013446394590167328">Unifi3D: A Study on 3D Representations for Generation and Reconstruction in a Common Framework Diffusion-based 3D generation pipelines share a common ...</a> (♥ 158)
</div></div>

<h3 id="-自動運転">🚗 Autonomous Driving</h3>
<p><img src="/assets/images/cat-autonomous.svg" alt="Autonomous Driving" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>2</strong> posts</p>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2015212275817472154">alpamayo-autoware Alpamayo ROS 2 Node Usage Guide</a> (♥ 174)</li>
  <li><a href="https://x.com/rsasaki0109/status/2014535919114846509">BEV</a> (♥ 2)</li>
</ul>

<h3 id="-ロボティクス">🤖 Robotics</h3>
<p><img src="/assets/images/cat-robotics.svg" alt="Robotics" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>3</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-01/2011634444130762809.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2011634444130762809">Primitive_Planner Primitive-Swarm: An Ultra-lightweight and Scalable Planner for Large-scale Aerial Swarms</a> (♥ 77)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-01/2016310599618331038.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2016310599618331038">Open Duck Mini v2 Making a mini version of the BDX droid &gt; We are making a miniature version of the BDX Droid by Disney. It is about 42 centimeters ta...</a> (♥ 33)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2011272068332208627">toio_gazebo This is a ROS 2 Package to develop package of toio using Gazebo.</a> (♥ 21)</li>
</ul>

<h3 id="-vlafoundation-model">🧠 VLA / Foundation Model</h3>
<p><img src="/assets/images/cat-vla.svg" alt="VLA / Foundation Model" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>2</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-01/2012833450739527818.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2012833450739527818">SpatialLLM: Enhancing Large Language Models for Urban Spatial Intelligence SpatialLLM is a comprehensive framework for enhancing Large Language Models ...</a> (♥ 400)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-01/2013808777510740054.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2013808777510740054">hn-time-capsule Analyzing Hacker News discussions from a decade ago in hindsight with LLMs A Hacker News time capsule project that pulls the HN frontp...</a> (♥ 3)
</div></div>

<h3 id="-論文紹介">📄 Papers</h3>
<p><img src="/assets/images/cat-paper.svg" alt="Papers" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>1</strong> post</p>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2011996832285794711">[RA-L 26] Learning on the Fly: Rapid Policy Adaptation via Differentiable Simulation</a> (♥ 116)</li>
</ul>

<h3 id="-その他">💬 Other</h3>

<p><strong>2</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2026-01/2014468174197240209.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2014468174197240209">openvps Open Visual Positioning Service</a> (♥ 250)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2017430577423741091">I'm getting interested in Maker Faire</a> (♥ 5)</li>
</ul>

<h2 id="-全ポスト一覧">📋 All Posts</h2>

<details>
<summary>Show all 20 posts</summary>

| # | Post | RT | ♥ | Date |
|--:|:-------|---:|--:|:-----|
| 1 | [SpatialLLM: Enhancing Large Language Models for Urban Spatial Intelligence Spatia...](https://x.com/rsasaki0109/status/2012833450739527818) | 65 | 400 | 2026-01-18 |
| 2 | [OpenNavMap Structure-Free Topometric Mapping via Large-Scale Collaborative Local...](https://x.com/rsasaki0109/status/2017375637280022632) | 43 | 276 | 2026-01-30 |
| 3 | [openvps Open Visual Positioning Service](https://x.com/rsasaki0109/status/2014468174197240209) | 34 | 250 | 2026-01-22 |
| 4 | [Concerto [NeurIPS'25] Official repository of Concerto: Joint 2D-3D Self-Supervis...](https://x.com/rsasaki0109/status/2013083994993381690) | 21 | 192 | 2026-01-19 |
| 5 | [CuMesh: High-Performance Geometry Processing for PyTorch Cuda mesh utils. CuMesh...](https://x.com/rsasaki0109/status/2015884837606101174) | 32 | 183 | 2026-01-26 |
| 6 | [alpamayo-autoware Alpamayo ROS 2 Node Usage Guide](https://x.com/rsasaki0109/status/2015212275817472154) | 29 | 174 | 2026-01-24 |
| 7 | [IGGT: Instance-Grounded Geometry Transformer for Semantic 3D Reconstruction Huma...](https://x.com/rsasaki0109/status/2014836268849627262) | 19 | 172 | 2026-01-23 |
| 8 | [Unifi3D: A Study on 3D Representations for Generation and Reconstruction in a Co...](https://x.com/rsasaki0109/status/2013446394590167328) | 21 | 158 | 2026-01-20 |
| 9 | [Enhanced SLAM3R: Real-Time Reconstruction via Online Camera Stream This is an en...](https://x.com/rsasaki0109/status/2012326592891195840) | 21 | 137 | 2026-01-17 |
| 10 | [RS-VIO Rust Stereo Visual-Inertial Odometry Features - Patch-based stereo featur...](https://x.com/rsasaki0109/status/2016707888308535303) | 17 | 136 | 2026-01-29 |
| 11 | [RoMa v2 🤖 : Harder Better Faster Denser Feature Matching](https://x.com/rsasaki0109/status/2014171171261239617) | 17 | 135 | 2026-01-22 |
| 12 | [[RA-L 26] Learning on the Fly: Rapid Policy Adaptation via Differentiable Simula...](https://x.com/rsasaki0109/status/2011996832285794711) | 13 | 116 | 2026-01-16 |
| 13 | [Speedy-Splat: Fast 3D Gaussian Splatting with Sparse Pixels and Sparse Primitive...](https://x.com/rsasaki0109/status/2015620720848691244) | 16 | 112 | 2026-01-26 |
| 14 | [Primitive_Planner Primitive-Swarm: An Ultra-lightweight and Scalable Planner for...](https://x.com/rsasaki0109/status/2011634444130762809) | 13 | 77 | 2026-01-15 |
| 15 | [FreeDOM FreeDOM is an online dynamic object removal framework for static map con...](https://x.com/rsasaki0109/status/2017032331547333006) | 9 | 70 | 2026-01-30 |
| 16 | [Open Duck Mini v2 Making a mini version of the BDX droid &gt; We are making a minia...](https://x.com/rsasaki0109/status/2016310599618331038) | 9 | 33 | 2026-01-28 |
| 17 | [toio_gazebo This is a ROS 2 Package to develop package of toio using Gazebo.](https://x.com/rsasaki0109/status/2011272068332208627) | 4 | 21 | 2026-01-14 |
| 18 | [I'm getting interested in Maker Faire](https://x.com/rsasaki0109/status/2017430577423741091) | 0 | 5 | 2026-01-31 |
| 19 | [hn-time-capsule Analyzing Hacker News discussions from a decade ago in hindsight...](https://x.com/rsasaki0109/status/2013808777510740054) | 1 | 3 | 2026-01-21 |
| 20 | [BEV](https://x.com/rsasaki0109/status/2014535919114846509) | 0 | 2 | 2026-01-23 |

</details>]]></content><author><name>rsasaki0109</name></author><category term="monthly-summary" /><summary type="html"><![CDATA[20 posts | 3D Reconstruction / SLAM (10), Autonomous Driving (2), Robotics (3), VLA / Foundation Model (2), Papers (1), Other (2) | #1: SpatialLLM: Enhancing Large Language Models for Urban Spatial Intelligence Spatia...]]></summary></entry><entry><title type="html">December 2025 Post Summary</title><link href="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/12/01/2025-12-summary/" rel="alternate" type="text/html" title="December 2025 Post Summary" /><published>2025-12-01T00:00:00+00:00</published><updated>2025-12-01T00:00:00+00:00</updated><id>https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/12/01/2025-12-summary</id><content type="html" xml:base="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/12/01/2025-12-summary/"><![CDATA[<h2 id="-概要">📊 Overview</h2>

<p><strong>20</strong> posts (excluding replies), sorted into 6 categories.</p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Category</th>
      <th style="text-align: right">Count</th>
      <th style="text-align: right">Share</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left">🏗️ 3D Reconstruction / SLAM</td>
      <td style="text-align: right">12</td>
      <td style="text-align: right">██████████ 60%</td>
    </tr>
    <tr>
      <td style="text-align: left">🚗 Autonomous Driving</td>
      <td style="text-align: right">1</td>
      <td style="text-align: right">█ 5%</td>
    </tr>
    <tr>
      <td style="text-align: left">🤖 Robotics</td>
      <td style="text-align: right">1</td>
      <td style="text-align: right">█ 5%</td>
    </tr>
    <tr>
      <td style="text-align: left">🧠 VLA / Foundation Model</td>
      <td style="text-align: right">2</td>
      <td style="text-align: right">██ 10%</td>
    </tr>
    <tr>
      <td style="text-align: left">🔧 OSS / Tools</td>
      <td style="text-align: right">1</td>
      <td style="text-align: right">█ 5%</td>
    </tr>
    <tr>
      <td style="text-align: left">💬 Other</td>
      <td style="text-align: right">3</td>
      <td style="text-align: right">██ 15%</td>
    </tr>
  </tbody>
</table>

<h2 id="-人気トップ3">🏆 Top 3 Popular Posts</h2>

<h3 id="-1位">🥇 #1</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-12/2006289237696741658.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">32</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">278</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">16000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>3D-RE-GEN 3D Reconstruction of Indoor Scenes with a Generative Framework We propose single-image 3D scene reconstruction for producing complete, editable scenes from a single photograph. Our method reconstructs individual objects and the surrounding</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/2006289237696741658">View post</a></p>
</blockquote>

<hr />

<h3 id="-2位">🥈 #2</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-12/2000400424927395986.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">30</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">178</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">11000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>OmniVGGT: Omni-Modality Driven Visual Geometry Grounded Transformer OmniVGGT is a spatial foundation model that can effectively benefit from an arbitrary number of auxiliary geometric modalities (depth, camera intrinsics and pose) to obtain high-qual…</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/2000400424927395986">View post</a></p>
</blockquote>

<hr />

<h3 id="-3位">🥉 #3</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-12/2005484629047005480.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">24</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">157</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">10000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>[ICCV 2025] InfiniCube: Unbounded and Controllable Dynamic 3D Driving Scene Generation with World-Guided Video Models InfiniCube take advantage of recent advances in 3D and video generative models to achieve large dynamic scene generation with flexib…</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/2005484629047005480">View post</a></p>
</blockquote>

<hr />

<h2 id="-カテゴリ別ハイライト">📂 Category Highlights</h2>

<h3 id="️-3d再構成slam">🏗️ 3D Reconstruction / SLAM</h3>
<p><img src="/assets/images/cat-3d-slam.svg" alt="3D Reconstruction / SLAM" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>12</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-12/2006289237696741658.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2006289237696741658">3D-RE-GEN 3D Reconstruction of Indoor Scenes with a Generative Framework We propose single-image 3D scene reconstruction for producing complete, edita...</a> (♥ 278)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-12/2005484629047005480.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2005484629047005480">[ICCV 2025] InfiniCube: Unbounded and Controllable Dynamic 3D Driving Scene Generation with World-Guided Video Models InfiniCube take advantage of rec...</a> (♥ 157)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-12/2005084949020561746.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2005084949020561746">DVGT: Driving Visual Geometry Transform DVGT, a universal visual geometry transformer for autonomous driving, directly predicts metric-scaled global 3...</a> (♥ 138)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-12/1998588491094175760.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1998588491094175760">Register Any Point: Scaling 3D Point Cloud Registration by Flow Matching Our method for scalable multi-view point cloud registration. To register mult...</a> (♥ 126)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-12/2000663920546341232.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2000663920546341232">VGGT-X: When VGGT Meets Dense Novel View Synthesis VGGT-X takes dense multi-view images as input. It first uses memory-efficient VGGT to losslessly pr...</a> (♥ 125)
</div></div>

<h3 id="-自動運転">🚗 Autonomous Driving</h3>
<p><img src="/assets/images/cat-autonomous.svg" alt="Autonomous Driving" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>1</strong> post</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-12/1999281961018490945.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1999281961018490945">[NeurIPS 2025] SPIRAL: Semantic-Aware Progressive LiDAR Scene Generation and Understanding Existing LiDAR generative models are limited to producing u...</a> (♥ 88)
</div></div>

<h3 id="-ロボティクス">🤖 Robotics</h3>
<p><img src="/assets/images/cat-robotics.svg" alt="Robotics" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>1</strong> post</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-12/1999701889999638945.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1999701889999638945">Spotted a SEQSENSE security robot in Nagoya!</a> (♥ 13)
</div></div>

<h3 id="-vlafoundation-model">🧠 VLA / Foundation Model</h3>
<p><img src="/assets/images/cat-vla.svg" alt="VLA / Foundation Model" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>2</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-12/2000400424927395986.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2000400424927395986">OmniVGGT: Omni-Modality Driven Visual Geometry Grounded Transformer OmniVGGT is a spatial foundation model that can effectively benefit from an arbitr...</a> (♥ 178)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1998179880421433754">StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling StreamVLN generates action outputs from continuous video input in an…</a> (♥ 44)</li>
</ul>

<h3 id="-ossツール">🔧 OSS / Tools</h3>
<p><img src="/assets/images/cat-oss.svg" alt="OSS / Tools" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>1</strong> post</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-12/2001050494567891026.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2001050494567891026">Pi-Long: Extending π3's Capabilities on Kilometer-scale with the Framework of VGGT-Long</a> (♥ 143)
</div></div>

<h3 id="-その他">💬 Other</h3>

<p><strong>3</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-12/2003299526845825130.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/2003299526845825130">Co-Me: Confidence Guided Token Merging for Visual Geometric Transformers</a> (♥ 99)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-12/1998860273734881508.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1998860273734881508">nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO)</a> (♥ 30)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/2006187895548498194">Boarded the wrong Shinkansen and ended up getting off at the station nearest my alma mater</a> (♥ 1)</li>
</ul>

<h2 id="-全ポスト一覧">📋 All Posts</h2>

<details>
<summary>Show all 20 posts</summary>

| # | Post | RT | ♥ | Date |
|--:|:-------|---:|--:|:-----|
| 1 | [3D-RE-GEN 3D Reconstruction of Indoor Scenes with a Generative Framework We prop...](https://x.com/rsasaki0109/status/2006289237696741658) | 32 | 278 | 2025-12-31 |
| 2 | [OmniVGGT: Omni-Modality Driven Visual Geometry Grounded Transformer OmniVGGT is ...](https://x.com/rsasaki0109/status/2000400424927395986) | 30 | 178 | 2025-12-15 |
| 3 | [[ICCV 2025] InfiniCube: Unbounded and Controllable Dynamic 3D Driving Scene Gene...](https://x.com/rsasaki0109/status/2005484629047005480) | 24 | 157 | 2025-12-29 |
| 4 | [Pi-Long: Extending π3's Capabilities on Kilometer-scale with the Framework of VG...](https://x.com/rsasaki0109/status/2001050494567891026) | 17 | 143 | 2025-12-16 |
| 5 | [DVGT: Driving Visual Geometry Transform DVGT, a universal visual geometry transf...](https://x.com/rsasaki0109/status/2005084949020561746) | 22 | 138 | 2025-12-28 |
| 6 | [Register Any Point: Scaling 3D Point Cloud Registration by Flow Matching Our met...](https://x.com/rsasaki0109/status/1998588491094175760) | 18 | 126 | 2025-12-10 |
| 7 | [VGGT-X: When VGGT Meets Dense Novel View Synthesis VGGT-X takes dense multi-view...](https://x.com/rsasaki0109/status/2000663920546341232) | 20 | 125 | 2025-12-15 |
| 8 | [Dataset release for RA-L 2025 paper "Towards Degradation-Robust High-Precision M...](https://x.com/rsasaki0109/status/2000003494913429549) | 24 | 123 | 2025-12-14 |
| 9 | [StereoSpace: Depth-Free Synthesis of Stereo Geometry via End-to-End Diffusion in...](https://x.com/rsasaki0109/status/2004386687393091907) | 19 | 116 | 2025-12-26 |
| 10 | [[AUTCON'25] BIMNet: Dataset and benchmark for as-built BIM reconstruction from r...](https://x.com/rsasaki0109/status/2005800420577526254) | 24 | 113 | 2025-12-30 |
| 11 | [Co-Me: Confidence Guided Token Merging for Visual Geometric Transformers](https://x.com/rsasaki0109/status/2003299526845825130) | 12 | 99 | 2025-12-23 |
| 12 | [E-RayZer: Self-supervised 3D Reconstruction as Spatial Visual Pre-training](https://x.com/rsasaki0109/status/2004695846965968932) | 14 | 89 | 2025-12-26 |
| 13 | [[NeurIPS 2025] SPIRAL: Semantic-Aware Progressive LiDAR Scene Generation and Und...](https://x.com/rsasaki0109/status/1999281961018490945) | 13 | 88 | 2025-12-12 |
| 14 | [[AAAI 2025] Official implementation of "DrivingForward: Feed-forward 3D Gaussian ...](https://x.com/rsasaki0109/status/1999585050904658194) | 16 | 87 | 2025-12-12 |
| 15 | [[RA-L'25 &amp; IROS'25] II-NVM: Enhancing Map Accuracy and Consistency with Normal V...](https://x.com/rsasaki0109/status/2004024302891642986) | 9 | 76 | 2025-12-25 |
| 16 | [[CVPR 2025] DV-Matcher: Deformation-based Non-Rigid Point Cloud Matching Guided ...](https://x.com/rsasaki0109/status/2003661924949447109) | 13 | 64 | 2025-12-24 |
| 17 | [StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modelin...](https://x.com/rsasaki0109/status/1998179880421433754) | 6 | 44 | 2025-12-08 |
| 18 | [nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (...](https://x.com/rsasaki0109/status/1998860273734881508) | 5 | 30 | 2025-12-10 |
| 19 | [名古屋にSEQSENSEの警備ロボットいた！](https://x.com/rsasaki0109/status/1999701889999638945) | 0 | 13 | 2025-12-13 |
| 20 | [新幹線間違えて母校最寄りで降りてしまった](https://x.com/rsasaki0109/status/2006187895548498194) | 0 | 1 | 2025-12-31 |

</details>]]></content><author><name>rsasaki0109</name></author><category term="monthly-summary" /><summary type="html"><![CDATA[20件のポスト｜3D再構成・SLAM (12件), 自動運転 (1件), ロボティクス (1件), VLA・Foundation Model (2件), OSS・ツール (1件), その他 (3件)｜1位: 3D-RE-GEN 3D Reconstruction of Indoor Scenes with a Generative Framework We prop...]]></summary></entry><entry><title type="html">2025年11月のポストまとめ</title><link href="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/11/01/2025-11-summary/" rel="alternate" type="text/html" title="2025年11月のポストまとめ" /><published>2025-11-01T00:00:00+00:00</published><updated>2025-11-01T00:00:00+00:00</updated><id>https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/11/01/2025-11-summary</id><content type="html" xml:base="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/11/01/2025-11-summary/"><![CDATA[<h2 id="-概要">📊 概要</h2>

<p><strong>20件</strong>のポスト（リプライ除く）を4カテゴリに分類しました。</p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">カテゴリ</th>
      <th style="text-align: right">件数</th>
      <th style="text-align: right">割合</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left">🏗️ 3D再構成・SLAM</td>
      <td style="text-align: right">14</td>
      <td style="text-align: right">██████████ 70%</td>
    </tr>
    <tr>
      <td style="text-align: left">🚗 自動運転</td>
      <td style="text-align: right">2</td>
      <td style="text-align: right">█ 10%</td>
    </tr>
    <tr>
      <td style="text-align: left">🤖 ロボティクス</td>
      <td style="text-align: right">3</td>
      <td style="text-align: right">██ 15%</td>
    </tr>
    <tr>
      <td style="text-align: left">📄 論文紹介</td>
      <td style="text-align: right">1</td>
      <td style="text-align: right">█ 5%</td>
    </tr>
  </tbody>
</table>

<h2 id="-人気トップ3">🏆 人気トップ3</h2>

<h3 id="-1位">🥇 1位</h3>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">49</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">416</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">18000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>[CVPR 2025 Highlight] SLAM3R: Real-Time Dense Scene Reconstruction from Monocular RGB Videos SLAM3R is a real-time dense scene reconstruction system that regresses 3D points from video frames using feed-forward neural networks, without explicitly</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1991297750454214998">ポストを見る</a></p>
</blockquote>

<hr />

<h3 id="-2位">🥈 2位</h3>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">42</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">316</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">14000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>[RA-L 2025] ActiveSplat: High-Fidelity Scene Reconstruction through Active Gaussian Splatting ActiveSplat enables the agent to explore the environment autonomously to build a 3D map on the fly. The integration of a Gaussian map and a Voronoi graph as…</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1991703117789098181">ポストを見る</a></p>
</blockquote>

<hr />

<h3 id="-3位">🥉 3位</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-11/1987449156118442012.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">33</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">224</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">11000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>NavMap NavMap is an open-source C++ and ROS 2 library for representing navigable surfaces for mobile robot navigation and localization. Unlike classic grid-based maps, NavMap stores the environment as triangular meshes (NavCels), enabling efficient q…</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1987449156118442012">ポストを見る</a></p>
</blockquote>

<hr />

<h2 id="-カテゴリ別ハイライト">📂 カテゴリ別ハイライト</h2>

<h3 id="️-3d再構成slam">🏗️ 3D再構成・SLAM</h3>
<p><img src="/assets/images/cat-3d-slam.svg" alt="3D再構成・SLAM" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>14件</strong>のポスト</p>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1991297750454214998">[CVPR 2025 Highlight] SLAM3R: Real-Time Dense Scene Reconstruction from Monocular RGB Videos SLAM3R is a real-time dense scene reconstruction system t…</a> (♥ 416)</li>
  <li><a href="https://x.com/rsasaki0109/status/1991703117789098181">[RA-L 2025] ActiveSplat: High-Fidelity Scene Reconstruction through Active Gaussian Splatting ActiveSplat enables the agent to explore the environment…</a> (♥ 316)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-11/1987449156118442012.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1987449156118442012">NavMap NavMap is an open-source C++ and ROS 2 library for representing navigable surfaces for mobile robot navigation and localization. Unlike classic...</a> (♥ 224)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-11/1990049507497939291.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1990049507497939291">Official implementation of "S2M2: Scalable Stereo Matching Model for Reliable Depth Estimation, ICCV 2025"</a> (♥ 211)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-11/1989509414957846952.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1989509414957846952">LightGlueStick a Fast and Robust Glue for Joint Point-Line Matching LightGlueStick adaptively adjusts its depth based on image difficulty, exiting aft...</a> (♥ 147)
</div></div>

<h3 id="-自動運転">🚗 自動運転</h3>
<p><img src="/assets/images/cat-autonomous.svg" alt="自動運転" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>2件</strong>のポスト</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-11/1993843996502946143.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1993843996502946143">form FORM is a LiDAR Odometry system that performs fixed-lag smoothing and sub-map reparations, all in real-time with minimal parameters.</a> (♥ 113)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-11/1994239834987143589.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1994239834987143589">[AAAI 2026 Oral] LiDARCrafter: Dynamic 4D World Modeling from LiDAR Sequences</a> (♥ 63)
</div></div>

<h3 id="-ロボティクス">🤖 ロボティクス</h3>
<p><img src="/assets/images/cat-robotics.svg" alt="ロボティクス" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>3件</strong>のポスト</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-11/1994609700500103558.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1994609700500103558">PHUMA: Physically-Grounded Humanoid Locomotion Dataset</a> (♥ 129)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-11/1995281044057338361.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1995281044057338361">TAVP Learning to See and Act: Task-Aware Virtual View Exploration for Robotic Manipulation TVVE employs an efficient exploration policy (MVEP), accele...</a> (♥ 55)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1993454653330407823">lwrclpy-for-FastDDSv3 An rclpy-compatible Python library built directly on Fast DDS v3—designed to solve the friction of using ROS 2 with Python ML/AI…</a> (♥ 27)</li>
</ul>

<h3 id="-論文紹介">📄 論文紹介</h3>
<p><img src="/assets/images/cat-paper.svg" alt="論文紹介" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>1件</strong>のポスト</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-11/1990615957266465129.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1990615957266465129">[arXiv 2025] Generative View Stitching This is the official repository for the paper Generative View Stitching (GVS), which enables collision-free cam...</a> (♥ 70)
</div></div>

<h2 id="-全ポスト一覧">📋 全ポスト一覧</h2>

<details>
<summary>全20件を表示</summary>

| # | ポスト | RT | ♥ | 日付 |
|--:|:-------|---:|--:|:-----|
| 1 | [[CVPR 2025 Highlight] SLAM3R: Real-Time Dense Scene Reconstruction from Monocula...](https://x.com/rsasaki0109/status/1991297750454214998) | 49 | 416 | 2025-11-20 |
| 2 | [[RA-L 2025] ActiveSplat: High-Fidelity Scene Reconstruction through Active Gauss...](https://x.com/rsasaki0109/status/1991703117789098181) | 42 | 316 | 2025-11-21 |
| 3 | [NavMap NavMap is an open-source C++ and ROS 2 library for representing navigable...](https://x.com/rsasaki0109/status/1987449156118442012) | 33 | 224 | 2025-11-09 |
| 4 | [Official implementation of "S2M2: Scalable Stereo Matching Model for Reliable De...](https://x.com/rsasaki0109/status/1990049507497939291) | 30 | 211 | 2025-11-16 |
| 5 | [LightGlueStick a Fast and Robust Glue for Joint Point-Line Matching LightGlueSti...](https://x.com/rsasaki0109/status/1989509414957846952) | 19 | 147 | 2025-11-15 |
| 6 | [[ICCV'25] 3D-MOOD: Lifting 2D to 3D for Monocular Open-Set Object Detection](https://x.com/rsasaki0109/status/1990253575772028969) | 19 | 141 | 2025-11-17 |
| 7 | [Online-3DGS-Monocular Monocular Online Reconstruction with Enhanced Detail Prese...](https://x.com/rsasaki0109/status/1993114656924483695) | 27 | 139 | 2025-11-25 |
| 8 | [Incrementally Building Room-Scale Language-Embedded Gaussian Splats (LEGS) with ...](https://x.com/rsasaki0109/status/1995123360380891343) | 26 | 130 | 2025-11-30 |
| 9 | [PHUMA: Physically-Grounded Humanoid Locomotion Dataset](https://x.com/rsasaki0109/status/1994609700500103558) | 30 | 129 | 2025-11-29 |
| 10 | [OVO Official repository of "Open-Vocabulary Online Semantic Mapping for SLAM"](https://x.com/rsasaki0109/status/1988441622414270548) | 18 | 119 | 2025-11-12 |
| 11 | [form FORM is a LiDAR Odometry system that performs fixed-lag smoothing and sub-m...](https://x.com/rsasaki0109/status/1993843996502946143) | 19 | 113 | 2025-11-27 |
| 12 | [[IEEE IROS'25] GSPR: Multimodal Place Recognition using 3D Gaussian Splatting fo...](https://x.com/rsasaki0109/status/1987716856627474735) | 19 | 99 | 2025-11-10 |
| 13 | [PointSt3R: Point Tracking Through 3D Grounded Correspondence This is the officia...](https://x.com/rsasaki0109/status/1990978344930656432) | 15 | 89 | 2025-11-19 |
| 14 | [D-LIO: 6DoF Direct LiDAR-Inertial Odometry based on Simultaneous Truncated Dista...](https://x.com/rsasaki0109/status/1988730313787478269) | 14 | 87 | 2025-11-12 |
| 15 | [[arXiv 2025] Generative View Stitching This is the official repository for the p...](https://x.com/rsasaki0109/status/1990615957266465129) | 7 | 70 | 2025-11-18 |
| 16 | [[AAAI 2026 Oral] LiDARCrafter: Dynamic 4D World Modeling from LiDAR Sequences](https://x.com/rsasaki0109/status/1994239834987143589) | 11 | 63 | 2025-11-28 |
| 17 | [TAVP Learning to See and Act: Task-Aware Virtual View Exploration for Robotic Ma...](https://x.com/rsasaki0109/status/1995281044057338361) | 12 | 55 | 2025-11-30 |
| 18 | [lwrclpy-for-FastDDSv3 An rclpy-compatible Python library built directly on Fast ...](https://x.com/rsasaki0109/status/1993454653330407823) | 7 | 27 | 2025-11-25 |
| 19 | [DCReg: Decoupled Characterization for Efficient Degenerate LiDAR Registration DC...](https://x.com/rsasaki0109/status/1988079238126129506) | 5 | 26 | 2025-11-11 |
| 20 | [[RA-L] SHeRLoc: Synchronized Heterogeneous Radar Place Recognition for Cross-Mod...](https://x.com/rsasaki0109/status/1989122369814950304) | 10 | 20 | 2025-11-14 |

</details>]]></content><author><name>rsasaki0109</name></author><category term="monthly-summary" /><summary type="html"><![CDATA[20件のポスト｜3D再構成・SLAM (14件), 自動運転 (2件), ロボティクス (3件), 論文紹介 (1件)｜1位: [CVPR 2025 Highlight] SLAM3R: Real-Time Dense Scene Reconstruction from Monocula...]]></summary></entry><entry><title type="html">2025年10月のポストまとめ</title><link href="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/10/01/2025-10-summary/" rel="alternate" type="text/html" title="2025年10月のポストまとめ" /><published>2025-10-01T00:00:00+00:00</published><updated>2025-10-01T00:00:00+00:00</updated><id>https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/10/01/2025-10-summary</id><content type="html" xml:base="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/10/01/2025-10-summary/"><![CDATA[<h2 id="-概要">📊 概要</h2>

<p><strong>17件</strong>のポスト（リプライ除く）を4カテゴリに分類しました。</p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">カテゴリ</th>
      <th style="text-align: right">件数</th>
      <th style="text-align: right">割合</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left">🏗️ 3D再構成・SLAM</td>
      <td style="text-align: right">10</td>
      <td style="text-align: right">██████████ 59%</td>
    </tr>
    <tr>
      <td style="text-align: left">🚗 自動運転</td>
      <td style="text-align: right">2</td>
      <td style="text-align: right">██ 12%</td>
    </tr>
    <tr>
      <td style="text-align: left">🤖 ロボティクス</td>
      <td style="text-align: right">1</td>
      <td style="text-align: right">█ 6%</td>
    </tr>
    <tr>
      <td style="text-align: left">💬 その他</td>
      <td style="text-align: right">4</td>
      <td style="text-align: right">████ 24%</td>
    </tr>
  </tbody>
</table>

<h2 id="-人気トップ3">🏆 人気トップ3</h2>

<h3 id="-1位">🥇 1位</h3>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">22</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">203</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">11000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>TTT3R: 3D Reconstruction as Test-Time Training TL;DR: A simple state update rule to enhance length generalization for CUT3R.</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1977892535248019519">ポストを見る</a></p>
</blockquote>

<hr />

<h3 id="-2位">🥈 2位</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-10/1979368712357728497.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">25</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">194</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">8700</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>[NeurIPS 2025] Pixel-Perfect Depth This work presents Pixel-Perfect Depth, a monocular depth estimation model with pixel-space diffusion transformers. Compared to existing discriminative and generative models, its estimated depth maps can produce</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1979368712357728497">ポストを見る</a></p>
</blockquote>

<hr />

<h3 id="-3位">🥉 3位</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-10/1982241079249281123.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">23</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">191</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">10000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>Trace Anything: Representing Any Video in 4D via Trajectory Fields</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1982241079249281123">ポストを見る</a></p>
</blockquote>

<hr />

<h2 id="-カテゴリ別ハイライト">📂 カテゴリ別ハイライト</h2>

<h3 id="️-3d再構成slam">🏗️ 3D再構成・SLAM</h3>
<p><img src="/assets/images/cat-3d-slam.svg" alt="3D再構成・SLAM" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>10件</strong>のポスト</p>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1977892535248019519">TTT3R: 3D Reconstruction as Test-Time Training TL;DR: A simple state update rule to enhance length generalization for CUT3R.</a> (♥ 203)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-10/1979368712357728497.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1979368712357728497">[NeurIPS 2025] Pixel-Perfect Depth This work presents Pixel-Perfect Depth, a monocular depth estimation model with pixel-space diffusion transformers....</a> (♥ 194)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-10/1980068025035616525.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1980068025035616525">[AAAI 2025 Oral] FlowPolicy: Enabling Fast and Robust 3D Flow-based Policy via Consistency Flow Matching for Robot Manipulation</a> (♥ 131)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-10/1982928596152053802.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1982928596152053802">LiteVLoc: Map-Lite Visual Localization for Image-Goal Navigation (ICRA2025) LiteVLoc is a hierarchical visual localization framework designed to enabl...</a> (♥ 113)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1978657160792871070">USplat4D: Uncertainty Matters in Dynamic Gaussian Splatting for Monocular 4D Reconstruction</a> (♥ 100)</li>
</ul>

<h3 id="-自動運転">🚗 自動運転</h3>
<p><img src="/assets/images/cat-autonomous.svg" alt="自動運転" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>2件</strong>のポスト</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-10/1977235679932563612.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1977235679932563612">autonomous-valet-parking Framework for Autonomous Valet Parking (AVP) with Autoware. Combines a YOLO detection server, Unity parking spot scripts, and...</a> (♥ 137)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-10/1983312108025721096.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1983312108025721096">visibility_rrt The Visibility-Aware RRT* implementation for safety-critical navigation with perception-limited robots.[RA-L 2025]</a> (♥ 63)
</div></div>

<h3 id="-ロボティクス">🤖 ロボティクス</h3>
<p><img src="/assets/images/cat-robotics.svg" alt="ロボティクス" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>1件</strong>のポスト</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-10/1981163623411306933.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1981163623411306933">[IEEE RA-L] Air-IO: a learning-based IO framework targeted for UAV</a> (♥ 15)
</div></div>

<h3 id="-その他">💬 その他</h3>

<p><strong>4件</strong>のポスト</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-10/1982241079249281123.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1982241079249281123">Trace Anything: Representing Any Video in 4D via Trajectory Fields</a> (♥ 191)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-10/1983730590966047110.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1983730590966047110">online_adaptive_cbf Implementation of the Online Adaptive CBF(Cost Barrier Function) for safety-critical navigation for input constrained systems.</a> (♥ 47)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-10/1981519821088182549.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1981519821088182549">Fast-ShapeAndPose Shape and pose estimation via eigenproblem with eigenvector nonlinearities. Category-Level Shape and Pose Estimation in Less Than a ...</a> (♥ 38)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1980577802828341730">国際学会の時期になるとフォロワー数が伸びる</a> (♥ 2)</li>
</ul>

<h2 id="-全ポスト一覧">📋 全ポスト一覧</h2>

<details>
<summary>全17件を表示</summary>

| # | ポスト | RT | ♥ | 日付 |
|--:|:-------|---:|--:|:-----|
| 1 | [TTT3R: 3D Reconstruction as Test-Time Training TL;DR: A simple state update rule...](https://x.com/rsasaki0109/status/1977892535248019519) | 22 | 203 | 2025-10-14 |
| 2 | [[NeurIPS 2025] Pixel-Perfect Depth This work presents Pixel-Perfect Depth, a mon...](https://x.com/rsasaki0109/status/1979368712357728497) | 25 | 194 | 2025-10-18 |
| 3 | [Trace Anything: Representing Any Video in 4D via Trajectory Fields](https://x.com/rsasaki0109/status/1982241079249281123) | 23 | 191 | 2025-10-26 |
| 4 | [autonomous-valet-parking Framework for Autonomous Valet Parking (AVP) with Autow...](https://x.com/rsasaki0109/status/1977235679932563612) | 27 | 137 | 2025-10-12 |
| 5 | [[AAAI 2025 Oral] FlowPolicy: Enabling Fast and Robust 3D Flow-based Policy via C...](https://x.com/rsasaki0109/status/1980068025035616525) | 19 | 131 | 2025-10-20 |
| 6 | [LiteVLoc: Map-Lite Visual Localization for Image-Goal Navigation (ICRA2025) Lite...](https://x.com/rsasaki0109/status/1982928596152053802) | 22 | 113 | 2025-10-27 |
| 7 | [USplat4D: Uncertainty Matters in Dynamic Gaussian Splatting for Monocular 4D Rec...](https://x.com/rsasaki0109/status/1978657160792871070) | 10 | 100 | 2025-10-16 |
| 8 | [pi3det [ICCV 2025] Perspective-Invariant 3D Object Detection](https://x.com/rsasaki0109/status/1979816073911488678) | 9 | 75 | 2025-10-19 |
| 9 | [visibility_rrt The Visibility-Aware RRT* implementation for safety-critical navi...](https://x.com/rsasaki0109/status/1983312108025721096) | 7 | 63 | 2025-10-28 |
| 10 | [online_adaptive_cbf Implementation of the Online Adaptive CBF (Control Barrier Funct...](https://x.com/rsasaki0109/status/1983730590966047110) | 10 | 47 | 2025-10-30 |
| 11 | [[RA-L'25] CLID-SLAM: A Coupled LiDAR-Inertial Neural Implicit Dense SLAM with Re...](https://x.com/rsasaki0109/status/1977586187914318207) | 5 | 44 | 2025-10-13 |
| 12 | [btsa A Dynamic-Aware LIO Framework Via Spatio-Temporal Normal Analysis &gt; Figure ...](https://x.com/rsasaki0109/status/1982562738191954299) | 10 | 42 | 2025-10-26 |
| 13 | [Fast-ShapeAndPose Shape and pose estimation via eigenproblem with eigenvector no...](https://x.com/rsasaki0109/status/1981519821088182549) | 5 | 38 | 2025-10-24 |
| 14 | [Mesh-RFT: Enhancing Mesh Generation via Fine-grained Reinforcement Fine-Tuning](https://x.com/rsasaki0109/status/1980779349138980888) | 2 | 17 | 2025-10-21 |
| 15 | [[IEEE RA-L] Air-IO: a learning-based IO framework targeted for UAV](https://x.com/rsasaki0109/status/1981163623411306933) | 4 | 15 | 2025-10-23 |
| 16 | [MaterialRefGS: Reflective Gaussian Splatting with Multi-view Consistent Material...](https://x.com/rsasaki0109/status/1978981221389734126) | 0 | 13 | 2025-10-17 |
| 17 | [国際学会の時期になるとフォロワー数が伸びる](https://x.com/rsasaki0109/status/1980577802828341730) | 0 | 2 | 2025-10-21 |

</details>]]></content><author><name>rsasaki0109</name></author><category term="monthly-summary" /><summary type="html"><![CDATA[17件のポスト｜3D再構成・SLAM (10件), 自動運転 (2件), ロボティクス (1件), その他 (4件)｜1位: TTT3R: 3D Reconstruction as Test-Time Training TL;DR: A simple state update rule...]]></summary></entry><entry><title type="html">2025年9月のポストまとめ</title><link href="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/09/01/2025-09-summary/" rel="alternate" type="text/html" title="2025年9月のポストまとめ" /><published>2025-09-01T00:00:00+00:00</published><updated>2025-09-01T00:00:00+00:00</updated><id>https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/09/01/2025-09-summary</id><content type="html" xml:base="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/09/01/2025-09-summary/"><![CDATA[<h2 id="-概要">📊 概要</h2>

<p><strong>20件</strong>のポスト（リプライ除く）を5カテゴリに分類しました。</p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">カテゴリ</th>
      <th style="text-align: right">件数</th>
      <th style="text-align: right">割合</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left">🏗️ 3D再構成・SLAM</td>
      <td style="text-align: right">10</td>
      <td style="text-align: right">██████████ 50%</td>
    </tr>
    <tr>
      <td style="text-align: left">🚗 自動運転</td>
      <td style="text-align: right">4</td>
      <td style="text-align: right">████ 20%</td>
    </tr>
    <tr>
      <td style="text-align: left">🤖 ロボティクス</td>
      <td style="text-align: right">2</td>
      <td style="text-align: right">██ 10%</td>
    </tr>
    <tr>
      <td style="text-align: left">🧠 VLA・Foundation Model</td>
      <td style="text-align: right">1</td>
      <td style="text-align: right">█ 5%</td>
    </tr>
    <tr>
      <td style="text-align: left">📄 論文紹介</td>
      <td style="text-align: right">3</td>
      <td style="text-align: right">███ 15%</td>
    </tr>
  </tbody>
</table>

<h2 id="-人気トップ3">🏆 人気トップ3</h2>

<h3 id="-1位">🥇 1位</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-09/1971756431117635694.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">70</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">533</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">33000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>[NeurIPS 2025] Official Implementation of DINO-Foresight: Looking into the Future with DINO</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1971756431117635694">ポストを見る</a></p>
</blockquote>

<hr />

<h3 id="-2位">🥈 2位</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-09/1964849853089108285.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left"> </th>
      <th style="text-align: left"> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">69</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">395</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">22000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>RKO_LIO - LiDAR-Inertial Odometry Without Sensor-Specific Modelling Four different platforms, four different environments, one odometry system</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1964849853089108285">View post</a></p>
</blockquote>

<hr />

<h3 id="-3位">🥉 3rd Place</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-09/1965611185971015864.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Metric</th>
      <th style="text-align: left">Value</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">32</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">257</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">15,000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>[CVPR 25] Vid2Sim: Realistic and Interactive Simulation from Video for Urban Navigation Vid2Sim is a novel framework that converts monocular videos into photorealistic and physically interactive simulation environments for training embodied agents wi…</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1965611185971015864">View post</a></p>
</blockquote>

<hr />

<h2 id="-カテゴリ別ハイライト">📂 Category Highlights</h2>

<h3 id="️-3d再構成slam">🏗️ 3D Reconstruction &amp; SLAM</h3>
<p><img src="/assets/images/cat-3d-slam.svg" alt="3D Reconstruction &amp; SLAM" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>10</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-09/1969245262075085309.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1969245262075085309">MapAnything: Universal Feed-Forward Metric 3D Reconstruction Meta/Carnegie Mellon University MapAnything is a simple, end-to-end trained transformer m...</a> (♥ 225)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1966762922031673753">ViSTA-SLAM: Visual SLAM with Symmetric Two-view Association ViSTA-SLAM is a real-time monocular dense SLAM pipeline that combines a Symmetric Two-view…</a> (♥ 180)</li>
  <li><a href="https://x.com/rsasaki0109/status/1965248799255003201">ORB-SLAM-Python ORB_SLAM3 Python Bindings</a> (♥ 155)</li>
  <li><a href="https://x.com/rsasaki0109/status/1971409393427284455">[CoRL 2025] ActLoc: Learning to Localize on the Move via Active Viewpoint Selection &gt; We present ActLoc, a learning-based approach for active viewpoin…</a> (♥ 150)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-09/1968510293937393687.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1968510293937393687">Rohbau3D A Shell Construction Site 3D Point Cloud Dataset &gt; We introduce Rohbau3D, a novel dataset of 3D point clouds that realistically represent ind...</a> (♥ 134)
</div></div>

<h3 id="-自動運転">🚗 Autonomous Driving</h3>
<p><img src="/assets/images/cat-autonomous.svg" alt="Autonomous Driving" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>4</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-09/1964849853089108285.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1964849853089108285">RKO_LIO - LiDAR-Inertial Odometry Without Sensor-Specific Modelling Four different platforms, four different environments, one odometry system</a> (♥ 395)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1963436866155249934">DWPP: Dynamic Window Pure Pursuit for Robot Path Tracking Considering Velocity and Acceleration Constraints</a> (♥ 41)</li>
  <li><a href="https://x.com/rsasaki0109/status/1968872682293748193">gnss-lidar-dataprocessing Processes gnss/lidar data from raw files PCAP conversion files (PCAPtoROS) can be built in a ros2 workspace. You can just pu…</a> (♥ 26)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-09/1971047011555459302.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1971047011555459302">FAPP [T-RO'24] Fast and Adaptive Perception and Planning for UAVs in Dynamic Cluttered Environments</a> (♥ 20)
</div></div>

<h3 id="-ロボティクス">🤖 Robotics</h3>
<p><img src="/assets/images/cat-robotics.svg" alt="Robotics" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>1</strong> post</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-09/1965973572586803255.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1965973572586803255">learn-to-calibrate [IROS2025] The World's First RL-based Sensor Calibration Method: A Targetless, User-friendly, and Robust Approach. L2Calib: SE(3)-M...</a> (♥ 68)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1968147901390991851">dora-rs DORA (Dataflow-Oriented Robotic Architecture) is middleware designed to streamline and simplify the creation of AI-based robotic applications….</a> (♥ 26)</li>
</ul>

<h3 id="-vlafoundation-model">🧠 VLA / Foundation Models</h3>
<p><img src="/assets/images/cat-vla.svg" alt="VLA / Foundation Models" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>1</strong> post</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-09/1965611185971015864.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1965611185971015864">[CVPR 25] Vid2Sim: Realistic and Interactive Simulation from Video for Urban Navigation Vid2Sim is a novel framework that converts monocular videos in...</a> (♥ 257)
</div></div>

<h3 id="-論文紹介">📄 Papers</h3>
<p><img src="/assets/images/cat-paper.svg" alt="Papers" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>3</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-09/1971756431117635694.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1971756431117635694">[NeurIPS 2025] Official Implementation of DINO-Foresight: Looking into the Future with DINO</a> (♥ 533)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1972496555975512220">[NeurIPS 2025 (Spotlight)] The implementation for the paper “4DGT Learning a 4D Gaussian Transformer Using Real-World Monocular Videos”</a> (♥ 224)</li>
  <li><a href="https://x.com/rsasaki0109/status/1963799813243486650">Awesome-Image-Matching Bibliographic list for papers of image matching</a> (♥ 48)</li>
</ul>

<h2 id="-全ポスト一覧">📋 All Posts</h2>

<details>
<summary>Show all 20 posts</summary>

| # | Post | RT | ♥ | Date |
|--:|:-------|---:|--:|:-----|
| 1 | [[NeurIPS 2025] Official Implementation of DINO-Foresight: Looking into the Futur...](https://x.com/rsasaki0109/status/1971756431117635694) | 70 | 533 | 2025-09-27 |
| 2 | [RKO_LIO - LiDAR-Inertial Odometry Without Sensor-Specific Modelling Four differe...](https://x.com/rsasaki0109/status/1964849853089108285) | 69 | 395 | 2025-09-08 |
| 3 | [[CVPR 25] Vid2Sim: Realistic and Interactive Simulation from Video for Urban Nav...](https://x.com/rsasaki0109/status/1965611185971015864) | 32 | 257 | 2025-09-10 |
| 4 | [MapAnything: Universal Feed-Forward Metric 3D Reconstruction Meta/Carnegie Mello...](https://x.com/rsasaki0109/status/1969245262075085309) | 25 | 225 | 2025-09-20 |
| 5 | [[NeurIPS 2025 (Spotlight)] The implementation for the paper "4DGT Learning a 4D ...](https://x.com/rsasaki0109/status/1972496555975512220) | 32 | 224 | 2025-09-29 |
| 6 | [ViSTA-SLAM: Visual SLAM with Symmetric Two-view Association ViSTA-SLAM is a real...](https://x.com/rsasaki0109/status/1966762922031673753) | 23 | 180 | 2025-09-13 |
| 7 | [ORB-SLAM-Python ORB_SLAM3 Python Bindings](https://x.com/rsasaki0109/status/1965248799255003201) | 29 | 155 | 2025-09-09 |
| 8 | [[CoRL 2025] ActLoc: Learning to Localize on the Move via Active Viewpoint Select...](https://x.com/rsasaki0109/status/1971409393427284455) | 26 | 150 | 2025-09-26 |
| 9 | [Rohbau3D A Shell Construction Site 3D Point Cloud Dataset &gt; We introduce Rohbau3...](https://x.com/rsasaki0109/status/1968510293937393687) | 15 | 134 | 2025-09-18 |
| 10 | [[T-RO'25] HiMo: High-Speed Objects Motion Compensation in Point Clouds](https://x.com/rsasaki0109/status/1967360673652048233) | 15 | 101 | 2025-09-14 |
| 11 | [SAIL-Recon: Large SfM by Augmenting Scene Regression with Localization](https://x.com/rsasaki0109/status/1967785515534762455) | 17 | 97 | 2025-09-16 |
| 12 | [learn-to-calibrate [IROS2025] The World's First RL-based Sensor Calibration Meth...](https://x.com/rsasaki0109/status/1965973572586803255) | 16 | 68 | 2025-09-11 |
| 13 | [Awesome-Image-Matching Bibliographic list for papers of image matching](https://x.com/rsasaki0109/status/1963799813243486650) | 4 | 48 | 2025-09-05 |
| 14 | [DWPP: Dynamic Window Pure Pursuit for Robot Path Tracking Considering Velocity a...](https://x.com/rsasaki0109/status/1963436866155249934) | 6 | 41 | 2025-09-04 |
| 15 | [Super-LIO A Robust and Efficient LiDAR-Inertial Odometry System with a Compact M...](https://x.com/rsasaki0109/status/1970684625489190986) | 4 | 35 | 2025-09-24 |
| 16 | [EYOC Extend Your Own Correspondences: Unsupervised Distant Point Cloud Registrat...](https://x.com/rsasaki0109/status/1966341028765643264) | 5 | 30 | 2025-09-12 |
| 17 | [gnss-lidar-dataprocessing Processes gnss/lidar data from raw files PCAP conversi...](https://x.com/rsasaki0109/status/1968872682293748193) | 3 | 26 | 2025-09-19 |
| 18 | [dora-rs DORA (Dataflow-Oriented Robotic Architecture) is middleware designed to ...](https://x.com/rsasaki0109/status/1968147901390991851) | 5 | 26 | 2025-09-17 |
| 19 | [DISCOVERSE: Efficient Robot Simulation in Complex High-Fidelity Environments A u...](https://x.com/rsasaki0109/status/1972964648547668087) | 3 | 25 | 2025-09-30 |
| 20 | [FAPP [T-RO'24] Fast and Adaptive Perception and Planning for UAVs in Dynamic Clu...](https://x.com/rsasaki0109/status/1971047011555459302) | 3 | 20 | 2025-09-25 |

</details>]]></content><author><name>rsasaki0109</name></author><category term="monthly-summary" /><summary type="html"><![CDATA[20 posts | 3D Reconstruction &amp; SLAM (10), Autonomous Driving (4), Robotics (2), VLA / Foundation Models (1), Papers (3) | Top post: [NeurIPS 2025] Official Implementation of DINO-Foresight: Looking into the Futur...]]></summary></entry><entry><title type="html">August 2025 Post Summary</title><link href="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/08/01/2025-08-summary/" rel="alternate" type="text/html" title="August 2025 Post Summary" /><published>2025-08-01T00:00:00+00:00</published><updated>2025-08-01T00:00:00+00:00</updated><id>https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/08/01/2025-08-summary</id><content type="html" xml:base="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/08/01/2025-08-summary/"><![CDATA[<h2 id="-概要">📊 Overview</h2>

<p><strong>19</strong> posts (excluding replies), classified into 7 categories.</p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Category</th>
      <th style="text-align: right">Count</th>
      <th style="text-align: right">Share</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left">🏗️ 3D Reconstruction &amp; SLAM</td>
      <td style="text-align: right">8</td>
      <td style="text-align: right">██████████ 42%</td>
    </tr>
    <tr>
      <td style="text-align: left">🚗 Autonomous Driving</td>
      <td style="text-align: right">2</td>
      <td style="text-align: right">██ 11%</td>
    </tr>
    <tr>
      <td style="text-align: left">🤖 Robotics</td>
      <td style="text-align: right">2</td>
      <td style="text-align: right">██ 11%</td>
    </tr>
    <tr>
      <td style="text-align: left">🧠 VLA / Foundation Models</td>
      <td style="text-align: right">2</td>
      <td style="text-align: right">██ 11%</td>
    </tr>
    <tr>
      <td style="text-align: left">📄 Papers</td>
      <td style="text-align: right">2</td>
      <td style="text-align: right">██ 11%</td>
    </tr>
    <tr>
      <td style="text-align: left">🔧 OSS &amp; Tools</td>
      <td style="text-align: right">1</td>
      <td style="text-align: right">█ 5%</td>
    </tr>
    <tr>
      <td style="text-align: left">💬 Other</td>
      <td style="text-align: right">2</td>
      <td style="text-align: right">██ 11%</td>
    </tr>
  </tbody>
</table>
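The Share bars in these category tables appear to scale each category's count against the largest category, rendered as up to ten block characters (8 of 8 posts fills the bar, 1 of 8 shows a single block). A minimal sketch of that encoding, with `share_bar` as a hypothetical helper rather than part of the actual site generator:

```python
def share_bar(count: int, max_count: int, width: int = 10) -> str:
    """Render a block bar proportional to count, scaled so that the
    largest category fills the full width."""
    return "█" * round(count / max_count * width)

# August 2025: the largest category (3D Reconstruction & SLAM) has 8 posts.
print(share_bar(8, 8))  # ██████████
print(share_bar(2, 8))  # ██
print(share_bar(1, 8))  # █
```

Note that Python's `round` uses banker's rounding, so `share_bar(2, 8)` yields 2 blocks (2.5 rounds to the even 2), which matches the bars shown in the tables.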

<h2 id="-人気トップ3">🏆 Top 3 Posts</h2>

<h3 id="-1位">🥇 1st Place</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-08/1961216568706437322.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Metric</th>
      <th style="text-align: left">Value</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">39</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">223</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">12,000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>[ICRA’25] One Map to Find Them All: Real-time Open-Vocabulary Mapping for Zero-shot Multi-Object Navigation</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1961216568706437322">View post</a></p>
</blockquote>

<hr />

<h3 id="-2位">🥈 2nd Place</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-08/1955464334349336685.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Metric</th>
      <th style="text-align: left">Value</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">22</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">169</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">11,000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>[CVPR 2025] Sparse Voxels Rasterization: Real-time High-fidelity Radiance Field Rendering</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1955464334349336685">View post</a></p>
</blockquote>

<hr />

<h3 id="-3位">🥉 3rd Place</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-08/1960552739978797098.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Metric</th>
      <th style="text-align: left">Value</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">21</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">148</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">10,000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>[ICCV 2025] This is the official implementation of POMATO: Marrying Pointmap Matching with Temporal Motions for Dynamic 3D Reconstruction</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1960552739978797098">View post</a></p>
</blockquote>

<hr />

<h2 id="-カテゴリ別ハイライト">📂 Category Highlights</h2>

<h3 id="️-3d再構成slam">🏗️ 3D Reconstruction &amp; SLAM</h3>
<p><img src="/assets/images/cat-3d-slam.svg" alt="3D Reconstruction &amp; SLAM" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>8</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-08/1961216568706437322.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1961216568706437322">[ICRA'25] One Map to Find Them All: Real-time Open-Vocabulary Mapping for Zero-shot Multi-Object Navigation</a> (♥ 223)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-08/1955464334349336685.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1955464334349336685">[CVPR 2025] Sparse Voxels Rasterization: Real-time High-fidelity Radiance Field Rendering</a> (♥ 169)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-08/1960552739978797098.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1960552739978797098">[ICCV 2025] This is the official implementation of POMATO: Marrying Pointmap Matching with Temporal Motions for Dynamic 3D Reconstruction</a> (♥ 148)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1958725818638536753">Awesome-Transformer-based-SLAM Paper Survey for Transformer-based SLAM</a> (♥ 136)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-08/1956575158744797341.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1956575158744797341">ArtiScene: Language-Driven Artistic 3D Scene Generation Through Image Intermediary</a> (♥ 89)
</div></div>

<h3 id="-自動運転">🚗 Autonomous Driving</h3>
<p><img src="/assets/images/cat-autonomous.svg" alt="Autonomous Driving" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>2</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-08/1957245627685032028.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1957245627685032028">CogniPlan: Uncertainty-Guided Path Planning with Conditional Generative Layout Prediction TL;DR CogniPlan is a path planning framework that leverages ...</a> (♥ 85)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1959780682273832968">autoware.privately-owned-vehicles An open-source autonomous highway pilot system for privately owned vehicles</a> (♥ 12)</li>
</ul>

<h3 id="-ロボティクス">🤖 Robotics</h3>
<p><img src="/assets/images/cat-robotics.svg" alt="Robotics" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>2</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-08/1959488543136317810.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1959488543136317810">BotVIO: A Lightweight Transformer-Based Visual-Inertial Odometry for Robotics</a> (♥ 118)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-08/1961621214650405152.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1961621214650405152">minecraft_ros2 ros2 minecraft mod</a> (♥ 44)
</div></div>

<h3 id="-vlafoundation-model">🧠 VLA / Foundation Models</h3>
<p><img src="/assets/images/cat-vla.svg" alt="VLA / Foundation Models" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>2</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-08/1956189106754511166.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1956189106754511166">ICCV 2025 | TesserAct: Learning 4D Embodied World Models</a> (♥ 16)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-08/1960175376682111304.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1960175376682111304">LLMDet: Learning Strong Open-Vocabulary Object Detectors under the Supervision of Large Language Models</a> (♥ 9)
</div></div>

<h3 id="-論文紹介">📄 Papers</h3>
<p><img src="/assets/images/cat-paper.svg" alt="Papers" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>2</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-08/1961987308065845724.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1961987308065845724">STRIDE-QA: Visual Question Answering Dataset for Spatiotemporal Reasoning in Urban Driving Scenes</a> (♥ 75)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-08/1958363429728821282.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1958363429728821282">Proteina: Scaling Flow-based Protein Structure Generative Models (ICLR 2025 Oral Paper) Proteina is a new large-scale flow-based protein backbone gene...</a> (♥ 11)
</div></div>

<h3 id="-ossツール">🔧 OSS &amp; Tools</h3>
<p><img src="/assets/images/cat-oss.svg" alt="OSS &amp; Tools" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>1</strong> post</p>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1959930012003451055">I have an odd attachment to the default Github icon and have been using it elsewhere too, but it's abstract and hard to remember, so I'd like to switch to a different icon; I just don't have any ideas</a> (♥ 0)</li>
</ul>

<h3 id="-その他">💬 Other</h3>

<p><strong>2</strong> posts</p>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1958483591568240775">I was thinking about my personal values and settled on the following: win-win; think carefully, act quickly</a> (♥ 13)</li>
  <li><a href="https://x.com/rsasaki0109/status/1961624809722581461">Thinking about a background for my X account; maybe I'll go shoot a photo of a nearby environment prone to multipath</a> (♥ 2)</li>
</ul>

<h2 id="-全ポスト一覧">📋 All Posts</h2>

<details>
<summary>Show all 19 posts</summary>

| # | Post | RT | ♥ | Date |
|--:|:-------|---:|--:|:-----|
| 1 | [[ICRA'25] One Map to Find Them All: Real-time Open-Vocabulary Mapping for Zero-s...](https://x.com/rsasaki0109/status/1961216568706437322) | 39 | 223 | 2025-08-28 |
| 2 | [[CVPR 2025] Sparse Voxels Rasterization: Real-time High-fidelity Radiance Field ...](https://x.com/rsasaki0109/status/1955464334349336685) | 22 | 169 | 2025-08-13 |
| 3 | [[ICCV 2025] This is the official implementation of POMATO: Marrying Pointmap Mat...](https://x.com/rsasaki0109/status/1960552739978797098) | 21 | 148 | 2025-08-27 |
| 4 | [Awesome-Transformer-based-SLAM Paper Survey for Transformer-based SLAM](https://x.com/rsasaki0109/status/1958725818638536753) | 22 | 136 | 2025-08-22 |
| 5 | [BotVIO: A Lightweight Transformer-Based Visual-Inertial Odometry for Robotics](https://x.com/rsasaki0109/status/1959488543136317810) | 22 | 118 | 2025-08-24 |
| 6 | [ArtiScene: Language-Driven Artistic 3D Scene Generation Through Image Intermedia...](https://x.com/rsasaki0109/status/1956575158744797341) | 8 | 89 | 2025-08-16 |
| 7 | [CogniPlan: Uncertainty-Guided Path Planning with Conditional Generative Layout P...](https://x.com/rsasaki0109/status/1957245627685032028) | 12 | 85 | 2025-08-18 |
| 8 | [[ICCV 2025] GLEAM: Learning Generalizable Exploration Policy for Active Mapping ...](https://x.com/rsasaki0109/status/1955826722345238539) | 11 | 79 | 2025-08-14 |
| 9 | [STRIDE-QA: Visual Question Answering Dataset for Spatiotemporal Reasoning in Urb...](https://x.com/rsasaki0109/status/1961987308065845724) | 11 | 75 | 2025-08-31 |
| 10 | [minecraft_ros2 ros2 minecraft mod](https://x.com/rsasaki0109/status/1961621214650405152) | 7 | 44 | 2025-08-30 |
| 11 | [MapBEVPrediction Accelerating Online Mapping and Behavior Prediction via Direct ...](https://x.com/rsasaki0109/status/1957612593369215412) | 3 | 26 | 2025-08-19 |
| 12 | [ICCV 2025 | TesserAct: Learning 4D Embodied World Models](https://x.com/rsasaki0109/status/1956189106754511166) | 2 | 16 | 2025-08-15 |
| 13 | [Decompositional Neural Scene Reconstruction with Generative Diffusion PriorCVPR ...](https://x.com/rsasaki0109/status/1960900149586354664) | 1 | 14 | 2025-08-28 |
| 14 | [I was thinking about my personal values and settled on the following: win-win; think carefully, act quickly](https://x.com/rsasaki0109/status/1958483591568240775) | 0 | 13 | 2025-08-21 |
| 15 | [autoware.privately-owned-vehicles An open-source autonomous highway pilot system...](https://x.com/rsasaki0109/status/1959780682273832968) | 3 | 12 | 2025-08-25 |
| 16 | [Proteina: Scaling Flow-based Protein Structure Generative Models (ICLR 2025 Oral...](https://x.com/rsasaki0109/status/1958363429728821282) | 2 | 11 | 2025-08-21 |
| 17 | [LLMDet: Learning Strong Open-Vocabulary Object Detectors under the Supervision o...](https://x.com/rsasaki0109/status/1960175376682111304) | 2 | 9 | 2025-08-26 |
| 18 | [Thinking about a background for my X account; maybe I'll go shoot a photo of a nearby environment prone to multipath](https://x.com/rsasaki0109/status/1961624809722581461) | 0 | 2 | 2025-08-30 |
| 19 | [I have an odd attachment to the default Github icon and have been using it elsewhere too, but it's abstract and hard to remember, so I'd like to switch to a different icon; I just don't have any ideas](https://x.com/rsasaki0109/status/1959930012003451055) | 0 | 0 | 2025-08-25 |

</details>]]></content><author><name>rsasaki0109</name></author><category term="monthly-summary" /><summary type="html"><![CDATA[19 posts | 3D Reconstruction &amp; SLAM (8), Autonomous Driving (2), Robotics (2), VLA / Foundation Models (2), Papers (2), OSS &amp; Tools (1), Other (2) | Top post: [ICRA'25] One Map to Find Them All: Real-time Open-Vocabulary Mapping for Zero-s...]]></summary></entry><entry><title type="html">July 2025 Post Summary</title><link href="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/07/01/2025-07-summary/" rel="alternate" type="text/html" title="July 2025 Post Summary" /><published>2025-07-01T00:00:00+00:00</published><updated>2025-07-01T00:00:00+00:00</updated><id>https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/07/01/2025-07-summary</id><content type="html" xml:base="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/07/01/2025-07-summary/"><![CDATA[<h2 id="-概要">📊 Overview</h2>

<p><strong>20</strong> posts (excluding replies), classified into 6 categories.</p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Category</th>
      <th style="text-align: right">Count</th>
      <th style="text-align: right">Share</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left">🏗️ 3D Reconstruction &amp; SLAM</td>
      <td style="text-align: right">8</td>
      <td style="text-align: right">██████████ 40%</td>
    </tr>
    <tr>
      <td style="text-align: left">🚗 Autonomous Driving</td>
      <td style="text-align: right">3</td>
      <td style="text-align: right">████ 15%</td>
    </tr>
    <tr>
      <td style="text-align: left">🤖 Robotics</td>
      <td style="text-align: right">1</td>
      <td style="text-align: right">█ 5%</td>
    </tr>
    <tr>
      <td style="text-align: left">🧠 VLA / Foundation Models</td>
      <td style="text-align: right">3</td>
      <td style="text-align: right">████ 15%</td>
    </tr>
    <tr>
      <td style="text-align: left">📄 Papers</td>
      <td style="text-align: right">1</td>
      <td style="text-align: right">█ 5%</td>
    </tr>
    <tr>
      <td style="text-align: left">💬 Other</td>
      <td style="text-align: right">4</td>
      <td style="text-align: right">█████ 20%</td>
    </tr>
  </tbody>
</table>

<h2 id="-人気トップ3">🏆 Top 3 Posts</h2>

<h3 id="-1位">🥇 1st Place</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-07/1948578963153891727.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Metric</th>
      <th style="text-align: left">Value</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">52</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">241</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">12,000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>ROMAN(Robust Object Map Alignment Anywhere) a view-invariant global localization method that maps open-set objects and uses the geometry, shape, and semantics of objects to find the transformation between a current pose and previously created object …</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1948578963153891727">View post</a></p>
</blockquote>

<hr />

<h3 id="-2位">🥈 2nd Place</h3>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Metric</th>
      <th style="text-align: left">Value</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">39</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">205</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">16,000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>VGGT-SLAM: Dense RGB SLAM Optimized on the SL(4) Manifold</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1946042251156566513">View post</a></p>
</blockquote>

<hr />

<h3 id="-3位">🥉 3rd Place</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-07/1948216570834215138.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Metric</th>
      <th style="text-align: left">Value</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">29</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">195</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">9,500</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>Dens3R: A Foundation Model for 3D Geometry Prediction &gt; Dens3R is a feed-forward visual foundation model that takes unposed images as input and outputs high-quality 3D pointmap with unified geometric dense prediction.</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1948216570834215138">View post</a></p>
</blockquote>

<hr />

<h2 id="-カテゴリ別ハイライト">📂 Category Highlights</h2>

<h3 id="️-3d再構成slam">🏗️ 3D Reconstruction &amp; SLAM</h3>
<p><img src="/assets/images/cat-3d-slam.svg" alt="3D Reconstruction &amp; SLAM" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>8</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-07/1948578963153891727.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1948578963153891727">ROMAN(Robust Object Map Alignment Anywhere) a view-invariant global localization method that maps open-set objects and uses the geometry, shape, and s...</a> (♥ 241)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1946042251156566513">VGGT-SLAM: Dense RGB SLAM Optimized on the SL(4) Manifold</a> (♥ 205)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-07/1950753286987878773.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1950753286987878773">GigaSLAM: Large-Scale Monocular SLAM with Hierarchical Gaussian Splats</a> (♥ 152)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1944592696313811408">[ICCV 2025] SpatialTrackerV2: 3D Point Tracking Made Easy &gt; SpatialTrackerV2 is the first unified, end-to-end 3D point tracking model which estimates …</a> (♥ 128)</li>
  <li><a href="https://x.com/rsasaki0109/status/1945679854029963468">awesome-3d-point-cloud-denoising A curated list of awesome 3D point cloud denoising papers</a> (♥ 93)</li>
</ul>

<h3 id="-自動運転">🚗 Autonomous Driving</h3>
<p><img src="/assets/images/cat-autonomous.svg" alt="Autonomous Driving" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>3</strong> posts</p>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1949679629897445501">Autoware Diffusion Planner &gt; It leverages the Diffusion Planner model, as described in the paper “Diffusion-Based Planning for Autonomous Driving with…</a> (♥ 152)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-07/1947491802170147142.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1947491802170147142">[IEEE RAL'25 &amp; IROS'25] UA-MPC: Uncertainty-Aware Model Predictive Control for Motorized LiDAR Odometry</a> (♥ 121)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-07/1944957046572696039.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1944957046572696039">BEV-LIO-LC BEV Image Assisted LiDAR-Inertial Odometry with Loop Closure(IROS 2025)</a> (♥ 7)
</div></div>

<h3 id="-ロボティクス">🤖 Robotics</h3>
<p><img src="/assets/images/cat-robotics.svg" alt="Robotics" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>1</strong> post</p>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1950028521062928707">astroviz ROS2 package implementing a teleoperation interface using QT</a> (♥ 23)</li>
</ul>

<h3 id="-vlafoundation-model">🧠 VLA / Foundation Model</h3>
<p><img src="/assets/images/cat-vla.svg" alt="VLA / Foundation Model" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>3</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-07/1948216570834215138.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1948216570834215138">Dens3R: A Foundation Model for 3D Geometry Prediction &gt; Dens3R is a feed-forward visual foundation model that takes unposed images as input and output...</a> (♥ 195)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-07/1945317480244842685.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1945317480244842685">VFM-Registration LiDAR Registration with Visual Foundation Models</a> (♥ 38)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-07/1943098237570945457.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1943098237570945457">[IROS'25 Oral] WMNav: Integrating Vision-Language Models into World Models for Object Goal Navigation</a> (♥ 6)
</div></div>

<h3 id="-論文紹介">📄 Paper Highlights</h3>
<p><img src="/assets/images/cat-paper.svg" alt="Paper Highlights" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>1</strong> post</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-07/1947854184122093736.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1947854184122093736">[CVPR 2025] MoFlow: One-Step Flow Matching for Human Trajectory Forecasting via Implicit Maximum Likelihood Estimation Distillation</a> (♥ 84)
</div></div>

<h3 id="-その他">💬 Other</h3>

<p><strong>4</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-07/1949666132966670772.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1949666132966670772">VGGT-Long: Chunk it, Loop it, Align it -- Pushing VGGT's Limits on Kilometer-scale Long RGB Sequences</a> (♥ 174)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-07/1950390903669166536.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1950390903669166536">π³: Scalable Permutation-Equivariant Visual Geometry Learning π³ reconstructs visual geometry without a fixed reference view, achieving robust, state-...</a> (♥ 95)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-07/1946883070550053132.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1946883070550053132">StreamVGGT Streaming 4D Visual Geometry Transformer</a> (♥ 60)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1944561569423319369">I'm feeling motivated to write Japanese-language articles again for the first time in a while; wondering where I should publish</a> (♥ 2)</li>
</ul>

<h2 id="-全ポスト一覧">📋 All Posts</h2>

<details>
<summary>Show all 20 posts</summary>

<table>
  <thead>
    <tr>
      <th style="text-align: right">#</th>
      <th style="text-align: left">Post</th>
      <th style="text-align: right">RT</th>
      <th style="text-align: right">♥</th>
      <th style="text-align: left">Date</th>
    </tr>
  </thead>
  <tbody>
    <tr><td style="text-align: right">1</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1948578963153891727">ROMAN(Robust Object Map Alignment Anywhere) a view-invariant global localization...</a></td><td style="text-align: right">52</td><td style="text-align: right">241</td><td style="text-align: left">2025-07-25</td></tr>
    <tr><td style="text-align: right">2</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1946042251156566513">VGGT-SLAM: Dense RGB SLAM Optimized on the SL(4) Manifold</a></td><td style="text-align: right">39</td><td style="text-align: right">205</td><td style="text-align: left">2025-07-18</td></tr>
    <tr><td style="text-align: right">3</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1948216570834215138">Dens3R: A Foundation Model for 3D Geometry Prediction &gt; Dens3R is a feed-forward...</a></td><td style="text-align: right">29</td><td style="text-align: right">195</td><td style="text-align: left">2025-07-24</td></tr>
    <tr><td style="text-align: right">4</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1949666132966670772">VGGT-Long: Chunk it, Loop it, Align it -- Pushing VGGT's Limits on Kilometer-sca...</a></td><td style="text-align: right">20</td><td style="text-align: right">174</td><td style="text-align: left">2025-07-28</td></tr>
    <tr><td style="text-align: right">5</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1950753286987878773">GigaSLAM: Large-Scale Monocular SLAM with Hierarchical Gaussian Splats</a></td><td style="text-align: right">21</td><td style="text-align: right">152</td><td style="text-align: left">2025-07-31</td></tr>
    <tr><td style="text-align: right">6</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1949679629897445501">Autoware Diffusion Planner &gt; It leverages the Diffusion Planner model, as descri...</a></td><td style="text-align: right">22</td><td style="text-align: right">152</td><td style="text-align: left">2025-07-28</td></tr>
    <tr><td style="text-align: right">7</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1944592696313811408">[ICCV 2025] SpatialTrackerV2: 3D Point Tracking Made Easy &gt; SpatialTrackerV2 is ...</a></td><td style="text-align: right">16</td><td style="text-align: right">128</td><td style="text-align: left">2025-07-14</td></tr>
    <tr><td style="text-align: right">8</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1947491802170147142">[IEEE RAL'25 &amp; IROS'25] UA-MPC: Uncertainty-Aware Model Predictive Control for M...</a></td><td style="text-align: right">19</td><td style="text-align: right">121</td><td style="text-align: left">2025-07-22</td></tr>
    <tr><td style="text-align: right">9</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1950390903669166536">π³: Scalable Permutation-Equivariant Visual Geometry Learning π³ reconstructs vi...</a></td><td style="text-align: right">19</td><td style="text-align: right">95</td><td style="text-align: left">2025-07-30</td></tr>
    <tr><td style="text-align: right">10</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1945679854029963468">awesome-3d-point-cloud-denoising A curated list of awesome 3D point cloud denois...</a></td><td style="text-align: right">10</td><td style="text-align: right">93</td><td style="text-align: left">2025-07-17</td></tr>
    <tr><td style="text-align: right">11</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1942418370655105364">[CVPR 2025] PanoGS: Gaussian-based Panoptic Segmentation for 3D Open Vocabulary ...</a></td><td style="text-align: right">13</td><td style="text-align: right">89</td><td style="text-align: left">2025-07-08</td></tr>
    <tr><td style="text-align: right">12</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1947854184122093736">[CVPR 2025] MoFlow: One-Step Flow Matching for Human Trajectory Forecasting via ...</a></td><td style="text-align: right">10</td><td style="text-align: right">84</td><td style="text-align: left">2025-07-23</td></tr>
    <tr><td style="text-align: right">13</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1946883070550053132">StreamVGGT Streaming 4D Visual Geometry Transformer</a></td><td style="text-align: right">11</td><td style="text-align: right">60</td><td style="text-align: left">2025-07-20</td></tr>
    <tr><td style="text-align: right">14</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1942719604528406841">[ICRA2025] RE-TRIP : Reflectivity Instance Augmented Triangle Descriptor for 3D ...</a></td><td style="text-align: right">8</td><td style="text-align: right">43</td><td style="text-align: left">2025-07-08</td></tr>
    <tr><td style="text-align: right">15</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1945317480244842685">VFM-Registration LiDAR Registration with Visual Foundation Models</a></td><td style="text-align: right">3</td><td style="text-align: right">38</td><td style="text-align: left">2025-07-16</td></tr>
    <tr><td style="text-align: right">16</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1945679864738017553">vwio_eskf ESKF Algorithm for Muti-Sensor Fusion(Wheel Odometry, IMU, Visual Odom...</a></td><td style="text-align: right">5</td><td style="text-align: right">29</td><td style="text-align: left">2025-07-17</td></tr>
    <tr><td style="text-align: right">17</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1950028521062928707">astroviz ROS2 package implementing a teleoperation interface using QT</a></td><td style="text-align: right">2</td><td style="text-align: right">23</td><td style="text-align: left">2025-07-29</td></tr>
    <tr><td style="text-align: right">18</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1944957046572696039">BEV-LIO-LC BEV Image Assisted LiDAR-Inertial Odometry with Loop Closure(IROS 202...</a></td><td style="text-align: right">0</td><td style="text-align: right">7</td><td style="text-align: left">2025-07-15</td></tr>
    <tr><td style="text-align: right">19</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1943098237570945457">[IROS'25 Oral] WMNav: Integrating Vision-Language Models into World Models for O...</a></td><td style="text-align: right">2</td><td style="text-align: right">6</td><td style="text-align: left">2025-07-10</td></tr>
    <tr><td style="text-align: right">20</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1944561569423319369">I'm feeling motivated to write Japanese-language articles again for the first time in a while; wondering where I should publish</a></td><td style="text-align: right">0</td><td style="text-align: right">2</td><td style="text-align: left">2025-07-14</td></tr>
  </tbody>
</table>

</details>]]></content><author><name>rsasaki0109</name></author><category term="monthly-summary" /><summary type="html"><![CDATA[20 posts | 3D Reconstruction / SLAM (8), Autonomous Driving (3), Robotics (1), VLA / Foundation Model (3), Paper Highlights (1), Other (4) | #1: ROMAN(Robust Object Map Alignment Anywhere) a view-invariant global localization...]]></summary></entry><entry><title type="html">June 2025 Post Summary</title><link href="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/06/01/2025-06-summary/" rel="alternate" type="text/html" title="June 2025 Post Summary" /><published>2025-06-01T00:00:00+00:00</published><updated>2025-06-01T00:00:00+00:00</updated><id>https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/06/01/2025-06-summary</id><content type="html" xml:base="https://rsasaki0109.github.io/rsasaki0109-tweet-summaries/2025/06/01/2025-06-summary/"><![CDATA[<h2 id="-概要">📊 Overview</h2>

<p><strong>15</strong> posts (excluding replies), classified into 4 categories.</p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Category</th>
      <th style="text-align: right">Count</th>
      <th style="text-align: right">Share</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left">🏗️ 3D Reconstruction / SLAM</td>
      <td style="text-align: right">7</td>
      <td style="text-align: right">██████████ 47%</td>
    </tr>
    <tr>
      <td style="text-align: left">🤖 Robotics</td>
      <td style="text-align: right">3</td>
      <td style="text-align: right">████ 20%</td>
    </tr>
    <tr>
      <td style="text-align: left">🧠 VLA / Foundation Model</td>
      <td style="text-align: right">2</td>
      <td style="text-align: right">███ 13%</td>
    </tr>
    <tr>
      <td style="text-align: left">💬 Other</td>
      <td style="text-align: right">3</td>
      <td style="text-align: right">████ 20%</td>
    </tr>
  </tbody>
</table>

<h2 id="-人気トップ3">🏆 Top 3 Posts</h2>

<h3 id="-1位">🥇 #1</h3>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Metric</th>
      <th style="text-align: left">Count</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">50</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">304</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">14000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>NVlabs/PyCuVSLAM Highly accurate and efficient VSLAM system for Python &gt; PyCuVSLAM is the official Python wrapper around the cuVSLAM visual-inertial SLAM (Simultaneous Localization And Mapping) software package developed by NVIDIA. It is a highly acc…</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1938812028199833691">View post</a></p>
</blockquote>

<hr />

<h3 id="-2位">🥈 #2</h3>

<p><img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-06/1932958868092658103.jpg" alt="tweet image" style="max-width:100%; border-radius:8px; margin-bottom:12px;" /></p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Metric</th>
      <th style="text-align: left">Count</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">49</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">256</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">14000</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>ICRA2025: OpenGS-SLAM: Open-Set Dense Semantic SLAM with 3D Gaussian Splatting for Object-Level Scene Understanding</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1932958868092658103">View post</a></p>
</blockquote>

<hr />

<h3 id="-3位">🥉 #3</h3>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Metric</th>
      <th style="text-align: left">Count</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>RT</strong></td>
      <td style="text-align: left">29</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Like</strong></td>
      <td style="text-align: left">166</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Views</strong></td>
      <td style="text-align: left">8400</td>
    </tr>
  </tbody>
</table>

<blockquote>
  <p>Splat-LOAM 2D Gaussian Splatting based LiDAR Odometry And Mapping</p>

  <p>🔗 <a href="https://x.com/rsasaki0109/status/1937644436311441650">View post</a></p>
</blockquote>

<hr />

<h2 id="-カテゴリ別ハイライト">📂 Category Highlights</h2>

<h3 id="️-3d再構成slam">🏗️ 3D Reconstruction / SLAM</h3>
<p><img src="/assets/images/cat-3d-slam.svg" alt="3D Reconstruction / SLAM" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>7</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-06/1932958868092658103.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1932958868092658103">ICRA2025: OpenGS-SLAM: Open-Set Dense Semantic SLAM with 3D Gaussian Splatting for Object-Level Scene Understanding</a> (♥ 256)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1937644436311441650">Splat-LOAM 2D Gaussian Splatting based LiDAR Odometry And Mapping</a> (♥ 166)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-06/1939530571182706916.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1939530571182706916">DualMap: Online Open-Vocabulary Semantic Mapping for Natural Language Navigation in Dynamic Changing Scenes &gt;DualMap is an online open-vocabulary mapp...</a> (♥ 123)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-06/1937292128419479986.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1937292128419479986">Test3R: Learning to Reconstruct 3D at Test Time</a> (♥ 84)
</div></div>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-06/1938537793577242890.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1938537793577242890">MISO: Multiresolution Submap Optimization for Efficient Globally Consistent Neural Implicit Reconstruction(RSS'25)</a> (♥ 77)
</div></div>

<h3 id="-ロボティクス">🤖 Robotics</h3>
<p><img src="/assets/images/cat-robotics.svg" alt="Robotics" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>3</strong> posts</p>

<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-06/1935848951741399489.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1935848951741399489">The CU-Multi Dataset CU-MULTI is a multi-robot dataset that contains two large-scale outdoor sequences collected by a single ground robot on The Unive...</a> (♥ 101)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1939076949605024178">[RSS 2025] Learning Getting-Up Policies for Real-World Humanoid Robots</a> (♥ 20)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-06/1932188011330167045.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1932188011330167045">arp Autoregressive Policy for Robot Learning (RA-L 2025) Action Sequence Learning for Robotic Manipulation</a> (♥ 6)
</div></div>

<h3 id="-vlafoundation-model">🧠 VLA / Foundation Model</h3>
<p><img src="/assets/images/cat-vla.svg" alt="VLA / Foundation Model" class="align-left" style="width:40px; margin-right:10px;" /></p>

<p><strong>2</strong> posts</p>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1938812028199833691">NVlabs/PyCuVSLAM Highly accurate and efficient VSLAM system for Python &gt; PyCuVSLAM is the official Python wrapper around the cuVSLAM visual-inertial S…</a> (♥ 304)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-06/1931844638941524332.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1931844638941524332">Fast-in-Slow: A Dual-System Foundation Model Unifying Fast Manipulation within Slow Reasoning</a> (♥ 11)
</div></div>

<h3 id="-その他">💬 Other</h3>

<p><strong>3</strong> posts</p>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1934270523149873577">I was thinking about the potential of particle filters, but the computational cost blew up and it didn't work out</a> (♥ 7)</li>
</ul>
<div style="display:flex; gap:12px; margin-bottom:16px; align-items:flex-start;">
<img src="/rsasaki0109-tweet-summaries/assets/images/tweets/2025-06/1939483627987014080.jpg" style="width:120px; border-radius:6px; flex-shrink:0;" />
<div>
<a href="https://x.com/rsasaki0109/status/1939483627987014080">Watching a good-natured management newbie is fun; I hope they fail a lot and grow from it</a> (♥ 3)
</div></div>

<ul>
  <li><a href="https://x.com/rsasaki0109/status/1939231867192123446">I'm starting to want to go to the Expo. It's open until October 13</a> (♥ 3)</li>
</ul>

<h2 id="-全ポスト一覧">📋 All Posts</h2>

<details>
<summary>Show all 15 posts</summary>

<table>
  <thead>
    <tr>
      <th style="text-align: right">#</th>
      <th style="text-align: left">Post</th>
      <th style="text-align: right">RT</th>
      <th style="text-align: right">♥</th>
      <th style="text-align: left">Date</th>
    </tr>
  </thead>
  <tbody>
    <tr><td style="text-align: right">1</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1938812028199833691">NVlabs/PyCuVSLAM Highly accurate and efficient VSLAM system for Python &gt; PyCuVSL...</a></td><td style="text-align: right">50</td><td style="text-align: right">304</td><td style="text-align: left">2025-06-28</td></tr>
    <tr><td style="text-align: right">2</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1932958868092658103">ICRA2025: OpenGS-SLAM: Open-Set Dense Semantic SLAM with 3D Gaussian Splatting f...</a></td><td style="text-align: right">49</td><td style="text-align: right">256</td><td style="text-align: left">2025-06-12</td></tr>
    <tr><td style="text-align: right">3</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1937644436311441650">Splat-LOAM 2D Gaussian Splatting based LiDAR Odometry And Mapping</a></td><td style="text-align: right">29</td><td style="text-align: right">166</td><td style="text-align: left">2025-06-24</td></tr>
    <tr><td style="text-align: right">4</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1939530571182706916">DualMap: Online Open-Vocabulary Semantic Mapping for Natural Language Navigation...</a></td><td style="text-align: right">17</td><td style="text-align: right">123</td><td style="text-align: left">2025-06-30</td></tr>
    <tr><td style="text-align: right">5</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1935848951741399489">The CU-Multi Dataset CU-MULTI is a multi-robot dataset that contains two large-s...</a></td><td style="text-align: right">17</td><td style="text-align: right">101</td><td style="text-align: left">2025-06-19</td></tr>
    <tr><td style="text-align: right">6</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1937292128419479986">Test3R: Learning to Reconstruct 3D at Test Time</a></td><td style="text-align: right">11</td><td style="text-align: right">84</td><td style="text-align: left">2025-06-23</td></tr>
    <tr><td style="text-align: right">7</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1938537793577242890">MISO: Multiresolution Submap Optimization for Efficient Globally Consistent Neur...</a></td><td style="text-align: right">16</td><td style="text-align: right">77</td><td style="text-align: left">2025-06-27</td></tr>
    <tr><td style="text-align: right">8</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1934581734181646713">paco Parametric completion for polygonal surface reconstruction [CVPR 2025]</a></td><td style="text-align: right">4</td><td style="text-align: right">21</td><td style="text-align: left">2025-06-16</td></tr>
    <tr><td style="text-align: right">9</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1939076949605024178">[RSS 2025] Learning Getting-Up Policies for Real-World Humanoid Robots</a></td><td style="text-align: right">1</td><td style="text-align: right">20</td><td style="text-align: left">2025-06-28</td></tr>
    <tr><td style="text-align: right">10</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1936801715241746584">I received a complimentary copy of the 3D LiDAR book! I use LiDAR a lot, but my grasp of the fundamentals is still shallow, so I'll study it properly with this! @keiocsg ...</a></td><td style="text-align: right">3</td><td style="text-align: right">17</td><td style="text-align: left">2025-06-22</td></tr>
    <tr><td style="text-align: right">11</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1931844638941524332">Fast-in-Slow: A Dual-System Foundation Model Unifying Fast Manipulation within S...</a></td><td style="text-align: right">1</td><td style="text-align: right">11</td><td style="text-align: left">2025-06-08</td></tr>
    <tr><td style="text-align: right">12</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1934270523149873577">I was thinking about the potential of particle filters, but the computational cost blew up and it didn't work out</a></td><td style="text-align: right">0</td><td style="text-align: right">7</td><td style="text-align: left">2025-06-15</td></tr>
    <tr><td style="text-align: right">13</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1932188011330167045">arp Autoregressive Policy for Robot Learning (RA-L 2025) Action Sequence Learnin...</a></td><td style="text-align: right">0</td><td style="text-align: right">6</td><td style="text-align: left">2025-06-09</td></tr>
    <tr><td style="text-align: right">14</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1939483627987014080">Watching a good-natured management newbie is fun; I hope they fail a lot and grow from it</a></td><td style="text-align: right">0</td><td style="text-align: right">3</td><td style="text-align: left">2025-06-30</td></tr>
    <tr><td style="text-align: right">15</td><td style="text-align: left"><a href="https://x.com/rsasaki0109/status/1939231867192123446">I'm starting to want to go to the Expo. It's open until October 13</a></td><td style="text-align: right">0</td><td style="text-align: right">3</td><td style="text-align: left">2025-06-29</td></tr>
  </tbody>
</table>

</details>]]></content><author><name>rsasaki0109</name></author><category term="monthly-summary" /><summary type="html"><![CDATA[15 posts | 3D Reconstruction / SLAM (7), Robotics (3), VLA / Foundation Model (2), Other (3) | #1: NVlabs/PyCuVSLAM Highly accurate and efficient VSLAM system for Python > PyCuVSL...]]></summary></entry></feed>