3/05/2026

Releasing 3D LiDAR Scans of Machu Picchu, Sacsayhuaman, and Other Ancient Sites

November 2024. I'm standing at 2,430 meters on a ridge above the Urubamba Valley, staring at Machu Picchu. Like millions before me, I'm blown away. Unlike most of them, I have an iPhone with a LiDAR sensor in my backpack.

I'd been annoyed for a while by how hard it is to find downloadable 3D data of major archaeological sites. You get blurry photogrammetry on Sketchfab, paywalled academic datasets, or just nothing. Try finding a 3D model of the Sacsayhuaman walls. Go ahead, I'll wait. There isn't one.

So I made one.

3D LiDAR scan of Machu Picchu walls
Machu Picchu, scan #1. iPhone LiDAR, November 2024.

What I did

During a trip across Peru, I scanned 13 archaeological sites with the LiDAR sensor on an iPhone. Machu Picchu alone got 15 separate scans: walls, terraces, the astronomical observatory, individual stone joints where you can see exactly how blocks interlock without mortar. I captured the polygonal walls of Sacsayhuaman (blocks up to 200 tonnes, and no, I still don't understand how they moved them), the concentric terraces of Moray, what's left of Qorikancha's walls after the Spanish stripped the gold off, the still-functioning aqueducts at Tipon. I also scanned the Sitamarhi Caves in Bihar, India.

47 scans. 14 sites. 2 countries. 17 GB of raw 3D data.

3D LiDAR scan of Sacsayhuaman megalithic walls
Sacsayhuaman. Blocks up to 200 tonnes, fitted without mortar.

Every scan is interactive. You rotate and zoom in your browser, no app needed.

The sites

Peru, 13 sites:

  • Machu Picchu: 15 scans covering walls, terraces, and the Intihuatana solar clock.
  • Sacsayhuaman: megalithic zigzag walls stretching 600 meters.
  • Ollantaytambo: 8 scans across the fortress, the Temple of the Sun monoliths, and the water channels.
  • Qorikancha: the Temple of the Sun in Cusco.
  • Chinchero: the royal estate of Tupac Inca Yupanqui, sitting at 3,762 meters.
  • Moray: eerie circular terraces with 15°C temperature swings between levels.
  • Tipon: pure hydraulic engineering, aqueducts that still carry water today.
  • Q'enqo: zigzag channels and subterranean chambers carved into raw limestone.
  • Puka Pukara: the Red Fortress, a checkpoint on the road to Antisuyo.
  • Amaru Punku: the Gate of the Serpent, near Ollantaytambo.
  • Bath of the Ñusta: a ceremonial fountain where water channels are cut into a single rock face.

I also captured some street-level Inca walls in Cusco itself, and the Intihuatana observatory stone at Machu Picchu as a separate scan.

3D LiDAR scan of Ollantaytambo fortress
Ollantaytambo, scan #3. Temple of the Sun monoliths.
3D LiDAR scan of Moray circular terraces
Moray. Concentric terraces with 15°C temperature differentials between levels.
3D LiDAR scan of Q'enqo carved limestone
Q'enqo. Zigzag channels carved into raw limestone.

India, 1 site: the Sitamarhi Caves in Bihar, rock-cut caves tied to the Ramayana.

3D LiDAR scan of Sitamarhi Caves, Bihar
Sitamarhi Caves, Bihar, India.

Why an iPhone?

Professional terrestrial LiDAR scanners cost $50K+ and take hours to set up per scan. The iPhone sensor does it in minutes. The resolution is lower, obviously. But I could scan a wall section between tour groups, in rain, while hiking between sites. Speed and portability won over precision.

3D LiDAR scan of Qorikancha Temple of the Sun
Qorikancha, Temple of the Sun. The Inca stonework underneath the Spanish convent.
3D LiDAR scan of Tipon aqueducts
Tipon. Aqueducts built 600 years ago, still carrying water.

The tech stack

Each scan is exported as GLB for browser viewing and USDZ for AR on iOS. Source files are available in STL, XYZ point clouds, PLY, DXF, DAE, FBX, and OBJ. 204 files total.

The viewer is a custom Three.js fullscreen viewer I built. No iframe, no third-party embed. DRACO-compressed GLB files load straight in the browser. The 17 GB of assets sit on Cloudflare R2 behind a CDN. The site runs on Netlify. Everything is static. No backend, no login. Click a scan, it loads.
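For readers curious how a viewer like this is wired up, here is a minimal browser-side sketch of loading a DRACO-compressed GLB with Three.js. The decoder path and asset URL are placeholders, not the site's actual URLs, and the real viewer adds camera controls, progressive loading, and resize handling:

```javascript
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';

const scene = new THREE.Scene();
scene.add(new THREE.AmbientLight(0xffffff, 1.0));

const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 1000);
camera.position.set(0, 2, 5);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// The GLTFLoader needs a DRACOLoader to decode compressed geometry.
// Decoder path and scan URL below are hypothetical placeholders.
const draco = new DRACOLoader();
draco.setDecoderPath('https://www.gstatic.com/draco/versioned/decoders/1.5.6/');

const loader = new GLTFLoader();
loader.setDRACOLoader(draco);

loader.load('https://cdn.example.com/scans/machu-picchu-01.glb', (gltf) => {
  scene.add(gltf.scene);
});

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```

Serving the heavy DRACO decode off a CDN-hosted decoder keeps the page itself static, which fits the no-backend setup described above.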

Download everything

All files are free. Every scan card has download buttons for GLB, USDZ, STL, XYZ, PLY and more.

Licensed under Creative Commons BY-NC-SA 4.0. Credit me, don't sell them, share derivatives under the same terms.

What's next

Easter Island, Bolivia and Turkey are on the list. We'll see when that happens.

fgribreau.com/research/3d-scans.html

12/19/2025

Releasing mcp-matomo - From Frustration to Open Source: Building MCP Matomo in 30 Minutes

It’s Friday, December 19th. The roasted spelt coffee is still hot, and the Hook0 team has just pushed the "deploy" button.

We just shipped a massive update: a brand new design and completely overhauled documentation for Hook0, the open-source webhook sending infrastructure I co-founded. It feels good. But as the dust settled, I realized we needed to implement analytics on the new documentation pages.

Naturally, we reached for Matomo. It’s privacy-focused, ethical, and solid. But as I was setting it up, a thought struck me:

"Why am I still navigating dashboards, setting date ranges, and clicking through menus in 2025? Why can't I just talk to my data?"

I immediately started looking for a way to connect Matomo to Claude via the Model Context Protocol (MCP). I found exactly one solution.

The problem? It required routing my data through a third-party API, and I had to ask them politely through a contact form just to get access.

That was a hard No for me.

It goes against my core engineering values. I need future-proof solutions, not a solution that depends on an external black box for simple logic. I want autonomy. I want self-hosted reliability (or at least the ability to self-host when needed; that's what we believe in at Hook0 and Cloud-IAM). I want my data to stay mine.

So, I checked the clock. I opened my IDE, fired up some AI agents, and decided to fix it myself.

I chose Rust for performance and reliability. Exactly 30 minutes later, I had a fully functional MCP server running locally. No external APIs, no subscriptions, just raw, direct access to the Matomo instance.
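To wire it into Claude Desktop, the server is registered in the client's MCP config file. The snippet below is a hypothetical sketch: the binary path and the environment variable names (MATOMO_URL, MATOMO_TOKEN) are assumptions, so check the repository README for the exact ones.

```json
{
  "mcpServers": {
    "matomo": {
      "command": "/path/to/mcp-matomo",
      "env": {
        "MATOMO_URL": "https://analytics.example.com",
        "MATOMO_TOKEN": "your_token_auth"
      }
    }
  }
}
```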

I’m open-sourcing it today because I believe analytics should be accessible and private.

Get the code on GitHub: FGRibreau/mcp-matomo (don't forget to star it!)

What Can You Do With It?

Instead of clicking through the UI, you can now simply ask Claude questions like:

"Show me the top 10 pages by visits this week, broken down by device type."

mcp-matomo connects to your instance, introspects the API, executes the necessary calls, and presents the answer. It covers almost everything Matomo tracks:

  • Traffic: Visits, unique visitors, bounce rates.
  • Acquisition: Referrers, search engines, campaigns.
  • Behavior: Entry pages, downloads, outlinks.
  • Tech & Geo: Devices, screens, countries.
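Under the hood, each of those questions resolves to calls against Matomo's HTTP Reporting API (a `module=API` query with a method name). As a rough illustration of what such a call looks like, here is a small URL builder; the base URL is a placeholder, and a real request would also carry a `token_auth` parameter:

```javascript
// Build a Matomo Reporting API request URL, the kind of call an MCP
// server issues under the hood. Base URL below is a placeholder;
// token_auth is omitted for brevity.
function matomoApiUrl(baseUrl, method, params = {}) {
  const query = new URLSearchParams({
    module: 'API',
    method,            // e.g. 'Actions.getPageUrls'
    idSite: '1',
    period: 'week',
    date: 'today',
    format: 'JSON',
    ...params,         // caller overrides, e.g. filter_limit
  });
  return `${baseUrl}/index.php?${query.toString()}`;
}

const url = matomoApiUrl('https://analytics.example.com', 'Actions.getPageUrls', {
  filter_limit: '10',
});
console.log(url);
```

The LLM never sees raw credentials in the conversation; the server holds the token and only returns the report data.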

What's Next?

Now that the tool is live, my next step is to deploy this across the ecosystem of companies I've created or co-founded. We need to democratize access to data for our teams without forcing them to become analytics experts.

I'll be rolling this out at:

  • Cloud-IAM: To track adoption of our managed Keycloak solution (ISO 27001 certified).
  • Netir: To better understand how freelancers, companies and mentors interact on our marketplace.
  • Natalia: To analyze how users engage with our unified AI ecosystem across Voice, WhatsApp, and Transcripts.

If you value autonomy and want to talk to your data without a middleman, give it a try.

Feedback and PRs are welcome on GitHub!

10/24/2025

Releasing n8n-nodes-signal-cli - How I automatically transcribe Signal voice messages with my own n8n node extension

I was fed up with receiving 5-minute voice messages on Signal just to find out someone was asking if I'm free next Tuesday. No automatic transcription, no way to quickly scan the content. You have to listen to the whole thing, often at the worst possible moment.

So I built n8n-nodes-signal-cli - an n8n extension that integrates with Signal CLI to automatically transcribe voice messages and send back a concise summary directly in the conversation.

The problem

Voice messages suck when:

  • People take 5 minutes to say what could be written in 3 lines
  • You're in a meeting and can't listen
  • You need to find specific information buried in a long monologue
  • Signal doesn't offer automatic transcription

The solution

My n8n extension adds two nodes:

  • Signal CLI Trigger: fires when receiving new Signal messages
  • Signal CLI: sends messages back to Signal

My workflow:

Signal message received 
    → Filter voice messages only
    → Download audio file
    → Transcribe with local Whisper (or equivalent)
    → Summarize with LLM
    → Send transcript back to Signal conversation
    

Result: That 3-minute rambling voice message becomes:

📝 Transcript: "Are you available next Tuesday at 2 PM for a meeting?"
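The "filter voice messages only" step can be sketched as a small predicate in an n8n Code node. The payload shape below (envelope / dataMessage / attachments) is an assumption based on signal-cli's JSON output; adjust the path to match what the trigger actually emits:

```javascript
// Predicate for the "filter voice messages only" step.
// Assumed payload shape: item.json.envelope.dataMessage.attachments,
// where voice notes carry an audio/* contentType.
function isVoiceMessage(item) {
  const attachments = item.json?.envelope?.dataMessage?.attachments ?? [];
  return attachments.some((a) => (a.contentType || '').startsWith('audio/'));
}

// In an n8n Code node this would be: return $input.all().filter(isVoiceMessage);
const items = [
  { json: { envelope: { dataMessage: { attachments: [{ contentType: 'audio/aac' }] } } } },
  { json: { envelope: { dataMessage: { message: 'hi' } } } },
];
const voiceOnly = items.filter(isVoiceMessage);
console.log(voiceOnly.length); // → 1
```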

My self-hosted n8n instance runs on my NAS, locked behind a Zero Trust Network thanks to France nuage.

Installation

Prerequisites:

  • n8n installed and running
  • signal-cli configured with your phone number

Install the extension:

npm install n8n-nodes-signal-cli

Setup:

  1. Add Signal CLI credentials in n8n
  2. Create new workflow
  3. Add "Signal CLI Trigger" node
  4. Build your processing logic
  5. Use "Signal CLI" node to send messages back

Beyond transcription

This opens up tons of automation possibilities:

  • Reminder bot - schedule reminders via Signal
  • Auto-archiving - save important messages automatically
  • Instant translation - translate messages on the fly
  • Smart notifications - filter and route messages by content
  • Service integration - connect Signal to Slack, Discord, etc.

Results

Since using this:

  • I never miss important info hidden in long voice messages
  • I can "read" voice messages anywhere, anytime
  • People actually appreciate seeing their rambling transcribed concisely
  • I've saved tons of time by not listening to minutes of audio

Check it out on GitHub. Star it if you find it useful. Report bugs. Send PRs. The usual drill.

Made with ♥ on a hot August night from an airplane, the 19th of March 2017.