Hey everyone!

I’m excited to introduce Reitti, a location tracking and analysis application designed to help you gain insights about your movement patterns and significant places—all while keeping your data private on your own server.

Core Capabilities:

  • Visit Tracking: Automatically recognizes and categorizes the places where you spend time, using customizable detection algorithms
  • Trip Analysis: Analyzes your movements between locations to understand how you travel, whether by walking, cycling, or driving
  • Interactive Timeline: Visualizes all your past activities on an interactive timeline with map and list views that show visit duration, transport method, and distance traveled

Photo Integration:

  • Connect your self-hosted Immich photo server to seamlessly display photos taken at specific locations right within Reitti’s timeline. The interactive photo viewer lets you browse galleries for each place.

Data Import Options:

  • Multiple Formats Supported: Reitti can import existing location data from GPX, GeoJSON, and Google Takeout (JSON) backups
  • (Near) Real-time Updates: Automatically receive location data via mobile apps like OwnTracks and GPSLogger, or through our REST API
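
To give a feel for what a live update looks like: OwnTracks publishes plain JSON location messages, so a minimal payload can be sketched in a few lines of Python. The field names below follow the OwnTracks JSON format; the exact Reitti ingest endpoint is not shown here, so treat the POST target as something to look up in the docs rather than an assumption baked into the example.

```python
import json
import time

# Minimal OwnTracks-style location message. "_type" marks the message kind,
# "tst" is a Unix timestamp, "acc" is the reported accuracy in meters,
# and "tid" is a short tracker ID.
payload = {
    "_type": "location",
    "lat": 60.1699,   # example coordinates (Helsinki)
    "lon": 24.9384,
    "tst": int(time.time()),
    "acc": 10,
    "tid": "da",
}

body = json.dumps(payload)
print(body)
# A client would POST this JSON to the server's ingest endpoint
# (endpoint path omitted here -- check the Reitti docs for the actual URL).
```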

Customization:

  • Multiple Geocoding Services: Configurable providers such as Nominatim to convert coordinates into human-readable addresses
  • User Profiles: Per-user display names, password management, and API tokens, all under your own control
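
As a concrete illustration of the geocoding step: reverse geocoding with Nominatim boils down to one HTTP query per coordinate pair. The sketch below only builds the request URL (the public Nominatim instance has usage policies, including a descriptive User-Agent and rate limits, if you actually call it):

```python
from urllib.parse import urlencode

def nominatim_reverse_url(lat: float, lon: float) -> str:
    """Build a Nominatim /reverse query URL for the given coordinates."""
    base = "https://nominatim.openstreetmap.org/reverse"
    params = urlencode({"lat": lat, "lon": lon, "format": "jsonv2"})
    return f"{base}?{params}"

url = nominatim_reverse_url(60.1699, 24.9384)
print(url)
```

Pointing this at a self-hosted Nominatim instance is just a matter of swapping the base URL.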

Self-hosting:

  • Reitti is designed to be deployed on your own infrastructure using Docker containers. We provide configuration templates for the linked services it needs, such as PostgreSQL, RabbitMQ, and Redis, so all your location data stays private.
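
Roughly, the linked services side of such a deployment looks like the fragment below. This is an illustrative sketch only: service names, image tags, and credentials are placeholders, not the project's official compose templates, which you should take from the Reitti repository.

```yaml
# Illustrative sketch -- see the Reitti repository for the official templates.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: reitti
      POSTGRES_USER: reitti
      POSTGRES_PASSWORD: change-me
  rabbitmq:
    image: rabbitmq:3-management
  redis:
    image: redis:7
```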

Reitti is still early in development but already offers extensive capabilities. I’d love to hear your feedback and answer any questions so we can tailor Reitti to the community’s needs.

Hope this sparks some interest!

Daniel

  • ada@piefed.blahaj.zone · 15 hours ago

    I was also trying to set up GPSLogger whilst it was crunching through the backlog, and I manually transferred a file from that app before I had autologging configured. Not sure if that could have done it?

    The times don’t overlap, as the takeout file is only up until 2023

    • danielgraf@discuss.tchncs.deOP · 14 hours ago (edited)

      Thanks for getting back to me. I can look into it. I don’t think it’s connected, but you never know.

      The data takes the same path: first to RabbitMQ, then to the database. So it shouldn’t matter; it’s just another message, or a bunch of them, in the queue.

      • ada@piefed.blahaj.zone · 12 hours ago

        Ok, so it may not be frozen. The numbers in the queue seem to imply it is; however, timelines and places are slowly filling out in my history. A couple of dates I had looked at previously were showing tracklogs for the day but no timeline information, and now they’re showing timelines for the day.

        • danielgraf@discuss.tchncs.deOP · 12 hours ago

          That’s good, but I still question why it is so slow. If you keep receiving these timeout exceptions, at some point the data will stop being analyzed.

          I just re-tested it with multiple concurrent imports into a clean DB, and the stay-detection-queue completed in 10 minutes. It’s not normal for it to take that long for you. The component that should take the most time is actually the merge-visit-queue because this creates a lot of stress for the DB. This test was conducted on my laptop, equipped with an AMD Ryzen™ 7 PRO 8840U and 32GB of RAM.

          • ada@piefed.blahaj.zone · 11 hours ago

            Since I last commented, the queue has jumped from about 9,000 outstanding items to 15,000, and it appears that I have timelines for a large amount of my history now.

            However, the estimated time is still slowly creeping up (though only by a minute or two, despite adding 6000 more items to the queue).

            I haven’t uploaded anything manually that might have triggered the change in queue size.

            Are there any external calls made during the processing of this queue that might be adding latency?

            tl;dr - something is definitely happening

            • danielgraf@discuss.tchncs.deOP · 11 hours ago

              This process is not triggered by any external events.

              Every ten minutes, an internal background job activates. Its function is to scan the database for any RawLocationPoints that haven’t been processed yet. These unprocessed points are then batched into groups of 100, and each batch is sent as a message to be consumed by the stay-detection-queue. This process naturally adds to the workload of that queue.
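
That scan-and-batch step can be sketched like so (illustrative Python with made-up field names standing in for the real schema, not Reitti's actual implementation):

```python
def batch_unprocessed(points: list[dict], batch_size: int = 100) -> list[list[dict]]:
    """Mirror the background job: collect points whose processed flag is not
    set, then split them into fixed-size batches, each of which would be
    sent as one message to the stay-detection-queue."""
    unprocessed = [p for p in points if not p["processed"]]
    return [unprocessed[i:i + batch_size]
            for i in range(0, len(unprocessed), batch_size)]

# 250 unprocessed points -> 3 messages on the queue (100 + 100 + 50)
points = [{"id": i, "processed": False} for i in range(250)]
batches = batch_unprocessed(points)
print(len(batches))      # 3
print(len(batches[-1]))  # 50
```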

              However, if no new location data is being ingested, then once all RawLocationPoints have been processed and their respective flags set, the stay-detection-queue should eventually clear and the system should return to an idle state. I’m still puzzled as to why this initial queue (stay-detection-queue) is exhibiting such slow performance for you, as it’s typically one of the faster steps.