Incidents | Telematics inSights
Incidents reported on status page for Telematics inSights
https://status.telematicsinsights.ai/

Telemetry Engine recovered
Thu, 08 Jan 2026 02:17:32 +0000
https://status.telematicsinsights.ai/#f3d37e40fc940dd8c17afc63044b956260f4a3a1092fefdb1922ad8cd32acbfc

Flespi availability impacted
Thu, 08 Jan 2026 02:16:00 -0000
https://status.telematicsinsights.ai/incident/800682#4324003940d55c9a02734f90e018ea4896316ab12f902c8a6791a0f1b2ea6229

Summary
On January 8th, between 02:12 and 02:16 UTC, we experienced a brief service disruption affecting requests to the Flespi EU datacenter. During this window, the platform reported errors such as:
“Failed to perform https://flespi.io GET request”
This type of error typically indicates either a datacenter network uplink issue or temporary maintenance on the provider side. In this case, the incident was caused by a network outage at Flespi’s datacenter network provider. The issue was external to our infrastructure and was resolved by the provider shortly after detection.

Root Cause
A network outage at Flespi’s EU datacenter network provider caused temporary unavailability of the Flespi API endpoints.

Impact
- Short-term inability to perform API requests to Flespi EU endpoints
- Some platform functionality depending on live Flespi data was temporarily unavailable
- No data loss occurred
- Total duration: ~4 minutes

Resolution
- The issue was mitigated by Flespi’s network provider
- Services recovered automatically once connectivity was restored
- We verified platform stability and normal operation after recovery

Current Status
✅ All systems are fully operational
✅ No further issues observed

We apologize for the brief disruption and appreciate your understanding.
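Brief upstream blips like this one can often be absorbed on the calling side. As an illustrative sketch only (not our actual client code; `call_with_retry` and its parameters are hypothetical), an exponential-backoff wrapper around an upstream call looks like this:

```python
import time

def call_with_retry(fn, attempts=4, base_delay=0.5,
                    retryable=(ConnectionError, TimeoutError)):
    """Call fn(); on a retryable error, back off exponentially and retry.

    A ~4-minute outage would still exhaust these retries, but the pattern
    smooths over the short connection blips around such an incident.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except retryable:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))  # 0.5 s, 1 s, 2 s, ...

# Usage sketch: wrap the GET that produced
# "Failed to perform https://flespi.io GET request"
# call_with_retry(lambda: requests.get("https://flespi.io/gw/devices"))
```

Whether this is appropriate depends on the endpoint's idempotency; GETs like the one above are safe to retry.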
If you have any questions or notice anything unusual, please contact us at support@telematicsinsights.ai

Telemetry Engine went down
Thu, 08 Jan 2026 02:12:55 +0000
https://status.telematicsinsights.ai/#f3d37e40fc940dd8c17afc63044b956260f4a3a1092fefdb1922ad8cd32acbfc

Flespi availability impacted
Thu, 08 Jan 2026 02:12:00 -0000
https://status.telematicsinsights.ai/incident/800682#fa2b1aac96f6bcd1cbd75d0fd6068e4c725b55d84f5cfceba5655900626573d0
Downtime started for our telemetry engine API provider, Flespi, with the error: “Failed to perform https://flespi.io GET request.” This usually indicates either a Flespi datacenter network uplink connection problem or that the platform is in maintenance mode.

Telemetry Engine recovered
Thu, 08 Jan 2026 01:43:24 +0000
https://status.telematicsinsights.ai/#7560820abc337d2596afc8f84fd209f3cf1fbf1c5a881c2b3a44f1769aa81628

Telemetry Engine went down
Thu, 08 Jan 2026 01:41:53 +0000
https://status.telematicsinsights.ai/#7560820abc337d2596afc8f84fd209f3cf1fbf1c5a881c2b3a44f1769aa81628

Application availability impacted
Thu, 09 Oct 2025 15:33:00 -0000
https://status.telematicsinsights.ai/incident/741219#148bfa4bc6abe6503f335592027ec62c1214b3ea3a6d1c63a4d30f04b7a822cd

Summary:
During our routine weekly release, we encountered a third-party outage on GitHub that affected critical endpoints and webhooks: https://www.githubstatus.com/incidents/k7bhmjkblcwp (GitHub is the most popular code repository and is used by millions of companies). As a result, our deployment process was blocked, and only a partial version of the application was released. Because webhooks were nonfunctional, our CI/CD pipeline could not receive updates or trigger jobs, leading to downtime.
To mitigate risk and preserve stability, we gracefully rolled back to the prior stable version, paused further releases, and waited for GitHub to restore service.

Root Cause:
The root cause was an external service outage on GitHub affecting their webhook and API infrastructure. This prevented our CI/CD processes from completing and deploying fully.

Impact:
- Deployed services were only partially updated; some modules remained in an inconsistent state.
- New builds and updates could not be triggered due to webhook failures in GitHub.
- Users experienced unavailable functionality during the incident window.
- Release rollout was postponed until GitHub’s systems became healthy again.

Remediation & Resolution:
- We executed an emergency rollback to the last known good release.
- Deployment was halted while we monitored GitHub’s status updates.
- After GitHub’s incident was resolved, we validated system stability and resumed controlled deployment.
- We ran complete system testing and verified end-to-end functionality after properly releasing the new updates.

We apologize for any disruption this caused and thank you for your patience.
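One practical takeaway from incidents like this is that a release pipeline can consult a dependency's public status feed before starting a rollout. The sketch below is illustrative only (the gating policy in `deploy_allowed` is a hypothetical convention, not our pipeline's real gate); the URL is GitHub's public Statuspage summary endpoint:

```python
import json
from urllib.request import urlopen

GITHUB_STATUS_URL = "https://www.githubstatus.com/api/v2/status.json"

def deploy_allowed(status_payload: dict) -> bool:
    """Return True when the Statuspage summary reports no major incident.

    Statuspage's status.json has the shape:
      {"status": {"indicator": "none" | "minor" | "major" | "critical", ...}}
    Missing or malformed data fails closed (treated as "critical").
    """
    indicator = status_payload.get("status", {}).get("indicator", "critical")
    return indicator in ("none", "minor")

def fetch_github_status() -> dict:
    with urlopen(GITHUB_STATUS_URL, timeout=5) as resp:
        return json.load(resp)

# Usage sketch in a CI pre-deploy step:
# if not deploy_allowed(fetch_github_status()):
#     raise SystemExit("GitHub is reporting an incident; pausing release.")
```

Failing closed on missing data means a broken status fetch pauses releases rather than letting a partial deploy proceed.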
If you have questions or need more details, please reach out to support@telematicsinsights.ai

Application availability impacted
Thu, 09 Oct 2025 14:51:00 -0000
https://status.telematicsinsights.ai/incident/741219#bc020535dbdd1790a87b6cfb0466c5744bb0e53c5cf3837e088d4766cf232619
During our routine weekly release, we encountered a third-party outage on GitHub that affected critical endpoints and webhooks: https://www.githubstatus.com/incidents/k7bhmjkblcwp

Degraded Alerts Queue Processing
Tue, 07 Oct 2025 02:00:00 -0000
https://status.telematicsinsights.ai/incident/739369#d56b981067b137a9ed7a742a97099e1d3ad3719a98b1aadef0ef45cfed881104
On October 6, 2025, around 1 PM UTC, we identified an issue affecting the processing of triggered alerts with delayed execution. Delayed execution applies to alerts with a minimum duration greater than 0 seconds. Such alerts remained in our internal queue and did not execute as expected. Additionally, automated video requests with delayed execution were also not being processed. The root cause was a configuration issue in our AWS Lambda queue, introduced after the last production release on October 2, 2025. The AWS Lambda configuration issue was resolved by 6 PM UTC the same day. A rebalancing script was also developed and deployed with the fix at that time. Delayed triggered alerts were gradually released within 3 hours, and delayed video requests were gradually released within 7.5 hours of the fix deployment.
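For readers unfamiliar with "delayed execution": an alert with a minimum duration greater than 0 seconds is held in a queue and only fires once that duration has elapsed. A toy sketch of such a hold-and-release queue (our production version runs on AWS Lambda; the class and names below are illustrative only, not our implementation):

```python
import heapq

class DelayedAlertQueue:
    """Hold triggered alerts until trigger_time + min_duration has passed."""

    def __init__(self):
        self._heap = []  # entries are (release_at, alert_id)

    def push(self, alert_id, trigger_time, min_duration):
        heapq.heappush(self._heap, (trigger_time + min_duration, alert_id))

    def release_due(self, now):
        """Pop and return every alert whose hold period has elapsed.

        A consumer that stops calling this (e.g. due to a queue
        misconfiguration) is exactly how delayed alerts can sit in the
        queue without executing.
        """
        due = []
        while self._heap and self._heap[0][0] <= now:
            due.append(heapq.heappop(self._heap)[1])
        return due

# q = DelayedAlertQueue()
# q.push("speeding-123", trigger_time=100.0, min_duration=30.0)
# q.release_due(now=120.0)  # -> [] (still inside the 30 s minimum duration)
# q.release_due(now=131.0)  # -> ["speeding-123"]
```

A "rebalancing" pass in this model is simply draining everything whose release time has already passed after the consumer recovers.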
The issue was fully resolved by October 7, 2025, 2 AM UTC. We sincerely apologize for the inconvenience this may have caused. We’re implementing additional proactive monitoring to prevent similar issues in the future.

Degraded Alerts Queue Processing
Thu, 02 Oct 2025 11:00:00 -0000
https://status.telematicsinsights.ai/incident/739369#87e2b3f7bf6612a9266acd2c54e646c2798fb6067243449684eb289963950f2d
Degraded processing of triggered alerts with delayed execution.

Degraded Performance Of Telemetry Engine
Thu, 14 Aug 2025 10:30:00 -0000
https://status.telematicsinsights.ai/incident/706501#843cc4fc6f91862bdc85c8f8ac0ffa910a91dcd03a0c7fc10ab09c1eea077c59
As part of the release on August 13, 2025, we deployed updates to driver data integration. This caused some algorithms to recalculate previously received device data for past periods, resulting in significant delays in alert triggers and notifications.

Impact
The issue began at 12:30 PM UTC on August 13, 2025, and was fully resolved by 11:30 AM UTC on August 14, 2025. Users experienced delayed alerts and notifications during this time.

Resolution
We have resolved the problem and implemented additional safeguards to prevent similar incidents in the future. We sincerely apologize to everyone affected.

Degraded Performance Of Telemetry Engine
Wed, 13 Aug 2025 11:30:00 -0000
https://status.telematicsinsights.ai/incident/706501#01d2b3a015bab9f24a0505c7b26200ef780196376d33c0ae4eca0ae0200edeeb
As part of the release, we performed updates related to driver data integration. As a result, some algorithms recalculated data previously received from devices for past periods. This led to significant delays in alert triggers and notifications.
Degraded Access to TSP Admin Console – maintenance overran by 34 min
Wed, 09 Jul 2025 14:05:00 -0000
https://status.telematicsinsights.ai/incident/616840#9e868a7a8eab822a1cb3e07ed00b57e72a0fc8c435ef88523e2cd16a3ccbceb9
Start time: 2025-07-09 13:31 UTC
End time: 2025-07-09 14:05 UTC
Duration: 34 minutes
We rolled out a scheduled update introducing the new granular access-role system. During the migration, one of the database patch steps took longer than anticipated, holding an exclusive lock on the TSP Admin authorization.

Impact
* TSP Admin users experienced “403 / insufficient permissions” errors, login errors, or incomplete UI pages.
* No data loss; the fleet app, driver apps, and API traffic continued normally.
We’re monitoring closely and apologize for the unexpected disruption.

Degraded Access to TSP Admin Console – maintenance overran by 34 min
Wed, 09 Jul 2025 13:31:00 -0000
https://status.telematicsinsights.ai/incident/616840#efab2b51491752548a2f0a57f7a42893da91b578d2e8a6eecffb7dbbf7b3f893
We rolled out a scheduled update introducing the new granular access-role system. During the migration, one of the database patch steps took longer than anticipated, holding an exclusive lock on the TSP Admin authorization.

Telemetry Engine Availability
Thu, 24 Apr 2025 12:29:00 -0000
https://status.telematicsinsights.ai/incident/553775#e791bb80457be2c637b602db62475fe48d938502a2cc752c13fa81228737c465
What happened?
⚠️ Between 11:04 and 11:35 AM UTC, a subset of requests to our Flespi telemetry backend began returning 403 Forbidden. We contacted Flespi support, who indicated that the problem was related to a failed rolling deployment of one of their microservices, which caused that service to start up incorrectly and reject valid calls.

Impact:
Only requests routed to the mis-started node were affected.
No customer-facing functionality was disrupted; our main Fleet and TSP Admin apps remained fully operational. We first detected the issue on a single simulator device during routine testing; no live customer devices were impacted.

Resolution:
The Flespi team indicated they rolled back the failed service deployment and fixed the problem. We verified that all Flespi API calls are now succeeding, and we monitored logs and metrics to ensure full recovery.

Status: ✅ Fully resolved as of 11:35 AM UTC.
We apologize for any confusion this may have caused and appreciate your patience. Please reach out to us at support@telematicsinsights.ai if you observe any lingering issues.
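Detecting this incident depended on telling an application-level rejection (403 from a mis-started node) apart from a network-level outage (as in the Flespi uplink incident above), since the two route to different escalation paths. A hedged sketch of that classification; the labels and probe helper are illustrative, not our real monitoring configuration:

```python
from typing import Optional
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

def label_failure(status: Optional[int], network_error: bool) -> str:
    """Map a probe outcome to an alert label.

    - "unreachable": network-level failure (uplink outage, DNS, timeout)
    - "rejected":    endpoint is up but refusing calls (401/403, e.g. a
                     mis-started node after a failed rolling deployment)
    - "degraded":    any other non-2xx response
    - "ok":          2xx response
    """
    if network_error:
        return "unreachable"
    if status in (401, 403):
        return "rejected"
    if status is not None and 200 <= status < 300:
        return "ok"
    return "degraded"

def probe(url: str, timeout: float = 5.0) -> str:
    """Probe url once and classify the result."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return label_failure(resp.status, network_error=False)
    except HTTPError as e:
        return label_failure(e.code, network_error=False)
    except URLError:
        return label_failure(None, network_error=True)

# probe("https://flespi.io")  # e.g. "rejected" during the April incident
```

Separating the pure classification from the network call keeps the routing logic easy to test.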