Show HN: Telescope – an open-source web-based log viewer for logs in ClickHouse

github.com

171 points by r0b3r4 3 days ago

Hey everyone! I’m working on Telescope - an open-source web-based log viewer designed to make working with logs stored in ClickHouse easier and more intuitive.

I wasn’t happy with existing log viewers - most of them force a specific log format, are tied to ingestion pipelines, or are just a small part of a larger platform. Others didn’t display logs the way I wanted.

So I decided to build my own lightweight, flexible log viewer - one that actually fits my needs.

Check it out:

    Video demo: https://www.youtube.com/watch?v=5IItMOXwugY

    GitHub: https://github.com/iamtelescope/telescope

    Live demo: https://telescope.humanuser.net

    Discord: https://discord.gg/rXpjDnEc
piterrro 2 days ago

There's also Logdy (https://github.com/logdyhq/logdy-core), which can work with raw files and comes with a UI, all in a single precompiled binary, so there's no need for installs or setup. If you're looking for a simple solution for browsing log files with a web UI, this might be it! (I'm the author)

  • corytheboyd 2 days ago

    Heyo, I’ve noticed Logdy come up a few times on HN now, and was curious if you explored making it a proper desktop application instead of a two-part UI and CLI application. Did you rule that out for some reason?

    • piterrro a day ago

      I'm not ruling that out, but honestly there hasn't been user feedback pointing to that use case. So far, users love that they can just drop a binary on a remote server and spin up a web UI; same for local environments. The nature of Logdy is that it's primarily designed to work in the CLI. What would be the use case for a desktop app?

nh2 2 days ago

It would be great if the docs could describe a bit what exactly one has to do to use this as an alternative to Grafana Loki.

How do I get my logs (e.g. local text files from disk like nginx logs, or files that need transformation like systemd journal logs) into ClickHouse in a way that's useful for Telescope?

What kind of indices do I have to configure so that queries are fast? Ideally with some examples.

How can I make sure that full-text substring search queries are fast (e.g. "unexpected error 123")? When I filter with a regex, is that still fast / does it use indices?

From the docs it isn't quite clear to me how to configure the system so that I can just put a couple TB of logs into it and have queries be fast.
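
For example, would a plain setup like this be enough (just my rough, untested guess at a schema), and what indices would I add on top?

    CREATE TABLE logs
    (
        timestamp DateTime64(3),
        level     LowCardinality(String),
        service   LowCardinality(String),
        message   String
    )
    ENGINE = MergeTree
    PARTITION BY toDate(timestamp)
    ORDER BY (service, timestamp);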

Thanks!

  • r0b3r4 2 days ago

    Telescope is primarily focused on log visualization, not on log collection or preparing ClickHouse for storage. The system does not currently provide (and I think never will) built-in mechanisms for ingesting logs from any source.

    I will consider providing a how-to guide on setting up log storage in ClickHouse, but I’m afraid I won’t be able to cover all possible scenarios. This is a highly specific topic that depends on the infrastructure and needs of each organization.

    If you’re looking for an all-in-one solution that can both collect and visualize logs, you might want to check out https://www.highlight.io or https://signoz.io or other similar projects.

    And also, by the way, I’m not trying to create a "Grafana Loki killer" or a "killer" of any other tool. This is just an open source project - I simply want to build a great log viewer without worrying about how to attract users from Grafana Loki or Elastic or any other tool/product.

    • nh2 a day ago

      I think such a guide would be great.

      My perspective:

      A lot of people who operate servers (including me) just want to view and search their logs -- fast and convenient. Your tool provides that. They don't care whether the backend uses ClickHouse or Postgres or whatever; that's just a pesky detail. They understand they may have to deal with it to some extent, but they don't want to have to become experts at those systems, or to figure everything out by themselves, just to read their logs.

      Also, those things are general-purpose databases, so they don't tell the user how best to set them up so your tool can produce results quickly and conveniently. So currently, neither side helps the user with that.

      That's why it's best if your tool's docs give some basic tips on how to achieve the most commonly desired goals: some basic way to get logs into the backend DB (if there's a standard way to do that for text log files and journald, it's probably fine to just link it), and docs on what indices Telescope needs to be faster than grep for typical log search tasks (ideally with a quick snippet or link on how to set those up, for people who haven't used ClickHouse before).
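
      Even a rough ingestion snippet would go a long way. For illustration, my untested guess at loading a plain text log via ClickHouse's file() table function (the timestamp is faked with now64() here; real ingestion would parse it from the line):

          -- untested guess: the file must live under the server's user_files directory
          INSERT INTO logs (timestamp, level, service, message)
          SELECT now64(3), 'info', 'nginx', line
          FROM file('nginx/access.log', 'LineAsString', 'line String');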

      So overall, it's fine if the tool doesn't do everything. But it should say what it needs to work well.

  • sleepybrett 2 days ago

    As someone who has never worked anywhere that tried it out, what do you not like about Loki? I've been stuck in the very expensive Splunk and OpenSearch/Kibana mines for many years, and I find it an amazingly frustrating place to be. I honestly find that I can debug via logs better using grep than either of those tools.

    • nh2 a day ago

      Loki works fine for what it does; the problem is what it lacks.

      It doesn't do full-text search indices. So if you just search for some word across all your logs (to find, e.g., when a rare error happened), it is very slow (it runs the equivalent of grep, at 500 MB/s on my machine). If you have a couple TB, it takes half an hour!

      As you say, even plain grep is usually faster for such plain linear search.

      I want full-text indices so that such searches take milliseconds, or a couple seconds at most.

      • sleepybrett a day ago

        See, to me, having at one point been responsible for maintaining an ES instance for logs (and exporters and all the other bits), the price you pay in engineering hours and hardware costs to maintain all those indexes while keeping ES from absolutely melting down is way too high.

        I think grep is amazing, but yes, if you unleash it on 'all the logs' without first narrowing down to a time frame or some other taxonomy, it's going to be slow. This seems like a skill issue, frankly.

        Also, full-text indexes for all the things are generally FASTER, of course, but seconds/milliseconds? How much hardware are you throwing at logs? Most people only go to logs in an emergency, during an incident and the like. How much are you paying just to index a bunch of shit that will probably never even be looked at, and how much are you paying for hardware to run queries on those indexes that will be largely idle?

        The problem with ES/Splunk for logs is that they were not designed for logs, so they are both, in my view, overkill AND underkill for the task. Full fuzzy text search is probably overkill; the UI for the task of dealing with log data is underkill. (The cloud bills are certainly overkill.)

        I'm currently doing platform engineering at a company in the top half of the Fortune 500. Honestly, probably about 90-95% of the time when I'm helping a team troubleshoot their service on Kubernetes, I'm using the kubectl `stern` plugin (shows log streams from all pods that match a label query) plus grep/sed/awk/jq if it's ongoing; it's just waaaaay more responsive. If it's a 'weird thing happened last night, investigate' task and I have to go to Kibana, it's just a much worse experience overall.

        • nh2 11 hours ago

          It should not take engineering time to have a database compute full-text indices. In sensible systems, you do "CREATE INDEX" and done.
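
          In ClickHouse terms, as far as I understand (untested by me), that'd be roughly a token bloom-filter skip index, so substring searches can skip most of the stored data:

              ALTER TABLE logs ADD INDEX message_tokens message TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 4;
              ALTER TABLE logs MATERIALIZE INDEX message_tokens;  -- build it for existing rows

              -- a search like this can then use the index instead of scanning everything:
              SELECT timestamp, message
              FROM logs
              WHERE hasToken(message, 'unexpected') AND message LIKE '%unexpected error 123%';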

          To search multiple TBs of logs, you need a single $40/month server with an 8 TB SSD, running sensible software and a sensible index algorithm.

          I agree that ElasticSearch is bloated and needs undue engineering time. But it doesn't need to be that way.

          For example, Quickwit finds things in under a second.

          It's a huge improvement when queries go from 10 minutes linear search to instant.

          (Its index is still not perfect for me because it doesn't fully support simple exact prefix/infix search, but otherwise it does the job fast with few resources.)

          > Full fuzzy text search is probably overkill

          Yes, I think most people don't need fuzzy search for log search. They just need indexed grep.

          > I think grep is amazing, but yes, if you unleash it on 'all the logs' without first narrowing down to a time frame or some other taxonomy, it's going to be slow. This seems like a skill issue, frankly.

          Right, grep is not the tool for the job. It neglects all the sensible algorithms that solve this problem. It's like saying "I don't use binary search, only linear search", and spending human effort to pre-select the range so that it's fast enough.

          When you're searching for the rare bugs, you also can't just limit the time frame.

azophy_2 2 days ago

Is there any comprehensive guide to building an observability stack using OTel, ClickHouse, and Grafana? I think this is a solid stack for logging & tracing, but I've been looking into it and haven't found any authoritative reference for this particular stack (unlike the ELK & LGTM stacks).

vortegne 3 days ago

Unfortunate name choice, as @csh602 mentioned

Viewer looks pretty good though. Reminds me of DataDog UI, but not as slow. Will play around more, thanks!

  • r0b3r4 2 days ago

    As we all know, naming is an unsolvable problem in IT :)

    Regarding performance - 95% of Telescope's speed depends on how fast your ClickHouse responds. If you have a well-optimized schema and use the right indexes, Telescope's overhead will be minimal.
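
    For example, even just a sensible sorting key and a TTL go a long way. A generic sketch, nothing Telescope-specific (adapt the columns to your own data):

        -- order by what you filter on most, partition by day,
        -- and expire old logs automatically
        CREATE TABLE logs
        (
            timestamp DateTime64(3),
            service   LowCardinality(String),
            level     LowCardinality(String),
            message   String
        )
        ENGINE = MergeTree
        PARTITION BY toDate(timestamp)
        ORDER BY (service, level, timestamp)
        TTL toDateTime(timestamp) + INTERVAL 30 DAY;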

tacker2000 3 days ago

Looks cool, I might try it out!

I need a central place, something simple where I can actually read the contents of the logs generated by the dozens of services that I run for clients, etc… instead of stupidly SSH’ing to every server.

Does this fit the use case?

I tried Loki once but it was painful to set up and more geared toward aggregating events and stats.

  • r0b3r4 2 days ago

    Thanks! Telescope is more focused on displaying logs and providing access to them rather than handling log ingestion. In the future, I plan to support various sources like Docker, k8s, and files to improve the local development workflow. However, it's unlikely that Telescope will support fetching logs from remote servers via SSH, as that's not its primary use case.

  • xorcist 2 days ago

    If all you want is the plaintext logs, there's no need to bother with special products. Just point syslog in the right direction as if it was 1995. Everything can log to syslog already. Things like Splunk, Graylog and Kibana are mostly for visualization and query interfaces.

  • homebrewer 2 days ago

    Graylog is a pretty standard solution to your problem (I believe), although they've been locking down their licensing more and more as time goes on.

  • iwanhae 3 days ago

    I’m curious to know what makes the Loki installation process so painful.

    I’m interested in learning more about the software installation experience.

    • samsk 2 days ago

      The only problematic thing might be the relatively frequent storage changes (they like to deprecate the primary storage driver), but otherwise it's IMHO easy to set up. I'm running it on several projects, because it doesn't need a beefy machine like Elastic or even ClickHouse does.

mikeshi42 2 days ago

This looks pretty cool, I love seeing more ClickHouse-native logging platforms springing up! When I talk to other engineers, it's a surprisingly underrated platform to build on.

I'm one of the authors of an existing log viewer (HyperDX) and was curious if we were one of those platforms that didn't fit your needs? Always love learning what use cases inspire different approaches.

kbumsik 2 days ago

How is it different from Signoz, a complete observability stack (including logs) built on top of ClickHouse?

  • r0b3r4 2 days ago

    Telescope is focused purely on viewing existing log data. It doesn’t enforce any specific ingestion setup or schema and doesn’t support traces or session storage.

    You can think of it as just one part of a logging platform, where a full platform might consist of multiple components like a UI, ingestion engine, logging agent, and storage. In this setup, Telescope is only the UI.

smjburton 2 days ago

Would this also work with something like Plausible (https://github.com/plausible/analytics) which uses ClickHouse to store web analytics data, or is it primarily for log data?

  • r0b3r4 2 days ago

    Although Telescope is focused on application log data, it can be used for any type of data, as long as it's stored in ClickHouse and has some time fields.

    At the moment, I have no plans to support arbitrary data visualization in Telescope, as I believe there are better BI-like tools for that scenario.

    • smjburton 2 days ago

      Yeah that's fair, thank you.

darkstar_16 2 days ago

I like how this is mostly based on the Kibana UI. It makes it easier to convince other people to move to it.

  • r0b3r4 2 days ago

    To be honest, I was more inspired by DataDog :)

    • danmur 2 days ago

      I've used graylog the most so that's what it looks like to me :P. I like how you can do a bunch of extraction stuff right there in the query interface though, that's awesome. It seems like a very thoughtful UI.

  • sleepybrett 2 days ago

    Honestly that pushes me away from it. I find kibana to be a very frustrating experience.

    • mikeshi42 2 days ago

      (not op) curious what you find frustrating about it?

      • sleepybrett a day ago

        At enterprise scale, on the backend you end up paying for a bunch of indexing you will likely never use. On top of that, you spend a LOT of money in engineering hours setting up indexes for many teams, all with different log formats, so the whole thing doesn't just melt down.

        On the Kibana side, their query language isn't shared by any other tool, at least any that I use, meaning that in the middle of an outage I end up chasing my tail reading docs on how to query what I want. Results often come back slowly, and it's very hard to just export the logs you do find to text files so you can ingest them into other tools.

        I mean, I came up on cat/grep/awk/sed/less/tail (and more recently jq for JSON logs)... it wasn't perfect, but it was RESPONSIVE and portable.

        I just think that tools like ES/Splunk weren't conceived for dealing with logs (especially if your logs come in many formats) and are both overkill and at the same time underkill for the task. It's like using a ball-peen hammer to drive nails: you can certainly DO it, but a claw hammer is cheaper and a more ergonomic experience.

new_user_final 2 days ago

Rollbar has a feature to upload JavaScript sourcemap files. When I am viewing logs from minified JS files, it automatically applies the sourcemaps and shows the correct line numbers.

Is there any open source tool that does the same?

ericb 2 days ago

Very cool! Just digging in. Does it work with the new JSON format ClickHouse introduced recently?

Also, what service did you use to make the video, if you don't mind my asking?

  • r0b3r4 2 days ago

    Thanks!

    I haven't tested the new JSON format in ClickHouse yet, but even if something doesn't work at the moment, fixing it should be trivial.
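
    For anyone curious, that's the new JSON column type. Going by the ClickHouse docs, it looks roughly like this (I haven't run this against Telescope, and the setting name may differ between versions):

        SET allow_experimental_json_type = 1;  -- ClickHouse 24.8+; may be renamed in newer releases

        CREATE TABLE json_logs
        (
            timestamp DateTime64(3),
            data      JSON
        )
        ENGINE = MergeTree
        ORDER BY timestamp;

        -- subcolumns are addressed with dot syntax:
        SELECT data.level, data.message FROM json_logs;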

    As for the video service, it wasn’t actually a service but rather a set of local tools:

    - Video capture/screenshots - macOS default tools

    - Screenshot editing - GIMP

    - Voice generation - https://elevenlabs.io/app/speech-synthesis/text-to-speech

    - Video editing - DaVinci Resolve 19

akdor1154 2 days ago

Cool! I'm currently playing with the Grafana ClickHouse connector to do something broadly similar - are these compatible? Can Telescope read an OTEL logs table in ClickHouse?

  • r0b3r4 2 days ago

    Yes, this is exactly where Telescope can be useful (and actually, the way Grafana displays logs was my motivation for writing my own viewer)

    Telescope can work with any table in ClickHouse. Of course, not every single ClickHouse type has been tested, but there shouldn’t be any issues with the most common ones
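
    For context, the OTEL exporter's default logs table is just another ClickHouse table, so a source can point straight at it. The column names below are from the exporter's default schema as I remember it; double-check against your exporter version:

        SELECT Timestamp, SeverityText, ServiceName, Body
        FROM otel_logs
        ORDER BY Timestamp DESC
        LIMIT 100;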

    If you want, you can check how it works with the OTEL schema in the live demo here: https://telescope.humanuser.net/sources/otel-demo/explore

oulipo 2 days ago

Very cool! Would be nice to have a library for the frontend components for the log viewer, to be able to reuse them in other projects :)

  • r0b3r4 2 days ago

    Nice idea! However, I’m not experienced enough with Vue (and frontend) development to properly design an exportable component. So, at least for now, I don’t think I’ll be able to make it happen myself.

alrocar 2 days ago

Awesome stuff! Just published something similar today

Just curious, what is the most challenging thing in your opinion when building such a log viewer?

  • r0b3r4 2 days ago

    That sounds great! Do you have a link? I'd love to check it out.

    For me, the most challenging parts are still ahead - live tailing and a plugin system to support different storage backends beyond just ClickHouse. Those will be interesting problems to solve! What was the biggest challenge for you?

VectorLock 3 days ago

Look out, Kibana, they're gunning for you!

0x3331 a day ago

Very cool! Tested it with the demo, very smooth!

helsinki 2 days ago

Just curious, as I'm in the market - why should I use this instead of the ELK stack?

  • r0b3r4 2 days ago

    Well, if you're happy with ELK, you should definitely use it! As I mentioned earlier, I’m not trying to sell anything or convince people to switch from their current solutions - just offering an alternative perspective on how things can be done.

    From my perspective, a ClickHouse-based setup can be cheaper and possibly faster under certain conditions – here’s a comparison made by ClickHouse Inc.: https://clickhouse.com/blog/clickhouse_vs_elasticsearch_the_...

    My motto is "Know your data". I’m not a big fan of schemaless setups - I believe in designing a proper database schema rather than just pushing data into a black hole and hoping the system will handle it.

  • SkipperCat 2 days ago

    We've found that ClickHouse is extremely fast for write-once/read-many, so it's great for recording logs. If Telescope provides the search/index features that Elastic provides, this could be a nice performance bump. FWIW, I haven't tested Telescope, so this is all just my musing.

Dowwie 2 days ago

It can display logs in-context. Awesome!

csh602 3 days ago

Looks simple and clean! Big ups for the good screenshots, docs, and quickstart (Docker) instructions.

Regarding the name, "Telescope" is also the name of a Neovim fuzzy finder[0] that dominates the ecosystem there; its results are what appear when searching "telescope github".

[0]: https://github.com/nvim-telescope/telescope.nvim

  • Purplish9893 2 days ago

    Clearly we need an extension to search this new service with telescope.nvim. telescope-telescope.nvim.

  • r0b3r4 2 days ago

    Well, every single name I came up with was already taken and present on GitHub. So...

throwaw12 3 days ago

This one seems to be optimized for log viewing at the moment. Are there any DataDog alternatives built on top of ClickHouse that support the full range of OpenTelemetry features?