TLDR: Tony asked about the log processing component, comparing it with Loki/Elastic/Splunk. Ashish clarified how search works and described planned updates. Tony also asked about Grafana compatibility, which Prabhat confirmed, and lastly inquired about the storage setup; Prabhat described the available modes.
We don't use an inverted index; rather, we use brute-force search while querying, as you mentioned
We use data partitioning to reduce the scan volume based on the query's time range
Yeah, same as everyone who uses ClickHouse :slightly_smiling_face:
reasonable approach
We also support custom partitioning keys
We will be adding adaptive partitioning based on query usage in the future
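To make the search approach above concrete, here is a minimal, hypothetical sketch (not the project's actual code): partitions are keyed by time, a query's time range prunes the partition list first, and only the surviving partitions are brute-force scanned.

```python
# Hypothetical illustration of time-based partition pruning followed by
# a brute-force scan. Partition keys, data, and the search() helper are
# all made up for this example.
partitions = {
    "2023-07-03T00": ["log line a", "log line b"],
    "2023-07-03T01": ["error: disk full"],
    "2023-07-03T02": ["log line c"],
}

def search(term, start_hour, end_hour):
    """Brute-force substring scan, restricted to partitions in range."""
    hits = []
    for key, lines in partitions.items():
        if start_hour <= key <= end_hour:            # partition pruning
            hits += [l for l in lines if term in l]  # full scan of partition
    return hits

print(search("error", "2023-07-03T01", "2023-07-03T01"))  # ['error: disk full']
```

Without a time range, every partition would be scanned, which is why an unbounded query is expensive in this design.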
Do let us know any feedback or suggestions
Also, if you face any issues, we would be glad to assist
Does it work with Grafana?
I am currently using Prometheus + Loki
There is a plugin for Grafana that you can use.
What kind of storage is used? (I see in the README it's using object storage.) I was doing a setup from source and it didn't require any database setup. Could you provide more detail on storage?
You can use either local disk storage in single-node mode or S3 in HA mode
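A rough sketch of what those two modes imply, using made-up class names (this is a generic illustration, not the project's storage layer): single-node mode writes segments straight to the local filesystem, while HA mode would hand them to an object store such as S3.

```python
import tempfile
from pathlib import Path

class LocalDiskStorage:
    """Single-node mode: segments land on the local filesystem."""
    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, name, data: bytes):
        (self.root / name).write_bytes(data)

class ObjectStorage:
    """HA mode placeholder: a real deployment would use an S3 client here."""
    def put(self, name, data: bytes):
        raise NotImplementedError("wire up an S3 client (e.g. boto3) here")

# Single-node mode needs no database: a directory is enough.
store = LocalDiskStorage(tempfile.mkdtemp())
store.put("segment-0001", b"compressed log segment")
```

This matches the observation that a from-source setup required no separate database: in single-node mode the disk itself is the store.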
Tony
Mon, 03 Jul 2023 02:05:31 UTC
I see: "You should give a time range for each query or it will scan all data, it is a very expensive operate." So it seems like the log processing component is more like Loki than Elastic/Splunk, which support inverted indices?