#openobserve

Troubleshooting an OpenObserve Issue on a Linux Machine

TL;DR: Sushma had trouble seeing pods and creating multiple streams in OpenObserve. Prabhat and Hengfei suggested upgrading to v0.7.0, adjusting the SQL syntax, and amending the Fluent Bit configuration. Despite progress, the issue remains partially unresolved.

Nov 14, 2023
Sushma
06:44 AM
Hi #openobserve team,

I set up self-hosted OpenObserve on a Linux machine and configured Fluent Bit as a DaemonSet to collect and push logs from my minikube deployments to it.
But I am unable to see all of the pods that are running in my minikube.
Also, please help me with the proper SQL syntax to check the logs of a specific pod.

OpenObserve version: v0.6.4

Let me know if you need any other information.
Prabhat
03:15 PM
We will need more information to help you figure out why the logs are not showing. Try 0.7.0: it has prebuilt ingesters that will allow you to get logs in quickly. Upgrade and your life will be simplified.
Prabhat
03:15 PM
Regarding SQL syntax, take a look at the example queries here: https://openobserve.ai/docs/example-queries/
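For example, a query to pull the logs of one specific pod might look like this (a sketch: the stream name "default", the kubernetes_pod_name field, and the pod name itself are assumptions based on a standard Fluent Bit kubernetes-filter setup, not taken from this thread):

```sql
-- Hypothetical example: fetch recent logs for a single pod from the "default" stream.
-- "kubernetes_pod_name" is the field the Fluent Bit kubernetes filter normally adds.
SELECT *
FROM "default"
WHERE kubernetes_pod_name = 'my-app-7d9f8'  -- hypothetical pod name
ORDER BY _timestamp DESC
LIMIT 100
```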
Prabhat
03:17 PM
There are also a lot of functions available here: https://openobserve.ai/docs/functions/. You could use these too.
Nov 15, 2023
Sushma
07:40 AM
Thanks for the response, Prabhat.
I would like to create multiple streams in OpenObserve.
Apart from the default stream, I was able to create one new stream.
But when I try to create another new stream, the changes aren't reflected.

Please help with this.
Hengfei
07:41 AM
How did you create multiple streams?
Sushma
07:52 AM
I am using Fluent Bit to export the logs from minikube to OpenObserve.

I have defined the output section like this in the fluent-bit ConfigMap:

[OUTPUT]
    URI /api/default/allfunds/_json
    Host **
    Port 5080
    tls Off
    Format json
    Json_date_key _timestamp
    Json_date_format iso8601
    HTTP_User *
    HTTP_Passwd *
    compress gzip

Here, "allfunds" is my stream name.
Hengfei
07:54 AM
After changing the config, you may need to recreate the fluent-bit pod. Did you do that?
Hengfei
07:54 AM
Also, you can have a look at the OpenObserve logs; they show the HTTP request logs by default.
Hengfei
07:54 AM
You can see what the ingest request is.
Sushma
08:54 AM
Yes, after changing the ConfigMap I recreated the pod.
Sushma
09:17 AM
I am able to create multiple streams in OpenObserve now,
but I am unable to see any pods in the OpenObserve UI. I checked the fluent-bit logs and am getting the warnings below:

[2023/11/15 09:14:55] [ warn] [engine] failed to flush chunk '1-1700036229.123663863.flb', retry in 1508 seconds: task_id=1751, input=tail.1 > output=es.0 (out_id=0)
[2023/11/15 09:14:56] [ warn] [engine] failed to flush chunk '1-1700035046.129490961.flb', retry in 14 seconds: task_id=565, input=tail.1 > output=es.0 (out_id=0)
[2023/11/15 09:14:56] [ warn] [engine] failed to flush chunk '1-1700034792.129911216.flb', retry in 947 seconds: task_id=311, input=tail.1 > output=es.0 (out_id=0)

Please help me resolve this.
Sushma
09:19 AM
I am not getting any "error" logs in OpenObserve.
Currently using OpenObserve version 0.7.0.
Hengfei
09:20 AM
I can't see the reason from those errors.
Hengfei
09:20 AM
For now, is it working with the default stream?
Hengfei
09:21 AM
Maybe you need to scroll up and check for the root error in the fluent-bit logs.
Hengfei
09:21 AM
This error tells us nothing.
Sushma
09:28 AM
I have removed the configuration of the default stream.

For now, I have 2 streams:
-> Allfunds
-> Kestra
In the UI, I am unable to fetch the pod names for allfunds; OpenObserve is throwing an error like "Unable to fetch the field values".

Please find the screenshot for reference.
[Screenshot attached]
Sushma
09:29 AM
And below is the full trace of the error:

[2023/11/15 07:48:17] [ info] [output:http:http.2] 172.16.22.22:5080, HTTP status=200
{"code":200,"status":[{"name":"kestra","successful":4,"failed":0}]}
[2023/11/15 07:48:18] [ warn] [engine] failed to flush chunk '1-1700034497.124087455.flb', retry in 8 seconds: task_id=16, input=tail.1 > output=es.0 (out_id=0)
[2023/11/15 07:48:18] [ warn] [engine] failed to flush chunk '1-1700034491.123633763.flb', retry in 18 seconds: task_id=10, input=tail.1 > output=es.0 (out_id=0)
Hengfei
09:29 AM
[2023/11/15 07:48:18] [ warn] [engine] failed to flush chunk '1-1700034491.123633763.flb', retry in 18 seconds: task_id=10, input=tail.1 > output=es.0 (out_id=0)
Hengfei
09:29 AM
It reports that the es.0 output has a problem.
Hengfei
09:30 AM
What is the first es output?
Sushma
09:30 AM
OK, let me provide you with the complete fluent-bit configuration.
Sushma
09:34 AM
This is the complete fluent-bit configuration:

    [PARSER]
        Name docker_no_time
        Format json
        Time_Keep Off
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
fluent-bit.conf: |
    [SERVICE]
        Daemon Off
        Flush 1
        Log_Level info
        Parsers_File /fluent-bit/etc/parsers.conf
        Parsers_File /fluent-bit/etc/conf/custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port 2020
        Health_Check On

    [INPUT]
        Name tail
        Path /var/log/containers/*_allfunds*.log
        multiline.parser docker, cri
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On

    [INPUT]
        Name tail
        Path /var/log/containers/*_kestra*.log
        multiline.parser docker, cri
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On

    [INPUT]
        Name systemd
        Tag host.*
        Systemd_Filter _SYSTEMD_UNIT=kubelet.service
        Read_From_Tail On

    [OUTPUT]
        Name es
        Match *
        Host *
        Logstash_Format On
        Retry_Limit False

    [OUTPUT]
        Name http
        Match *_allfunds*
        URI /api/default/allfunds/json
        Host *
        Port 5080
        tls Off
        Format json
        Json_date_key _timestamp
        Json_date_format iso8601
        HTTP_User **
        HTTP_Passwd *
        compress gzip

    [OUTPUT]
        Name http
        Match *_kestra*
        URI /api/default/kestra/_json
        Host **
        Port 5080
        tls Off
        Format json
        Json_date_key _timestamp
        Json_date_format iso8601
        HTTP_User **
        HTTP_Passwd **
        compress gzip
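One detail worth double-checking in the configuration above: the kestra output (and the earlier allfunds snippet in this thread) posts to a URI ending in `_json`, while the allfunds output here uses `/api/default/allfunds/json` without the underscore. A sketch of the allfunds output using the `_json` path, with the starred values kept as placeholders for the redacted host and credentials:

```
[OUTPUT]
    Name http
    Match *_allfunds*
    URI /api/default/allfunds/_json
    Host *
    Port 5080
    tls Off
    Format json
    Json_date_key _timestamp
    Json_date_format iso8601
    HTTP_User **
    HTTP_Passwd *
    compress gzip
```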
Hengfei
09:35 AM
[OUTPUT]
        Name es
        Match *
        Host ***
        Logstash_Format On
        Retry_Limit False
Hengfei
09:35 AM
This is es.0.
Hengfei
09:36 AM
From the logs, it reports that es.0 has some problem.
Sushma
09:36 AM
But I am unable to get the exact issue from the logs.
Hengfei
09:37 AM
From the log you provided, yes.
Hengfei
09:37 AM
It should have reported something earlier in the logs.
Hengfei
09:37 AM
You should check all of the fluent-bit logs.
Hengfei
09:40 AM
Just recreate the fluent-bit pod, then check the logs; it should report the error at the beginning.
Sushma
09:46 AM
I just removed the output section containing "es" and recreated the pod. For now it is OK.

But in the OpenObserve UI, I am unable to see the pod names.
The kubernetes_pod_name field is showing null.
[Screenshot attached]
Sushma
09:46 AM
Logs are being ingested into OpenObserve, but the pod names are invisible.
Sushma
09:55 AM
Not only "kubernetes_pod_name": every other field is throwing the same "unable to fetch the field values" error in version 0.7.0.
Hengfei
10:02 AM
If it is null, then it is simply null: there is no pod_name in the data you uploaded.
Hengfei
10:05 AM
This is my configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit
  namespace: logging
  labels:
    : fluent-bit
    : Helm
    : fluent-bit
    : 2.0.8
    : fluent-bit-0.21.7
  annotations:
    : fluent-bit
    : logging
data:
  custom_parsers.conf: |
    [PARSER]
        Name docker_no_time
        Format json
        Time_Keep Off
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
  fluent-bit.conf: |
    [SERVICE]
        Daemon Off
        Flush 1
        Log_Level info
        Parsers_File parsers.conf
        Parsers_File custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port 2020
        Health_Check On

    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        multiline.parser docker, cri
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On

    [INPUT]
        Name systemd
        Tag host.*
        Systemd_Filter _SYSTEMD_UNIT=kubelet.service
        Read_From_Tail On

    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On

    # [OUTPUT]
    #     Name es
    #     Match kube.*
    #     Host elasticsearch-master
    #     Logstash_Format On
    #     Retry_Limit False
  
    [OUTPUT]
        Name http
        Match *
        URI /api/default/alpha2/_json
        Host zinclabs.dev
        Port 443
        tls On
        Format json
        Json_date_key    _timestamp
        Json_date_format iso8601
        HTTP_User 
        HTTP_Passwd *****
        compress gzip
Hengfei
10:05 AM
I see that I have a filter but you don't have it.
Hengfei
10:05 AM
Can you try adding the filter section?
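For reference, the kubernetes filter being discussed enriches each `kube.*` record with pod metadata (pod name, namespace, labels). This is the section from the configuration above:

```
[FILTER]
    Name kubernetes
    Match kube.*
    Merge_Log On
    Keep_Log Off
    K8S-Logging.Parser On
    K8S-Logging.Exclude On
```

Without it, records arrive with no kubernetes_pod_name field, which is why the field shows up as null.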
Sushma
10:06 AM
Sure, let me try adding the filter.
Sushma
10:15 AM
I have added the FILTER section.
My "allfunds" namespace has around 10 pods, but in the OpenObserve UI only one pod is visible; all the remaining ones are invisible.
[Screenshot attached]
Hengfei
10:17 AM
That is not about OpenObserve.
Hengfei
10:17 AM
You need to know that the ingest side does not have the labels.
Hengfei
10:17 AM
After adding the filter, we already have one pod name, right?
Hengfei
10:18 AM
Let's wait for 10 minutes to let more data be ingested, and see if it reports more pod names.
Sushma
10:18 AM
Yes, after adding the filter we got one pod name.
Earlier we used to get "unable to fetch the field values" for all fields.
Sushma
10:19 AM
Sure, let's wait for some time and see.
Sushma
10:35 AM
Now the kubernetes_pod_name field is displaying a null value along with the other pod names.
[Screenshot attached]
Hengfei
10:37 AM
Maybe some logs have no pod_name.
Hengfei
10:37 AM
You can check what those logs are.
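To see which records arrived without pod metadata, a query along these lines (a sketch; the stream name "allfunds" is taken from earlier in the thread) would isolate them:

```sql
-- Inspect records that were ingested without Kubernetes metadata.
SELECT *
FROM allfunds
WHERE kubernetes_pod_name IS NULL
LIMIT 50
```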
