TLDR sparrow faced issues with ZincObserve integration, log filtering, and missing logs. Ashish and Hengfei provided solutions, including using str_match and an updated image. A future release will provide more error responses for ingestion.
1. Can you elaborate on what you mean by not being able to find logs?
2. To send data to S3, if you are using local mode, set ZO_LOCAL_MODE_STORAGE = "s3" in the env variables
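As a minimal sketch, the env setup for local mode backed by S3 might look like the lines below (set them however your deployment passes env variables, e.g. Helm values or the container spec). Only ZO_LOCAL_MODE_STORAGE comes from this thread; the bucket and region variable names are assumptions, so verify them against the ZincObserve docs.
```
# Local mode, but persist data to S3 instead of local disk (variable name from this thread)
ZO_LOCAL_MODE_STORAGE=s3
# Assumed variable names and values for the bucket/region -- check the ZincObserve docs
ZO_S3_BUCKET_NAME=my-zincobserve-bucket
ZO_S3_REGION_NAME=us-east-1
```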
Actually I'm trying to filter out logs based on some container
check this
or you can set up cluster mode
3. Delete policy isn't available yet
it is on the roadmap
for the search request, you need to check:
if you still face an issue
let us know the specifics of it
we will be able to assist
I'm not able to find logs for this specific container. Am I doing anything wrong here?
• match_all searches only the fields that are configured for full text search. The default set of fields is `msg, message, log, logs`. If you want more fields to be scanned during full text search, you can configure them under stream settings. You should use `str_match` for full text search in specific fields.
`try using str_match`
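As a sketch, a full SQL-mode query that filters one container with str_match could look like the following; the stream name `default` is inferred from the ingest requests later in this thread, and the container name is only an example:
```
-- filter the "default" stream to records whose container name matches 'orchestrator'
SELECT * FROM "default" WHERE str_match(kubernetes_container_name, 'orchestrator')
```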
how to filter logs of a specific pod?
instead of match_all
use function str_match
nothing is there with the same field
str_match(kubernetes_container_name,'orchestrator')
Nothing
do one thing
remove everything from search panel
add kubernetes_container_name to search result
done, now I'm not able to list the index
call?
yes that would be helpful
join huddle
{"code":20008,"message":"Search SQL execute error","error_detail":"Error during planning: Projection references non-aggregate values: Expression tbl._timestamp could not be resolved from available columns: tbl.key, COUNT(UInt8(1))"}
Can you please share backend logs as well
I mean ZincObserve server logs
[2023-03-31T11:38:56Z INFO zincobserve::service::search::datafusion::exec] Query agg:_count took 2.464 seconds.
[2023-03-31T11:38:56Z INFO zincobserve::service::search::datafusion::exec] Query agg sql: select date_bin(interval '5 second', to_timestamp_micros("_timestamp"), to_timestamp('2001-01-01T00:00:00')) AS key, count(*) AS num FROM tbl WHERE ((_timestamp >= 1680261833538000 AND _timestamp < 1680262733538000) ) GROUP BY key ORDER BY key
[2023-03-31T11:38:56Z INFO actix_web::middleware::logger] 10.244.31.9 "POST /api/default/default/_json HTTP/1.1" 200 69 "9208" "-" "Fluent-Bit" 0.005976
[2023-03-31T11:38:56Z INFO actix_web::middleware::logger] 10.244.18.51 "POST /api/default/default/_json HTTP/1.1" 200 68 "1850" "-" "Fluent-Bit" 0.002087
[2023-03-31T11:38:56Z INFO actix_web::middleware::logger] 10.244.18.51 "POST /api/default/default/_json HTTP/1.1" 200 68 "8552" "-" "Fluent-Bit" 0.004857
[2023-03-31T11:38:56Z INFO actix_web::middleware::logger] 10.244.31.9 "POST /api/default/default/_json HTTP/1.1" 200 69 "14241" "-" "Fluent-Bit" 0.029486
[2023-03-31T11:38:56Z ERROR zincobserve::service::search::datafusion::exec] aggs sql execute failed, session: Session { id: "3b970678-1409-4d6f-b5eb-e7c1a395320c", data_type: Cache }, sql: select date_bin(interval '5 second', to_timestamp_micros("_timestamp"), to_timestamp('2001-01-01T00:00:00')) AS key, count(*) AS num FROM tbl WHERE ((_timestamp >= 1680261833538000 AND _timestamp < 1680262733538000) ) GROUP BY key ORDER BY key, err: Plan("Projection references non-aggregate values: Expression tbl._timestamp could not be resolved from available columns: tbl.key, COUNT(UInt8(1))")
[2023-03-31T11:38:56Z ERROR zincobserve::service::search::cache] datafusion execute error: Error during planning: Projection references non-aggregate values: Expression tbl._timestamp could not be resolved from available columns: tbl.key, COUNT(UInt8(1))
[2023-03-31T11:38:56Z ERROR zincobserve::handler::http::request::search] search error: ErrorCode(SearchSQLExecuteError("Error during planning: Projection references non-aggregate values: Expression tbl._timestamp could not be resolved from available columns: tbl.key, COUNT(UInt8(1))"))
[2023-03-31T11:38:56Z INFO actix_web::middleware::logger] 10.244.31.9 "POST /api/default/_search?type=logs HTTP/1.1" 500 205 "310" "
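For background (not from this thread): DataFusion typically raises this class of planning error when a grouped query selects a column that is neither in the GROUP BY nor aggregated, roughly like the hypothetical sketch below. The aggregation SQL in the log above groups and orders by `key` only, which is why the error pointed to a server-side planning bug rather than a bad user query; it was fixed in the dev image shared later in the thread.
```
-- hypothetical shape that typically triggers "Projection references non-aggregate values":
SELECT _timestamp, count(*) AS num FROM tbl GROUP BY key
-- _timestamp is neither grouped nor aggregated, so the planner cannot resolve it
```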
checking
Sorry, I can't find the problem from these logs. Can you compress the `data` directory and send it to us for debugging?
I get the same error, thanks for your data.
I will debug that.
public.ecr.aws/zinclabs/zincobserve-dev:v0.3.1-75ec795
sparrow we fixed the issue, you can use this image to test.
Thanks Hengfei, will check today with this image and let you know in any case
I think it's working fine now. But I have one question: I think some logs are getting missed from specific pods
Like what?
like in the pod logs I can see some errors or details, but they are not visible in ZincObserve
do you have any functions
on the streams which may result in discarding ingested records?
Nope
Ashish, maybe we can ask sparrow to wait for our next release; we will add more error responses for ingestion.
They will see an error if we drop some records.
Any ETA for the same?
sparrow
Fri, 31 Mar 2023 10:58:09 UTC Hello Team, I was trying to integrate ZincObserve into our cluster, and it's deployed successfully. But I have a few questions. 1. I'm not able to find the logs accordingly. 2. Want to send the data to S3 and fetch the data from S3 only. 3. Want to apply a delete policy for a specific time interval. How do I achieve all these things?