Logging and tracing
Answered
Saint Hubert Jura Hound posted this in #help-forum
Saint Hubert Jura HoundOP
How do y'all handle logging and tracing in self-hosted production? Was trying to use grafana+victorialogs but every multi-line log was getting its own log line in grafana/vmlogs instead of getting grouped together (bit hard to explain)
So I just stopped collecting logs for nextjs for now, but it's unfortunate not being able to reference logs from traces
Answered by B33fb0n3
Inside the nextjs app I like to use the @vercel/otel package, as it configures everything for nextjs. That way, all the nextjs-relevant stuff is already handled.
For my own logs inside an api route, or when I want to debug something or log an id or whatever, I use the Pino logger. Pino is already pretty nice in general, and there is a package that automatically sends the logs to OTel when the .env variables are configured correctly. That way custom logs are collected as well.
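For context, the @vercel/otel setup mentioned above is essentially a single file in the project root (or `src/`). A minimal sketch — the service name is made up, and on older Next.js versions you may need to enable the `instrumentationHook` experimental flag:

```typescript
// instrumentation.ts — Next.js picks this file up automatically
import { registerOTel } from '@vercel/otel';

export function register() {
  // serviceName is a placeholder; it becomes the service.name resource attribute
  registerOTel({ serviceName: 'my-next-app' });
}
```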
20 Replies
I like to use the LGTM stack with otel-collector. Metrics help me identify that there *is* an issue, traces then tell me what happened and why, and the logs give me the exact details about the specific request.
When it comes to monitoring, your mental model needs to change a bit. Before: "I log everything, that helps me." After: split between metrics, tracing, and logging to have the most consistent and useful information.
There are several tutorials on how to set up the LGTM stack with otel-collector
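As a rough sketch of that wiring (the `tempo`, `loki`, and `mimir` hostnames are placeholders for your own backends), an otel-collector config routing all three signals might look like:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlphttp/tempo:
    endpoint: http://tempo:4318        # traces backend (placeholder host)
  otlphttp/loki:
    endpoint: http://loki:3100/otlp    # Loki 3.x native OTLP ingest (placeholder host)
  prometheusremotewrite:
    endpoint: http://mimir:9009/api/v1/push  # metrics backend (placeholder host)

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp/tempo]
    logs:
      receivers: [otlp]
      exporters: [otlphttp/loki]
    metrics:
      receivers: [otlp]
      exporters: [prometheusremotewrite]
```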
Saint Hubert Jura HoundOP
Hey brother, I have otel running and do use something similar to LGTM. My problem isn't tracing, it's just that I don't know how to properly send/collect nextjs logs. Structured logging would be my preferred choice but I haven't found a simple way to set it up yet
My issue specifically was that by default, the way nextjs logs anything, even startup logs, they aren't grouped together in a single line. They're split by.. idek. Vibes I guess? Vercel just wanted to make them look pretty visually, but when you try to collect them they show up as individual messages in Grafana. I'll send a screenshot example later
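One common fix for that symptom, if the logs are scraped as raw container output, is a multiline stage in the log shipper. A hedged sketch for Promtail — swap in the equivalent for whatever agent feeds your VictoriaLogs instance, and note the `firstline` regex assumes entries start with a timestamp:

```yaml
# promtail pipeline (illustrative): lines NOT matching firstline are
# appended to the previous entry instead of becoming their own entry
pipeline_stages:
  - multiline:
      firstline: '^\d{4}-\d{2}-\d{2}'   # assumption: each log entry starts with a date
      max_wait_time: 3s                  # flush a partial entry after 3s of silence
```

That said, switching to structured JSON logging (one JSON object per line, as Pino produces) sidesteps the grouping problem entirely.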
Answer
Saint Hubert Jura HoundOP
Oh that sounds pretty nice. It's server-side only, I assume?
It is, yea. It reads the .env variables, and my otel collector has a basic auth header so no unknown apps or persons can ingest data. But only the server has access to those variables.
Technically you can expose it, but I would be very, very careful about what the client sends you. Imagine waking up in the middle of the night to alerts that were triggered by an evil client.. not good
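For reference, the standard OTel SDK environment variables that setups like this read look roughly as follows — endpoint and credentials are placeholders, and the basic-auth value is the base64 of `user:pass` with the space percent-encoded as the OTLP headers spec requires:

```shell
# .env (server-side only — no NEXT_PUBLIC_ prefix, so it never reaches the client)
OTEL_SERVICE_NAME=my-next-app
OTEL_EXPORTER_OTLP_ENDPOINT=https://otel.example.com
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Basic%20dXNlcjpwYXNz
```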
Saint Hubert Jura HoundOP
Yea no indeed xD I mean I'm sure there's a way to do it securely, cuz Sentry does something similar. I was wondering if there's a way to hook up Sentry errors to OTel traces. But that wouldn't include normal info logging on the client side, so not really enough for full observability
Anyway I'll check out Pino. Would you be okay with sharing your utility function to set up Pino, if you use one?
sure, this is my logger file:

```typescript
import pino from 'pino';
// If you want to mix trace ids into the log objects, you also need:
// import { context, trace } from '@opentelemetry/api';

const isProduction = process.env.NODE_ENV === 'production';

function createLogger(): pino.Logger {
  const level = process.env.LOG_LEVEL ?? 'info';

  if (isProduction) {
    // In production, ship logs to the OTel collector (configured via .env)
    const transport = pino.transport({
      target: 'pino-opentelemetry-transport',
      // options: {
      //   formatters: {
      //     log(object: any) {
      //       const span = trace.getSpan(context.active());
      //       if (!span) return object;
      //       const spanContext = span.spanContext();
      //       return {
      //         ...object,
      //         trace_id: spanContext.traceId,
      //         span_id: spanContext.spanId,
      //         trace_flags: spanContext.traceFlags,
      //       };
      //     },
      //   },
      // },
    });
    return pino({ level }, transport);
  }

  // In development, pretty-print to the console instead
  return pino({
    level,
    transport: {
      target: 'pino-pretty',
      options: { colorize: true },
    },
  });
}

let loggerInstance: pino.Logger | null = null;

export function getLogger(): pino.Logger {
  if (!loggerInstance) {
    loggerInstance = createLogger();
  }
  return loggerInstance;
}
```

Some of the commented-out things may be needed for you, for example to mix in the current running trace or span id (or, if you get something from Sentry, the Sentry details).

After that you can use Pino as usual:

```typescript
const logger = getLogger();
logger.info({ something: 'cool' }, 'Oh hell no: something cool appeared');
```

Make sure to install the pino-opentelemetry-transport package
Saint Hubert Jura HoundOP
I'm about to launch a (game) server hosting platform myself and am slightly terrified. Lots of stuff I haven't even thought about yet, like I just saw on ur site you display the VAT, and getting that to work across regions has gotta be hard. I'm sure there's a bunch more like that too
@B33fb0n3 which project? 👀
Saint Hubert Jura HoundOP
Netcup
ah yea, can be quite hard sometimes tbh
Saint Hubert Jura HoundOP
Yeah especially when ur doing business internationally it seems
yea, international stuff is a whole different thing. If you want to start, start slow in your region, then get bigger and bigger and eventually you will be a big player as well
Saint Hubert Jura HoundOP
Yeah im definitely only gonna be in the eu for now
Appreciate the help man