58 changes: 23 additions & 35 deletions apps/webapp/app/db.server.ts
Contributor Author
🚩 Query event handler may not fire with driver adapters

Both the primary and replica clients register $on('query', ...) handlers for query performance monitoring (apps/webapp/app/db.server.ts:220-222 and apps/webapp/app/db.server.ts:342-343). With Prisma's new client engine (engineType = "client") and driver adapters, the query log event behavior may differ from the binary engine — in some adapter configurations, query events may not include duration, params, or query fields, or may not fire at all. The QueryPerformanceMonitor.onQuery() depends on these fields being present. If they're absent, slow query detection silently stops working without any error.

(Refers to lines 220-222)
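To make that failure mode visible rather than silent, the handler registration could be wrapped with a defensive check. This is a hedged sketch, not the actual `db.server.ts` code — `attachQueryMonitor`, the `QueryEvent` shape, and the warning text are illustrative, assuming only Prisma's documented `$on("query")` callback signature:

```typescript
// Hypothetical guard around $on("query") registration: if the driver adapter
// stops emitting the fields slow-query detection needs, warn once instead of
// silently doing nothing. Names here are illustrative, not the real webapp code.
type QueryEvent = { query?: string; params?: string; duration?: number };

type QueryLogClient = {
  $on(event: "query", cb: (e: QueryEvent) => void): void;
};

let warnedAboutMissingFields = false;

export function attachQueryMonitor(
  client: QueryLogClient,
  onQuery: (e: { query: string; params: string; duration: number }) => void
) {
  client.$on("query", (e) => {
    if (e.query === undefined || e.duration === undefined) {
      if (!warnedAboutMissingFields) {
        warnedAboutMissingFields = true;
        console.warn(
          "Prisma query events are missing query/duration under the driver adapter; " +
            "slow-query detection is effectively disabled"
        );
      }
      return;
    }
    onQuery({ query: e.query, params: e.params ?? "[]", duration: e.duration });
  });
}
```

A one-time warning like this would surface the regression in logs during the staging verification mentioned below, without spamming on every query.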


Contributor Author

Valid concern. According to Prisma 7 docs, $on('query', ...) events are still supported with the new client engine and driver adapters — the query, params, and duration fields should still be populated. However, this should be verified at runtime in staging before production rollout. Added to the PR's testing checklist.

Original file line number Diff line number Diff line change
@@ -7,6 +7,7 @@ import {
type PrismaTransactionClient,
type PrismaTransactionOptions,
} from "@trigger.dev/database";
import { PrismaPg } from "@prisma/adapter-pg";
import invariant from "tiny-invariant";
import { z } from "zod";
import { env } from "./env.server";
@@ -109,21 +110,22 @@ function getClient() {
const { DATABASE_URL } = process.env;
invariant(typeof DATABASE_URL === "string", "DATABASE_URL env var not set");

const databaseUrl = extendQueryParams(DATABASE_URL, {
connection_limit: env.DATABASE_CONNECTION_LIMIT.toString(),
pool_timeout: env.DATABASE_POOL_TIMEOUT.toString(),
connection_timeout: env.DATABASE_CONNECTION_TIMEOUT.toString(),
application_name: env.SERVICE_NAME,
});
const databaseUrl = new URL(DATABASE_URL);

// Set application_name as a query param on the connection string (pg understands this)
databaseUrl.searchParams.set("application_name", env.SERVICE_NAME);

console.log(`🔌 setting up prisma client to ${redactUrlSecrets(databaseUrl)}`);

const adapter = new PrismaPg({
connectionString: databaseUrl.href,
max: env.DATABASE_CONNECTION_LIMIT,
idleTimeoutMillis: env.DATABASE_POOL_TIMEOUT * 1000,
Contributor Author

🔴 DATABASE_POOL_TIMEOUT incorrectly mapped to idleTimeoutMillis instead of a connection acquisition timeout

The DATABASE_POOL_TIMEOUT env var (default: 60 seconds) was previously passed as Prisma's pool_timeout connection string parameter, which controls how long a query waits for a free connection from the pool when all connections are busy. In the new code, it's mapped to pg Pool's idleTimeoutMillis, which controls how long an idle connection sits in the pool before being disconnected — completely different semantics.

Impact on production behavior
  • Under high load: The connection acquisition timeout is lost entirely. Previously, if all connections were busy, a query would fail with P2024 after 60 seconds. Now, requests will queue indefinitely in the pg Pool waiting for a free connection, potentially causing cascading timeouts and request pile-ups.
  • Under low load: Idle connections will now be closed after 60 seconds of inactivity, which is unrelated to the original intent of the parameter.

The old Prisma pool_timeout has no direct equivalent in pg.Pool. The closest option would be a custom wrapper or using a different pool library that supports acquisition timeouts.

Prompt for agents
The DATABASE_POOL_TIMEOUT env var was previously used as Prisma's pool_timeout (connection acquisition timeout: how long to wait for a free connection when the pool is saturated). It is now incorrectly mapped to pg Pool's idleTimeoutMillis (idle connection eviction: how long idle connections persist before being closed). These serve entirely different purposes.

The same issue exists on line 246 for the replica client.

The pg Pool does not have a built-in connection acquisition timeout option. Options to fix:
1. Remove the idleTimeoutMillis mapping from DATABASE_POOL_TIMEOUT and either use the pg default (10s) or a separate env var for idle timeout, accepting that pg Pool has no pool acquisition timeout.
2. Use pg Pool's allowExitOnIdle or implement a custom wrapper that enforces an acquisition timeout.
3. Rename or split the env var to make the semantics clear (e.g. DATABASE_IDLE_TIMEOUT for idleTimeoutMillis, and document that pool acquisition timeout is no longer supported).

At minimum, the current mapping is semantically wrong and the env var name DATABASE_POOL_TIMEOUT is misleading when mapped to idleTimeoutMillis.
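One way to restore the lost acquisition timeout (option 2 above) is a small wrapper that races `pool.connect()` against a timer. This is a sketch under assumptions: `acquireWithTimeout` is a hypothetical helper, and the `Pool`/`PoolClient` interfaces below are minimal structural stand-ins for pg's types so the sketch is self-contained.

```typescript
// Minimal structural stand-ins for pg's Pool and PoolClient types; in real
// code these come from the "pg" package.
interface PoolClient {
  release(): void;
}
interface Pool {
  connect(): Promise<PoolClient>;
}

// Hypothetical wrapper restoring Prisma pool_timeout-like behavior: fail fast
// when no connection becomes free within timeoutMs, instead of queueing forever.
export async function acquireWithTimeout(pool: Pool, timeoutMs: number): Promise<PoolClient> {
  let timedOut = false;
  let timer: ReturnType<typeof setTimeout> | undefined;

  const pending = pool.connect();
  // If the timeout wins the race, release the connection when it eventually
  // arrives; otherwise it would leak out of the pool.
  pending.then(
    (client) => {
      if (timedOut) client.release();
    },
    () => {} // swallow connect errors here; the race below surfaces them
  );

  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => {
      timedOut = true;
      reject(new Error(`Timed out after ${timeoutMs}ms waiting for a pool connection`));
    }, timeoutMs);
  });

  try {
    return await Promise.race([pending, timeout]);
  } finally {
    clearTimeout(timer);
  }
}
```

Callers would use this in place of `pool.connect()` and release the client in a `finally` block, restoring a P2024-like fail-fast under saturation.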

Contributor Author

Good catch — this was a real semantic bug. Fixed in commit a59aebc.

The old Prisma pool_timeout was a connection acquisition timeout (how long to wait for a free connection when the pool is saturated). pg.Pool has no direct equivalent for acquisition timeout. I've changed idleTimeoutMillis to use DATABASE_CONNECTION_TIMEOUT instead of DATABASE_POOL_TIMEOUT, since idleTimeoutMillis (idle connection eviction) is semantically closer to a connection timeout than a pool acquisition timeout.

Note that DATABASE_POOL_TIMEOUT is no longer used — the loss of pool acquisition timeout behavior is a known trade-off of moving from Prisma's Rust engine to pg.Pool. Under high load, requests will queue in the pg Pool waiting for a free connection rather than failing after a timeout. This may actually be preferable behavior in many cases, but should be monitored in staging.

connectionTimeoutMillis: env.DATABASE_CONNECTION_TIMEOUT * 1000,
});

const client = new PrismaClient({
datasources: {
db: {
url: databaseUrl.href,
},
},
adapter,
log: [
// events
{
@@ -233,21 +235,20 @@ function getReplicaClient() {
return;
}

const replicaUrl = extendQueryParams(env.DATABASE_READ_REPLICA_URL, {
connection_limit: env.DATABASE_CONNECTION_LIMIT.toString(),
pool_timeout: env.DATABASE_POOL_TIMEOUT.toString(),
connection_timeout: env.DATABASE_CONNECTION_TIMEOUT.toString(),
application_name: env.SERVICE_NAME,
});
const replicaUrl = new URL(env.DATABASE_READ_REPLICA_URL);
replicaUrl.searchParams.set("application_name", env.SERVICE_NAME);

console.log(`🔌 setting up read replica connection to ${redactUrlSecrets(replicaUrl)}`);

const adapter = new PrismaPg({
connectionString: replicaUrl.href,
max: env.DATABASE_CONNECTION_LIMIT,
idleTimeoutMillis: env.DATABASE_POOL_TIMEOUT * 1000,
connectionTimeoutMillis: env.DATABASE_CONNECTION_TIMEOUT * 1000,
});

const replicaClient = new PrismaClient({
datasources: {
db: {
url: replicaUrl.href,
},
},
adapter,
log: [
// events
{
@@ -350,19 +351,6 @@ function getReplicaClient() {
return replicaClient;
}

function extendQueryParams(hrefOrUrl: string | URL, queryParams: Record<string, string>) {
const url = new URL(hrefOrUrl);
const query = url.searchParams;

for (const [key, val] of Object.entries(queryParams)) {
query.set(key, val);
}

url.search = query.toString();

return url;
}

function redactUrlSecrets(hrefOrUrl: string | URL) {
const url = new URL(hrefOrUrl);
url.password = "";
8 changes: 1 addition & 7 deletions apps/webapp/app/routes/metrics.ts
@@ -1,5 +1,4 @@
import { LoaderFunctionArgs } from "@remix-run/server-runtime";
import { prisma } from "~/db.server";
import { metricsRegister } from "~/metrics.server";

export async function loader({ request }: LoaderFunctionArgs) {
@@ -13,14 +12,9 @@ export async function loader({ request }: LoaderFunctionArgs) {
}
}

// We need to remove empty lines from the prisma metrics, grafana doesn't like them
const prismaMetrics = (await prisma.$metrics.prometheus()).replace(/^\s*[\r\n]/gm, "");
const coreMetrics = await metricsRegister.metrics();

// Order matters, core metrics end with `# EOF`, prisma metrics don't
const metrics = prismaMetrics + coreMetrics;

return new Response(metrics, {
return new Response(coreMetrics, {
headers: {
"Content-Type": metricsRegister.contentType,
},
211 changes: 0 additions & 211 deletions apps/webapp/app/v3/tracer.server.ts
Contributor Author

🚩 @prisma/instrumentation version not updated alongside Prisma 7 migration

The webapp's package.json still has @prisma/instrumentation: ^6.14.0 (visible in the grep output), while the database package was upgraded to Prisma 7.7.0. The PrismaInstrumentation is still used at apps/webapp/app/v3/tracer.server.ts:37 and registered when INTERNAL_OTEL_TRACE_INSTRUMENT_PRISMA_ENABLED=1. Cross-major-version compatibility between @prisma/instrumentation v6 and @prisma/client v7 with the new client engine is not guaranteed — tracing spans may silently stop being generated or cause runtime errors. This should be verified.

(Refers to line 37)


Contributor Author

Good catch — updated @prisma/instrumentation from ^6.14.0 to ^7.7.0 in the latest commit (a59aebc) to match the Prisma 7 migration.

@@ -54,9 +54,7 @@ import { LoggerSpanExporter } from "./telemetry/loggerExporter.server";
import { CompactMetricExporter } from "./telemetry/compactMetricExporter.server";
import { logger } from "~/services/logger.server";
import { flattenAttributes } from "@trigger.dev/core/v3";
import { prisma } from "~/db.server";
import { metricsRegister } from "~/metrics.server";
import type { Prisma } from "@trigger.dev/database";
import { performance } from "node:perf_hooks";

export const SEMINTATTRS_FORCE_RECORDING = "forceRecording";
@@ -330,221 +328,12 @@ function setupMetrics() {

const meter = meterProvider.getMeter("trigger.dev", "3.3.12");

configurePrismaMetrics({ meter });
configureNodejsMetrics({ meter });
Contributor Author

🚩 Prisma metrics fully removed — loss of database pool observability

The PR removes all Prisma $metrics usage: the /metrics endpoint no longer includes Prisma pool metrics (apps/webapp/app/routes/metrics.ts:15-17), and the entire configurePrismaMetrics() function in apps/webapp/app/v3/tracer.server.ts:331 is deleted along with its call. This was the only source of connection pool metrics (pool connections open/busy/idle, query wait times, query durations). With pg.Pool now managing connections, equivalent pool metrics would need to come from the pg Pool instance directly (e.g., pool.totalCount, pool.idleCount, pool.waitingCount). Existing Grafana dashboards using these metrics will silently show stale/zero data.


Contributor Author

This is intentional — the removal of $metrics was explicitly requested by the repo owner as part of this migration. The plan is to just drop Prisma metrics for now rather than rebuild them using pg.Pool stats. Grafana dashboards that depend on prisma_* metrics will need to be updated separately if pool observability is needed in the future.
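If pool observability is wanted later, pg.Pool already exposes the raw counters directly. A hedged sketch — the metric names below mirror the deleted Prisma gauges but are illustrative, not an agreed convention, and `PgPoolStats` is a minimal structural stand-in for the Pool instance:

```typescript
// Minimal structural stand-in for the public counters on a pg.Pool instance;
// in real code this would be the Pool created for the PrismaPg adapter.
interface PgPoolStats {
  totalCount: number;   // connections currently open
  idleCount: number;    // open connections sitting idle
  waitingCount: number; // callers queued waiting for a free connection
}

// Hypothetical replacement for the deleted Prisma pool gauges: snapshot the
// pg.Pool counters under names mirroring the old db.pool.* metrics.
export function poolGauges(pool: PgPoolStats) {
  return {
    "db.pool.connections.total": pool.totalCount,
    "db.pool.connections.busy": pool.totalCount - pool.idleCount,
    "db.pool.connections.free": pool.idleCount,
    "db.client.queries.wait": pool.waitingCount,
  };
}
```

Each entry could feed an OTel observable-gauge callback the way the deleted configurePrismaMetrics() did: register the gauges once and read a snapshot like this per scrape.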

configureHostMetrics({ meterProvider });

return meter;
}

function configurePrismaMetrics({ meter }: { meter: Meter }) {
// Counters
const queriesTotal = meter.createObservableCounter("db.client.queries.total", {
description: "Total number of Prisma Client queries executed",
unit: "queries",
});
const datasourceQueriesTotal = meter.createObservableCounter("db.datasource.queries.total", {
description: "Total number of datasource queries executed",
unit: "queries",
});
const connectionsOpenedTotal = meter.createObservableCounter("db.pool.connections.opened.total", {
description: "Total number of pool connections opened",
unit: "connections",
});
const connectionsClosedTotal = meter.createObservableCounter("db.pool.connections.closed.total", {
description: "Total number of pool connections closed",
unit: "connections",
});

// Gauges
const queriesActive = meter.createObservableGauge("db.client.queries.active", {
description: "Number of currently active Prisma Client queries",
unit: "queries",
});
const queriesWait = meter.createObservableGauge("db.client.queries.wait", {
description: "Number of queries currently waiting for a connection",
unit: "queries",
});
const totalGauge = meter.createObservableGauge("db.pool.connections.total", {
description: "Open Prisma-pool connections",
unit: "connections",
});
const busyGauge = meter.createObservableGauge("db.pool.connections.busy", {
description: "Connections currently executing queries",
unit: "connections",
});
const freeGauge = meter.createObservableGauge("db.pool.connections.free", {
description: "Idle (free) connections in the pool",
unit: "connections",
});

// Histogram statistics as gauges
const queriesWaitTimeCount = meter.createObservableGauge("db.client.queries.wait_time.count", {
description: "Number of wait time observations",
unit: "observations",
});
const queriesWaitTimeSum = meter.createObservableGauge("db.client.queries.wait_time.sum", {
description: "Total wait time across all observations",
unit: "ms",
});
const queriesWaitTimeMean = meter.createObservableGauge("db.client.queries.wait_time.mean", {
description: "Average wait time for a connection",
unit: "ms",
});

const queriesDurationCount = meter.createObservableGauge("db.client.queries.duration.count", {
description: "Number of query duration observations",
unit: "observations",
});
const queriesDurationSum = meter.createObservableGauge("db.client.queries.duration.sum", {
description: "Total query duration across all observations",
unit: "ms",
});
const queriesDurationMean = meter.createObservableGauge("db.client.queries.duration.mean", {
description: "Average duration of Prisma Client queries",
unit: "ms",
});

const datasourceQueriesDurationCount = meter.createObservableGauge(
"db.datasource.queries.duration.count",
{
description: "Number of datasource query duration observations",
unit: "observations",
}
);
const datasourceQueriesDurationSum = meter.createObservableGauge(
"db.datasource.queries.duration.sum",
{
description: "Total datasource query duration across all observations",
unit: "ms",
}
);
const datasourceQueriesDurationMean = meter.createObservableGauge(
"db.datasource.queries.duration.mean",
{
description: "Average duration of datasource queries",
unit: "ms",
}
);

// Single helper so we hit Prisma only once per scrape ---------------------
async function readPrismaMetrics() {
const metrics = await prisma.$metrics.json();

// Extract counter values
const counters: Record<string, number> = {};
for (const counter of metrics.counters) {
counters[counter.key] = counter.value;
}

// Extract gauge values
const gauges: Record<string, number> = {};
for (const gauge of metrics.gauges) {
gauges[gauge.key] = gauge.value;
}

// Extract histogram values
const histograms: Record<string, Prisma.MetricHistogram> = {};
for (const histogram of metrics.histograms) {
histograms[histogram.key] = histogram.value;
}

return {
counters: {
queriesTotal: counters["prisma_client_queries_total"] ?? 0,
datasourceQueriesTotal: counters["prisma_datasource_queries_total"] ?? 0,
connectionsOpenedTotal: counters["prisma_pool_connections_opened_total"] ?? 0,
connectionsClosedTotal: counters["prisma_pool_connections_closed_total"] ?? 0,
},
gauges: {
queriesActive: gauges["prisma_client_queries_active"] ?? 0,
queriesWait: gauges["prisma_client_queries_wait"] ?? 0,
connectionsOpen: gauges["prisma_pool_connections_open"] ?? 0,
connectionsBusy: gauges["prisma_pool_connections_busy"] ?? 0,
connectionsIdle: gauges["prisma_pool_connections_idle"] ?? 0,
},
histograms: {
queriesWait: histograms["prisma_client_queries_wait_histogram_ms"],
queriesDuration: histograms["prisma_client_queries_duration_histogram_ms"],
datasourceQueriesDuration: histograms["prisma_datasource_queries_duration_histogram_ms"],
},
};
}

meter.addBatchObservableCallback(
async (res) => {
const { counters, gauges, histograms } = await readPrismaMetrics();

// Observe counters
res.observe(queriesTotal, counters.queriesTotal);
res.observe(datasourceQueriesTotal, counters.datasourceQueriesTotal);
res.observe(connectionsOpenedTotal, counters.connectionsOpenedTotal);
res.observe(connectionsClosedTotal, counters.connectionsClosedTotal);

// Observe gauges
res.observe(queriesActive, gauges.queriesActive);
res.observe(queriesWait, gauges.queriesWait);
res.observe(totalGauge, gauges.connectionsOpen);
res.observe(busyGauge, gauges.connectionsBusy);
res.observe(freeGauge, gauges.connectionsIdle);

// Observe histogram statistics as gauges
if (histograms.queriesWait) {
res.observe(queriesWaitTimeCount, histograms.queriesWait.count);
res.observe(queriesWaitTimeSum, histograms.queriesWait.sum);
res.observe(
queriesWaitTimeMean,
histograms.queriesWait.count > 0
? histograms.queriesWait.sum / histograms.queriesWait.count
: 0
);
}

if (histograms.queriesDuration) {
res.observe(queriesDurationCount, histograms.queriesDuration.count);
res.observe(queriesDurationSum, histograms.queriesDuration.sum);
res.observe(
queriesDurationMean,
histograms.queriesDuration.count > 0
? histograms.queriesDuration.sum / histograms.queriesDuration.count
: 0
);
}

if (histograms.datasourceQueriesDuration) {
res.observe(datasourceQueriesDurationCount, histograms.datasourceQueriesDuration.count);
res.observe(datasourceQueriesDurationSum, histograms.datasourceQueriesDuration.sum);
res.observe(
datasourceQueriesDurationMean,
histograms.datasourceQueriesDuration.count > 0
? histograms.datasourceQueriesDuration.sum / histograms.datasourceQueriesDuration.count
: 0
);
}
},
[
queriesTotal,
datasourceQueriesTotal,
connectionsOpenedTotal,
connectionsClosedTotal,
queriesActive,
queriesWait,
totalGauge,
busyGauge,
freeGauge,
queriesWaitTimeCount,
queriesWaitTimeSum,
queriesWaitTimeMean,
queriesDurationCount,
queriesDurationSum,
queriesDurationMean,
datasourceQueriesDurationCount,
datasourceQueriesDurationSum,
datasourceQueriesDurationMean,
]
);
}

function configureNodejsMetrics({ meter }: { meter: Meter }) {
if (!env.INTERNAL_OTEL_NODEJS_METRICS_ENABLED) {
return;
10 changes: 3 additions & 7 deletions apps/webapp/test/runsReplicationBenchmark.producer.ts
@@ -5,6 +5,7 @@
*/

import { PrismaClient } from "@trigger.dev/database";
import { PrismaPg } from "@prisma/adapter-pg";
import { performance } from "node:perf_hooks";

interface ProducerConfig {
@@ -91,13 +92,8 @@ function generateError()
}

async function runProducer(config: ProducerConfig) {
const prisma = new PrismaClient({
datasources: {
db: {
url: config.postgresUrl,
},
},
});
const adapter = new PrismaPg(config.postgresUrl);
const prisma = new PrismaClient({ adapter });

try {
console.log(
7 changes: 4 additions & 3 deletions internal-packages/database/package.json
@@ -5,12 +5,13 @@
"main": "./dist/index.js",
"types": "./dist/index.d.ts",
"dependencies": {
"@prisma/client": "6.14.0",
"@prisma/adapter-pg": "7.7.0",
"@prisma/client": "7.7.0",
"decimal.js": "^10.6.0"
},
"devDependencies": {
"@types/decimal.js": "^7.4.3",
"prisma": "6.14.0",
"prisma": "7.7.0",
"rimraf": "6.0.1"
},
"scripts": {
@@ -25,4 +26,4 @@
"build": "pnpm run clean && tsc --noEmit false --outDir dist --declaration",
"dev": "tsc --noEmit false --outDir dist --declaration --watch"
}
}
}