Log Sentry¶
LogSentry sends task errors to Sentry for real-time error monitoring. Status changes are recorded as breadcrumbs for context.
Use this when your workflows run in production (Lambda, Cloud Run, servers) and you need error tracking beyond local logs.
**Note:** Requires `pip install dotflow[sentry]`.
Setup¶
```bash
pip install dotflow[sentry]
```
Parameters¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| `dsn` | `str` | — | Sentry DSN for the project (required) |
| `environment` | `str \| None` | `None` | Environment tag sent to Sentry |
| `traces_sample_rate` | `float` | `0.0` | Sample rate for performance traces (0.0 to 1.0) |
Basic example¶
```python
from dotflow import Config, DotFlow, action
from dotflow.providers import LogSentry


@action
def extract():
    return {"data": "fetched"}


@action
def transform(previous_context):
    return {"result": previous_context.storage}


def main():
    config = Config(
        log=LogSentry(
            dsn="https://xxx@sentry.io/123",
            environment="production",
        ),
    )
    workflow = DotFlow(config=config)
    workflow.task.add(step=extract)
    workflow.task.add(step=transform)
    workflow.start()
    return workflow


if __name__ == "__main__":
    main()
```
What gets captured¶
| Event | Sentry action |
|---|---|
| Task status changes (info) | Breadcrumb |
| Task retries (warning) | Breadcrumb |
| Task failures (error) | `capture_message` with extras |
| Debug events | Ignored |
Each error capture includes:
- `workflow_id` — which workflow failed
- `task_id` — which task failed
- `exception` — exception type
- `attempt` — retry attempt number
- `traceback` — full traceback
Combining with other log providers¶
Sentry is for error monitoring, not general logging. Pair it with `LogDefault` or `LogOpenTelemetry` by choosing one log provider per workflow:
```python
from dotflow import Config
from dotflow.providers import LogDefault, LogSentry

# Production: errors go to Sentry
prod_config = Config(
    log=LogSentry(dsn="https://xxx@sentry.io/123", environment="production"),
)

# Development: logs go to console
dev_config = Config(
    log=LogDefault(output="console", format="json"),
)
```