Log Specialist - PST
Los Angeles
Thursday, 23 April 2026
Domain Expertise: Act as the domain subject matter expert (SME) for Logs, staying ahead of industry trends such as OpenTelemetry (OTel), log pipelines (Cribl/BindPlane), and cloud-native logging (CloudWatch/Stackdriver). Articulate the architectural superiority of Dynatrace Grail, specifically how its schema-on-read and index-less storage solve the "cardinality explosion" of modern logs.

Competitive Execution: Partner with Account Executives and Sales Engineers (SEs) to identify and execute displacement plays against legacy incumbents. Lead high-stakes Proofs of Concept (POCs) that drive Logs consumption and prove ROI by reducing data-ingest costs while increasing troubleshooting speed. Build "Economic Value" models that show customers how to optimize their SaaS consumption and maximize their Dynatrace investment.

Solution Architecture: Architect resilient, vendor-neutral log-ingestion frameworks using Fluentd, Logstash, OpenTelemetry Collector pipelines, and similar tooling. Help customers navigate complex log-routing scenarios, ensuring high-value data is prioritized for analytics while low-value data is archived cost-effectively.

Consumption Advocacy: Identify "consumption bottlenecks" within existing accounts and proactively provide technical guidance to unlock more log data volume and user adoption. Conduct "Best Practice" workshops focused on log-based alerting, DQL (Dynatrace Query Language) proficiency, and dashboarding.

What will help you succeed

Minimum Requirements:
- Education: Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
- … years in a domain-specialist, pre-sales, professional services, or SRE role, with at least 3 years focused specifically on Log Management or Big Data analytics.

Preferred Requirements:
- Technical Depth: Advanced proficiency in query languages (e.g., Splunk SPL, Kusto Query Language (KQL), SQL, or Lucene).
- Deep understanding of log-ingestion pipelines and "telemetry pipelines" (Cribl, BindPlane, Vector).
- Hands-on experience with Kubernetes, OpenShift, and serverless logging patterns.
- Cloud & DevOps: Strong knowledge of AWS, Azure, or GCP logging ecosystems and automation tools such as Terraform or Ansible.
- Business Acumen: Ability to translate "bits and bytes" into "dollars and cents", explaining how log management impacts MTTR and operational overhead.
- Hands-on exposure to modern observability pipelines, including the Cribl ecosystem and the OpenTelemetry logging specification.
- Experience with scripting (Python, Go, or Bash) for log transformation and data cleanup.

Why you will love being a Dynatracer

Dynatrace is a leader in unified observability and security. We provide a culture of excellence with competitive compensation packages designed to recognize and reward performance. Our employees work with the largest cloud providers, including AWS, Microsoft, and Google Cloud, and other leading partners worldwide to create strategic alliances. The Dynatrace platform uses cutting-edge technologies, including our own Davis hypermodal AI, to help our customers modernize and automate cloud operations, deliver software faster and more securely, and enable flawless digital experiences. Over 50% of the Fortune 100 companies are current customers of Dynatrace.
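To illustrate the kind of scripting-for-log-transformation work the requirements above describe, here is a minimal Python sketch. The field names, the email-redaction rule, and the sample record are hypothetical examples chosen for illustration; they are not part of the role description or any Dynatrace product API.

```python
import json
import re

# Hypothetical redaction rule: mask email addresses found in string fields.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def clean_record(line: str):
    """Parse one JSON log line, drop empty fields, and redact email addresses.

    Returns the cleaned dict, or None if the line is not valid JSON
    (in a real pipeline, unparseable lines would go to a dead-letter route).
    """
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        return None
    cleaned = {}
    for key, value in record.items():
        if value in (None, "", [], {}):
            continue  # drop empty fields to reduce ingest volume
        if isinstance(value, str):
            value = EMAIL_RE.sub("[REDACTED]", value)
        cleaned[key] = value
    return cleaned


if __name__ == "__main__":
    raw = '{"ts": "2026-04-23T10:00:00Z", "msg": "login by alice@example.com", "trace": ""}'
    print(json.dumps(clean_record(raw)))
```

A transformation like this (drop low-value fields, redact sensitive values) is the same shape of work that telemetry pipelines such as Cribl or the OpenTelemetry Collector perform declaratively; a scripted version is useful for one-off cleanup and for prototyping routing rules before committing them to pipeline config.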